For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.
It’s just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.
Does anyone read more than the headline? OP even said this in the summary.
It depends what purpose that paperwork is intended for.
If the regulatory paperwork it’s managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.
Learning and understanding is hard work. An LLM can’t do that for you.
Sure, it can summarise instructions and show you what's most pertinent in a given instance, but is that the same as someone who knows what to do because they've been wading around in the logs and regs for the last decade?
It seems like, whether you're using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool but requires high-level knowledge on the part of the user to review the output.
As always, there’s a risk that a user just won’t identify a problem in the information produced.
I don't think this means LLMs shouldn't be used in high-risk roles; it just demonstrates the importance of robust policies surrounding their use.
I agree with you, but you can see the slippery slope: the LLM returning incorrect or hallucinated data the same way it's happening in the public space. It might seem trivial for documentation, until you realize the documentation could be critical for some processes.
If you've never used a custom LLM or a wrapper around regular ol' ChatGPT: a lot of what it can hallucinate gets stripped out, and the entire corpus of data it works from is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
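For anyone wondering what a "custom LLM" setup actually does, here's a minimal sketch of the usual retrieval-grounded approach, stdlib only. All document IDs and contents are made up for illustration; a real system would use proper embeddings and an actual model call instead of the toy scoring and the final print.

```python
# Hypothetical sketch of a retrieval-grounded ("custom LLM") pipeline.
# Instead of answering from its general training data, the model is handed
# passages retrieved from your own corpus and told to stick to them.

import math
from collections import Counter

# Stand-in corpus: imagine chunks of NRC regs and plant records (contents invented).
CORPUS = {
    "10cfr50-app-b": "Quality assurance criteria for nuclear power plants...",
    "surv-log-2019-041": "Monthly surveillance test results for emergency diesel generator 1B...",
    "op-112": "Procedure for placing the component cooling water pump in service...",
}

def tokens(text: str) -> Counter:
    """Crude bag-of-words tokenizer; real systems use embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k corpus chunks most similar to the query."""
    q = tokens(query)
    ranked = sorted(CORPUS.items(), key=lambda kv: cosine(q, tokens(kv[1])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    # This instruction is what "strips out" most hallucination: answer only from
    # the supplied passages, cite the document ID, refuse if the answer isn't there.
    return ("Answer using ONLY the passages below. Cite document IDs. "
            "If the passages don't contain the answer, say you don't know.\n\n"
            f"{context}\n\nQuestion: {query}")

print(build_prompt("What were the diesel generator surveillance results?"))
```

That constraint narrows the failure modes a lot, though the model can still misread or misquote a passage, which is why the output still needs a knowledgeable reviewer.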
I have, and even a contained one can return hallucinated or incorrect data. So it depends on the application you use it for. For a quick summary or data search, why not? But for some operational process, that might be problematic.
NOOOOOO ITS DOING NUCLEAR PHYSICS!!!111
It’s eating the rods, it’s eating the ions!
I unfortunately don't get it, can someone explain?
This
Oh shit, I had already forgotten about this amid so many other scandals. The guy who said this is running the whole of the US like a fucking medieval kingdom; another reality slap in the face. At the time I was like, "surely no one in their right mind would vote for this scammer".
Don’t blame the people who just read the headline.
Blame the people who constantly write misleading headlines.
There is literally no “artificial intelligence” here either.