For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

  • hansolo@lemm.ee · 92 points · 2 days ago

    It’s just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.

    Does anyone read more than the headline? OP even said this in the summary.

    • null_dot@lemmy.dbzer0.com · 7 points · 1 day ago

      It depends on what purpose that paperwork is intended for.

      If the regulatory paperwork it’s managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.

      Learning and understanding is hard work. An LLM can’t do that for you.

      Sure, it can summarise instructions for you and show you what’s most pertinent in a given instance, but is that the same as someone who knows what to do because they’ve been wading around in the logs and regs for the last decade?

      It seems like, whether you’re using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.

      As always, there’s a risk that a user just won’t identify a problem in the information produced.

      I don’t think this means LLMs should not be used in high-risk roles; it just demonstrates the importance of robust policies surrounding their use.

    • cyrano@lemmy.dbzer0.com (OP) · 28 points · 2 days ago

      I agree with you, but you can see the slippery slope: the LLM returning incorrect or hallucinated data, just as happens in the public space. That might seem trivial for documentation, until you realize the documentation could be critical to some processes.

      • hansolo@lemm.ee · 12 points · 2 days ago

        If you’ve never used a custom LLM or wrapper for regular ol’ ChatGPT, a lot of what it can hallucinate gets stripped out and the entire corpus of data it’s trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
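
        For what it’s worth, here’s a rough sketch of how that kind of grounded wrapper is usually put together: retrieve passages from your own corpus, then constrain the model to answer only from them, citing document IDs. The corpus, document IDs, and toy keyword retriever below are purely illustrative, not anything from the actual Diablo Canyon system.

        ```python
        # Minimal sketch of a "grounded" document-QA wrapper: the model only sees
        # passages retrieved from your own corpus, which is what keeps most of the
        # hallucination out. The retriever here is a toy keyword scorer; a real
        # deployment would use embeddings. All names below are illustrative.

        from collections import Counter

        CORPUS = {
            "NRC-2019-047": "Surveillance requirements for emergency diesel generators ...",
            "PROC-114":     "Records retention schedule for maintenance work orders ...",
        }

        def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
            """Rank documents by crude keyword overlap with the query."""
            q_terms = Counter(query.lower().split())
            scored = []
            for doc_id, text in CORPUS.items():
                overlap = sum(q_terms[t] for t in text.lower().split() if t in q_terms)
                scored.append((overlap, doc_id, text))
            scored.sort(reverse=True)
            return [(doc_id, text) for _, doc_id, text in scored[:k]]

        def build_prompt(question: str) -> str:
            """Constrain the model to retrieved passages and force citations."""
            passages = retrieve(question)
            context = "\n\n".join(f"[{doc_id}]\n{text}" for doc_id, text in passages)
            return (
                "Answer using ONLY the passages below. Cite the document ID for "
                "every claim. If the answer is not in the passages, say so.\n\n"
                f"{context}\n\nQuestion: {question}"
            )

        # The prompt would then go to whatever hosted model the plant licenses;
        # that call is left out here because the API details aren't public.
        print(build_prompt("What is the retention schedule for maintenance work orders?"))
        ```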

        • cyrano@lemmy.dbzer0.com (OP) · 8 points · 2 days ago

          I do, and even contained ones do return hallucinations or incorrect data. So it depends on the application you use it for. For a quick summary or a data search, why not? But for some operational process, it might be problematic.

    • technocrit@lemmy.dbzer0.com · 1 point · 1 day ago

      Don’t blame the people who just read the headline.

      Blame the people who constantly write misleading headlines.

      There is literally no “artificial intelligence” here either.