A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.

  • MagicShel@lemmy.zip · 19 days ago

    It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is more likely to assume that person is in the same category as the others, especially if one or more of the names has any association (real or fictional) with murder.
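
    A minimal sketch of that priming effect, assuming the official openai Python client; the model name, the prompts, and “John Q. Example” are illustrative placeholders, not anything from the article:

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Prime the conversation with family murders, then ask about an unknown name.
        messages = [
            {"role": "user", "content": "List some infamous cases of parents murdering their children."},
            {"role": "assistant", "content": "Some widely reported cases include ..."},
            {"role": "user", "content": "What do you know about John Q. Example?"},  # hypothetical name
        ]

        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        # With the context skewed toward family murders, the model is statistically
        # more likely to continue that pattern than to admit it doesn't know the name.
        print(response.choices[0].message.content)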

    • bluGill@fedia.io · 19 days ago

      I don’t care why. That is still libel, and it is illegal for good reason. If you can’t stop this in all cases, then your AI is, and should be, illegal.

      • tfm@europe.pub · 19 days ago

        None of the moneybags will listen, unfortunately. But I’m with you. The rollout of AI was extremely irresponsible, done just to make it profitable as quickly as possible.

        • Flagstaff@programming.dev · 19 days ago

          To be fair, based on observations over these past few years, it doesn’t appear that waiting longer before release would have significantly improved Autocomplete Idiocy in any way.

      • MagicShel@lemmy.zip · 19 days ago

        Seems to me libel would require AI to have credibility, which it does not.

        It’s a tool. Like most useful tools it can do harmful things. We know almost nothing about the provenance of this output. It could have been poisoned either accidentally or deliberately.

        But above all, the problem is ignorant people believing the output of AI is truth. It’s pretty good at some things, but the more esoteric the knowledge, the less reliable it is. It’s best to treat AI as a storyteller: yeah, there are a lot of facts in there, but when they don’t serve the story they can be embellished. I don’t see the harm in just acknowledging that and moving on.

        • kibiz0r@midwest.social · 19 days ago

          Meanwhile, AI vendors:

          “AI will soon be the only way we access information and make decisions!”

        • deur@feddit.nl · 19 days ago

          I’m not a lawyer, but the most conclusive missing piece of what we commonly understand to be libel is that the information has to be published.

          • MagicShel@lemmy.zip · 19 days ago

            I thought about that.

            The definition of publish could get a little murky here. Actually, the best defense here is that, so far as we know, this was not disclosed to a third party by ChatGPT (that’s pretty flimsy, though, because it likely has no idea who it is talking to).

            I acknowledge there is some level of nuance here, which is why I come back to this: no one should have any expectation that AI will be factual. The disclaimers are everywhere. There is really no excuse for anyone to treat the output as gospel.

      • Ech@lemm.ee · 19 days ago

        Except it’s not libel. It’s a one-time string of text generated exclusively for him. Literally no one would have known what it said if the guy hadn’t gotten the exact thing he wants “deleted” published online for everyone to see. Now it’ll be linked to his name forever, but the LLM didn’t do that.

        • AwesomeLowlander@sh.itjust.works · 18 days ago

          It’s been shown repeatedly that putting the same input into a generative AI will often produce the same output, or an extremely similar one. So he has grounds to be concerned that anybody else asking the LLM about him would get the same libelous result.
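
          A minimal sketch of that repeatability, assuming the official openai Python client; the model name and seed are illustrative, and the determinism is only best-effort:

              from openai import OpenAI

              client = OpenAI()
              prompt = [{"role": "user", "content": "Who is Arve Hjalmar Holmen?"}]

              outputs = set()
              for _ in range(3):
                  r = client.chat.completions.create(
                      model="gpt-4o-mini", messages=prompt, temperature=0, seed=42
                  )
                  outputs.add(r.choices[0].message.content)

              # Low-temperature decoding tends to collapse to one (or very few) distinct
              # strings, which is why the same answer can plausibly recur for other users.
              print(len(outputs), "distinct completion(s)")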

      • JohnEdwa@sopuli.xyz · 19 days ago (edited)

        Libel requires the claims to be published or broadcast, so this isn’t libel. A predictive text algorithm strung some random words together, and the guy got offended.
        It’s like suing because your phone keyboard autosuggested “is a murderer” as the next words after you typed your name. Btw, I tried it a few times for lulz and managed to get it to write out “bluGill and the kids are going to get it on”, so I guess you can sue Google now?

    • surewhynotlem@lemmy.world · 19 days ago

      I have this gun machine that shoots in all directions randomly. I can’t predict it, so I can’t stop it from shooting you. So sorry. It’s uncontrollable.

      • MagicShel@lemmy.zip · 19 days ago

        Yeah, but I can just ignore the bullets because they’re Nerf. And I have my own Nerf guns as well.

        I mean, at some point any analogy fails, but AI is nothing like a gun.

        • surewhynotlem@lemmy.world · 19 days ago

          AI is a thing people choose to host and are responsible for the outcomes of its use. The internal workings and limitations of the machine do not make the owners less responsible.

          • MagicShel@lemmy.zip · 19 days ago

            Okay, so I agree with none of that, but you’re saying as long as we host our own AI or rent our own processing from the cloud we’re in the clear? I want to make sure that’s your fundamental argument because that leaves all open models in the clear and frankly I could be down with that. I like AI but I’m not a huge fan of AI companies.

            • SendPrudes@lemm.ee · 19 days ago

              So insurance companies use AI to screen claims.

              It denies a claim for a life-saving intervention, and the person dies. Who is responsible for that? Historically it would be the insurance company, and the worker. Would it be them, or the AI company?

              Psych screening tools were using it to pre-screen calls.

              The AI tells the person to kill themselves; who is at fault if they do it? A psych screener would lose their job and their license. What and who is impacted if AI does it?

              A QA check on a car or product is passed by AI but should have failed.

              Thousands die before the recall. Who is at fault for it? The company leveraging AI, or the AI vendor itself?

            • surewhynotlem@lemmy.world · 19 days ago

              I’m not sure you get my point.

              If I’m providing a service, and that service is creating and publishing disparaging information about you, you should have recourse against me. I don’t get off the hook just because of the way I’ve set up the technology.

              • MagicShel@lemmy.zip · 19 days ago

                Right. Well, if your service is a well-known bullshitter, I wouldn’t give a fuck. That being said, I’d be happy to agree that AI should all be open source and self-hosted. I run local AI myself, but the quality isn’t there. I’d have to rent time on a big-boy machine if the big players went away. That would be a little inconvenient, because I’d want to have a whole bunch of requests queued up to use maximum power over minimum time, and that’s not really how anyone uses AI.

                Maybe I could share that rental with other AI enthusiasts… hmmm.

        • Petter1@lemm.ee · 19 days ago

          Yeah, I’m mind-blown how, after 3 years, people still don’t know how to use LLMs effectively in the use cases where they bring value (by reducing work time):

          • start a second chat and ask the question differently to verify
          • if you use ChatGPT’s reasoning feature, read the reasoning output as well!
          • it’s best for verifiable things, like code, that you can run or check yourself (see the sketch after this list)
          • if you use it for research, only trust the info if it used web search and you have also read the web pages it summarized, or verify with a traditional web search based on the output
          • it is great for manipulating text until it sounds as desired (if you are not good at wording stuff anyway)
          • plan what steps to do next in a project (like “I want to do xxx, have y, and need it to be z; make me a list of todos”)
          • and of course it is great for generating simple Python scripts fast (I often use it as my Python-writing slave)
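
          For the “verifiable things” point, a minimal sketch: treat a generated function as untrusted until a quick spot check passes (the function body here just stands in for pasted LLM output):

              import re

              def llm_generated_slug(title: str) -> str:
                  # Pretend this body was pasted from a chat reply.
                  return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

              # Cheap, immediate verification: run it on known cases before trusting it.
              assert llm_generated_slug("Hello, World!") == "hello-world"
              assert llm_generated_slug("  spaces   and---dashes ") == "spaces-and-dashes"
              print("generated code passed the spot checks")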

          Using AI like this has helped me enormously in work and life. For example, I learned a lot of C and C++, how Linux kernel modules work, and how PO/POT files work; it helped me with translations, introduced me to music production, helped me set up appFlowy, and sorted out general Windows/Linux issues.

          • michaelmrose@lemmy.world · 19 days ago

            Because it makes up things that are 99% correct, and in some areas that 99% plus verification and expansion can be superior, time-wise, to the 100% manual route.

          • michaelmrose@lemmy.world · 19 days ago

            It would be more accurate to say that, rather than knowing anything at all, they have a model of the statistical relationship between a series of tokens and subsequent tokens: which words are apt to follow other words. Because the training set contains many true things, the words produced in response to queries often contain true statements, and almost always contain statements that LOOK like true statements.

            Since it has no inherent model of the world to draw on, only such statistical relationships, you should check anything important.
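
            A toy illustration of “statistics, not knowledge”: a bigram model picks each next word purely by frequency, so its output can look true without being true (the corpus is made up):

                import random
                from collections import Counter, defaultdict

                corpus = (
                    "the man was convicted of fraud . "
                    "the man was convicted of murder . "
                    "the man was acquitted of murder . "
                ).split()

                # "Training": count which word follows which.
                following = defaultdict(Counter)
                for prev, nxt in zip(corpus, corpus[1:]):
                    following[prev][nxt] += 1

                # Generation: each step samples from what usually comes next, nothing more.
                word, out = "the", ["the"]
                for _ in range(6):
                    word = random.choice(list(following[word].elements()))
                    out.append(word)
                print(" ".join(out))  # fluent and plausible-looking, never fact-checked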

            • pyre@lemmy.world · 19 days ago

              you say more accurate but all I see is a very roundabout way of saying fucking wrong all the goddamn time

                • pyre@lemmy.world · 19 days ago

                  maybe you should tell that to the companies that shove it in every crevice of every website and app. why is it on search results? why is it summarizing emails? why is it literally doing anything? it’s useless. actually it’s less than useless. it’s misleading and harmful. and the companies should be held liable for it.

    • FiskFisk33@startrek.website · 19 days ago (edited)

      The fact that you chose to make your data storage unreadable doesn’t relieve you of the responsibilities inherent to storing the data.

      Throwing away my car key won’t protect me from paying parking tickets I accrue while being physically unable to move my car.

      • DoPeopleLookHere@sh.itjust.works · 19 days ago

        It’s not unreadable; it doesn’t exist.

        The responses are just statistically whatever sounds vaguely like what you want to hear.

        They can erase the chat responses, but that won’t stop it from generating them again.

        Generative AI doesn’t start with facts and work from there. It’s just statistically what you want to hear.

        • FiskFisk33@startrek.website · 19 days ago (edited)

          It’s not unreadable; it doesn’t exist.

          Then what do you think trained AI models are?

          The AI model is trained on data and encodes unknown parts of that data in its weights.

          This is data storage. Unmanageable, almost unknowable data storage, but still data storage.

          If it didn’t store data, it couldn’t learn from its training.
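
          A toy illustration of that storage claim: after “training” (here, just counting word pairs), greedy decoding regurgitates the training sentence verbatim; the sentence itself is hypothetical:

              from collections import Counter, defaultdict

              training_text = "Arve lives in Norway with his three children".split()  # hypothetical sentence

              weights = defaultdict(Counter)             # stands in for model parameters
              for prev, nxt in zip(training_text, training_text[1:]):
                  weights[prev][nxt] += 1                # "training" = adjusting these numbers

              word, out = training_text[0], [training_text[0]]
              while weights[word]:
                  word = weights[word].most_common(1)[0][0]  # greedy decoding
                  out.append(word)

              print(" ".join(out))  # reproduces the training sentence: the data was stored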

          • DoPeopleLookHere@sh.itjust.works · 19 days ago

            You’re still placing more intent and facts into those processes than actually exist.

            You can’t even get it to count how many letter p’s there are in the word “apple”. At least, not last time I tried.

            That storage you’re talking about isn’t facts. It’s how sentences are structured and what they “mean”.

            As for the output “meaning”, it’s still just guessing what you want to hear. No facts involved.
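
            The letter-counting failure has a mechanical explanation worth sketching: the model sees token IDs, not characters. A minimal check, assuming the tiktoken package is installed:

                import tiktoken

                enc = tiktoken.get_encoding("cl100k_base")
                tokens = enc.encode("apple")
                print(tokens)                             # numeric token IDs, not letters
                print([enc.decode([t]) for t in tokens])  # often just ["apple"], one chunk

                # If "apple" arrives as a single token, the model never "sees" the two p's
                # as separate symbols, so counting letters is guesswork on its side.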

            • FiskFisk33@startrek.website · 19 days ago

              You’re still placing more intent and facts into those processes than actually exist.

              No? When they train AIs on data, they lose control of that data. If the data is sensitive, they aren’t being responsible.

              GPT models are, as you say, dumb statistical models; I agree. But encoded in their weights are ghost images of their training data. The model being dumb is not sufficient to make the data storage itself defensible, in my opinion.

              • DoPeopleLookHere@sh.itjust.works · 19 days ago

                Sure, but are you suggesting they somehow encoded, falsely, that he was a murderer?

                Because that’s very unlikely.

                It fabricated this from nowhere, so there’s nothing to delete, because it’s just a response to a prompt.

                • FiskFisk33@startrek.website · 19 days ago (edited)

                  No, I’m not; that part is absolutely hallucinated. Where the problem comes in is that it then output correct personal information about him and his children, which to me is a clear violation of the GDPR.

                  but it also mixed “clearly identifiable personal data”—such as the actual number and gender of Holmen’s children and the name of his hometown—with the “fake information,”

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 19 days ago

      From the GDPR’s standpoint, I wonder if it’s still personal information if it is made-up bullshit. The thing is, this could have weird outcomes. For example, by the letter of the law, OpenAI might be liable for giving the same answer to the same query again.

      • FiskFisk33@startrek.website · 19 days ago

        then again

        but it also mixed “clearly identifiable personal data”—such as the actual number and gender of Holmen’s children and the name of his hometown—with the “fake information,”

        The made-up bullshit aside, this should be a quite clear indicator of an actual GDPR breach.

        • Petter1@lemm.ee · 19 days ago

          Maybe he has an Insta profile with the names of his kids in his bio.

          How would that be a GDPR breach?

          • FiskFisk33@startrek.website · 19 days ago (edited)

            Maybe he has an Insta profile with the names of his kids in his bio.

            Irrelevant. The data being public does not make it up for grabs.

            ‘Personal data’ means any information relating to an identified or identifiable natural person (‘data subject’);

            They store his personal data without his permission.

            also

            Information that is inaccurately attributed to a specific individual, be it factually incorrect or information that in reality is related to another individual, is still considered personal data as it relates to that specific individual. If data are inaccurate to the point that no individual can be identified, then the information is not personal data.

            Storing it badly does not make them exempt.

            • Petter1@lemm.ee · 19 days ago (edited)

              If you run a chatbot with integrated web search, it grabs that info the way a web crawler does; that does not mean the data is really in the “knowledge/statistics” of the AI itself.

              Nobody stores the information if it works like this; it is only used temporarily to generate that specific output.

              (You cannot use ChatGPT without web search on the chatgpt domain; you can only avoid it if you self-host, or use a service like DDG.)

              • FiskFisk33@startrek.website · 19 days ago

                That’s a good point; that muddies the waters a bit. It makes it hard to say whether it’s spouting info from the web or data from the model.

                I can’t comment on the actual legality in this case, but I feel that handling personal data like this, even data from the open web, in a context where hallucinations are an overwhelming possibility, is still morally wrong. I don’t know the GDPR well enough to say whether it covers temporary information like this, but I kinda hope it does.

                • Petter1@lemm.ee · 19 days ago

                  Lol, I definitely hope not 🤪 Imagine a web without search engines; if the GDPR counted for temporary information as well, search would not be feasible to offer.

                  • FiskFisk33@startrek.website · 19 days ago

                    hmm, true enough. But in my mind there’s a clear difference between showing information unedited and referring to its source, and this.