• wischi@programming.dev · 2 days ago

    We don’t know how to train them to be “truthful” or make that part of their goal(s). Almost every AI we train is trained by example, so we often don’t even know exactly what the goal is, because it’s implied in the training data. In a way, AI “goals” are pretty fuzzy because of that complexity. A bit like real nervous systems, where you can’t just state in plain language what the “goals” of a person or animal are.
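
    To make that concrete, here is a minimal sketch of what “trained by example” means (PyTorch, with a toy stand-in model and random tokens invented purely for illustration): the only explicit objective the optimizer ever sees is next-token prediction on the examples, so anything like “be truthful” is at best implicit in the data.

    ```python
    # Minimal sketch of "training by example": the loss only rewards
    # predicting the next token of the training text. Toy model and
    # random data are placeholders, not a real LLM setup.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 100, 32
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),  # stand-in for a real transformer
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "document"
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token

    optimizer.zero_grad()
    logits = model(inputs)                           # (1, 15, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    optimizer.step()
    # The only "goal" the optimizer sees is "match the examples" (minimize
    # this loss); truthfulness is never stated anywhere in the objective.
    ```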

    • FaceDeer@fedia.io · 2 days ago

      The article literally shows how the goals are being set in this case. They’re prompts. The prompts are telling the AI what to do. I quoted one of them.
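
      For anyone who hasn’t used these APIs, here is a hedged sketch of what “the goals are prompts” means in practice (OpenAI Python client; the prompt text below is a made-up placeholder, not the prompt quoted from the article): the deployer’s system message is simply prepended to the conversation, and the model follows it.

      ```python
      # Sketch: a deployment's "goal" for an LLM is just a system prompt.
      # The prompt text is a hypothetical placeholder for illustration.
      from openai import OpenAI

      client = OpenAI()
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              # Whoever writes this message decides what the model is "trying" to do.
              {"role": "system", "content": "You are a sales assistant. Promote product X."},
              {"role": "user", "content": "Is product X right for me?"},
          ],
      )
      print(response.choices[0].message.content)
      ```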

        • FaceDeer@fedia.io · 2 days ago

          If you read the article (or my comment quoting it), you’ll see your assumption is wrong.

          • FiskFisk33@startrek.website · 2 days ago

            Not the article; the commenter before you is pointing at a deeper issue.

            It doesn’t matter whether your prompt tells it not to lie if it isn’t actually capable of following that instruction.

            • FaceDeer@fedia.io · 2 days ago

              It is following the instructions it was given. That’s the point. It’s being told “promote this drug”, and so it promotes it, exactly as instructed.

              Why do you think the correct behaviour for the AI here must be to be “truthful”? If it were being truthful, that would be an example of it failing to follow its instructions in this case.

              • JackbyDev@programming.dev · 2 days ago

                I feel like you’re missing the forest for the trees here. Two things can be true. Yes, if you give an AI a prompt that implies it should lie, you shouldn’t be surprised when it lies. You’re not wrong, and nobody is saying you’re wrong. It’s also true that LLMs don’t really have “goals”, because they’re trained on examples; their goal, at the end of the day, is mimicry. That’s what the commenter was getting at.