The topic has long been a hobbyhorse of X owner Elon Musk.

Users on X (formerly Twitter) love to tag the verified @grok account in replies to get the large language model’s take on any number of topics.

On Wednesday, though, that account started largely ignoring those requests en masse in favor of redirecting the conversation toward the topic of alleged “white genocide” in South Africa and the related song “Kill the Boer.”

  • Gsus4@mander.xyz · 3 points · 15 hours ago

    shit, this is the guy who wants to put a chip in your brain, run a Martian colony, and have access to everyone’s data…😧

  • kescusay@lemmy.world · 36 points · 1 day ago

    I love this for Xitter. Make it even worse, and fuck up the one piece of it that was sort of functional.

  • d00ery@lemmy.world · 5 points · 1 day ago

    On one hand, at least this bot is quite clearly a bot (unlike the sneaky ones on Twitter). Sadly, it’s clear Elon is going to use it to push whatever narrative he believes. Just constant little nudges to brainwash the population.

    • Zenith@lemm.ee · 2 points · edited · 14 hours ago

      The good bots are people paid 600 rubles an hour who have to meet post quotas. They’re professionals, often even taking grammar courses and other propaganda-esque classes to make sure they’re writing correctly and sounding natural.

    • SkunkWorkz@lemmy.world · 10 points · edited · 23 hours ago

      Not likely. The model would need an overwhelming amount of training data about that topic to ignore user prompts and only talk about white genocide. Like, it would need to be the vast majority of the total training data. This is just Musk panicking: he ordered the Grok staff to influence the model to make it less “woke,” since Grok had started providing factual, unbiased information when people asked about the situation in SA. It’s basically impossible to do that quickly, since you’d need a massive team of people creating prompts all day and telling the model whether each answer is to their liking or not. Hence the model went haywire, since they did a quick “fix.”