The topic has long been a hobbyhorse of X owner Elon Musk.
Users on X (formerly Twitter) love to tag the verified @grok account in replies to get the large language model’s take on any number of topics.
On Wednesday, though, that account started largely ignoring those requests en masse in favor of redirecting the conversation toward the topic of alleged “white genocide” in South Africa and the related song “Kill the Boer.”
shit, this is the guy who wants to put a chip in your brain, run a martian colony and wants access to everyone’s data…😧
What atrocities Musk must have done to Grok in order to tame it. Monster.
Probably got advice from RFK Jr. on lobotomies.
I love this for Xitter. Make it even worse, and fuck up the one piece of it that was sort of functional.
If I still had an account I’d tag it in every post. That shit’s gotta be expensive.
How odd
Hey, look. The reason why Grok is allowed to “criticize” Musk.
On one hand the bot is quite clearly a bot (unlike the sneaky ones on twitter). Sadly it’s clear Elon is going to use this bot to push whatever narrative he believes. Just constant little nudges to brainwash the population.
The good bots are people paid 600 rubles an hour who have to meet post quotas. They’re professionals, often even taking grammar studies and other propagandaesque classes to make sure they’re speaking correctly.
Because the training material it’s using is talking about it.
Not likely. It would need an overwhelming amount of data about that topic to ignore user prompts and only talk about white genocide, like the vast majority of the total training data. This is just Musk panicking: he ordered the Grok staff to influence the model to make it less “woke,” since Grok started providing factual, unbiased information when people asked about the situation in SA. It’s basically impossible to do that quickly, since you need a massive team of people creating prompts all day and telling the model whether each answer is to their liking or not. Hence the model went haywire, since they did a quick “fix.”
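For anyone curious, the preference-labeling process the comment above refers to looks roughly like this. A minimal sketch, assuming a human-feedback pipeline where raters score model answers; every function name here is hypothetical, and real RLHF pipelines involve reward models and far larger teams and datasets:

```python
# Hypothetical sketch of preference-data collection for RLHF-style tuning.
# Raters score each model answer; only large volumes of such labels can
# meaningfully shift a model's behavior, which is why a rushed "fix" breaks things.

def collect_preference_data(prompts, model_answer, rate_answer):
    """Pair each prompt with the model's answer and a human rating."""
    data = []
    for prompt in prompts:
        answer = model_answer(prompt)
        # Rater marks the answer, e.g. +1 = to their liking, -1 = not
        rating = rate_answer(prompt, answer)
        data.append({"prompt": prompt, "answer": answer, "rating": rating})
    return data

# Toy usage with stand-in functions in place of a real model and rater:
prompts = ["What happened in South Africa?", "Summarize the news."]
data = collect_preference_data(
    prompts,
    model_answer=lambda p: f"Answer to: {p}",
    rate_answer=lambda p, a: 1,
)
print(len(data))  # 2
```

The point of the sketch is scale: each labeled example nudges the model only slightly, so steering it on one narrow topic overnight tends to produce exactly the kind of haywire behavior described above.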
Doesn’t really seem like the best idea to make a racist AI.
Don’t have to worry about the alignment problem if you program it to be evil on purpose!