I’d say the second, based on evidence we’re already seeing in real-life outcomes. LLM chatbots have already been linked directly to several suicides, a pattern that would be alarming if it came from visits to a therapist or repeated encouragement from another person.
The most common way people interact with AI is through something like ChatGPT, and these models exhibit some very worrisome and largely sycophantic behaviour.
Welp. Better than nothing and largely innocuous? Or addictive slippery slope? Hopefully the former.