That’s both the strength and horror of LLMs. They are very good at presenting information in a pleasing way to the user… but can you trust that what they say is correct?
Most people treat a pleasing presentation as evidence of truth, even though that’s a logical fallacy.
This is all the article mentions. I hope you’re right about the backwards compatibility.