• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • What is success here? The few founders and VCs get filthy rich as the larger population dumps its money into Discord stock, while the users and teams with limited foresight, who’ve moved their communities to Discord, suffer?

    I mean yeah I guess that’s the success Cory Doctorow warns us about again and again.

    But that’s not my definition of success.

    For context, I’ve been on the receiving end of an IPO, and the founders and investors made out like bandits while a fair number of employees were left holding the bag thanks to lock-ups, dilution, and overpriced shares.


  • So maybe we’re kinda staring at two sides of the same coin, because yeah, you’re not misrepresenting my point.

    But wait, there’s a deeper point I’ve been trying to make.

    You’re right that I am also saying it’s all bullshit, even when it’s “right”. And the fact that we’d consider artificially generated, completely made-up text libellous indicates to me that we (as a larger society) have failed to understand how these tools work. If anyone takes what they say to be factual, they are mistaken.

    If our feelings are hurt because a “make shit up machine” makes shit up… well we’re holding the phone wrong.

    My point is that we’ve been led to believe they are something more concrete, more exact, more stable, and much more factual than they actually are, and that is worth challenging and holding these companies to account for. I hope cases like these are a forcing function for that.

    That’s it. Hopefully my PoV is clearer (not saying it’s right).


  • OK, hear me out: the output is all made up. In that context everything is acceptable, as it’s just a reflection of the whole of the inputs.

    Again, I think this stems from a misunderstanding of these systems. They’re not like a search engine (though, again, the companies would like you to believe that).

    We can find the output offensive, off-putting, gross, etc., but there is no real right and wrong with LLMs the way they are now. There is only statistical probability that a) we’ll understand the output and b) it approximates some currently held truth.

    Put another way: LLMs convincingly imitate language, and therefore also convincingly imitate facts. But it’s all facsimile.
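
    To make the “statistical probability” bit concrete, here’s a toy sketch of what generation reduces to. The probabilities are invented for illustration, not any real model’s weights:

    ```python
    import random

    # Toy next-token distribution a model might assign after the prompt
    # "The capital of France is" -- numbers are made up for illustration.
    next_token_probs = {
        " Paris": 0.90,      # likely, and happens to be true
        " Lyon": 0.06,       # plausible-sounding, but false
        " beautiful": 0.04,  # grammatical, neither true nor false
    }

    # Generation is a weighted random draw -- no fact lookup anywhere,
    # just "which token tends to follow this text".
    token = random.choices(
        list(next_token_probs), weights=list(next_token_probs.values())
    )[0]
    print(token)
    ```

    Run it enough times and it will eventually say Lyon, with exactly the same confidence as Paris.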



  • Surely you jest, because it’s so clearly not if you understand how LLMs work (at the core it’s a statistical model, and therefore all approximation to varying degrees).

    But something great can come out of this case if it gets far enough.

    Imagine the ilk of OpenAI, Google, Anthropic, xAI, etc. being forced to admit that an LLM can’t actually do anything but generate approximations of language. That these models (again, LLMs in particular) produce approximations of language that are so good they’re often indistinguishable from the versions our brains approximate.

    But at their core they cannot produce facts, because the way they are made includes artificially injected randomness layered on top of mathematically encoded values that merely get expressed as tiny pieces of language (tokens), ones that happen to sit close to each other in a massively multidimensional vector space.
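
    If it helps, that “artificially injected randomness” looks roughly like this. A minimal temperature-sampling sketch with invented logits, not any vendor’s actual implementation:

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature=0.8):
        """Softmax over raw token scores, then a weighted random draw.
        Higher temperature flattens the distribution -> more randomness."""
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs)[0]

    # Invented scores for three candidate tokens. Even the "best" token
    # only wins probabilistically -- run it twice, get different output.
    tokens = [" Paris", " Lyon", " Rome"]
    logits = [4.2, 2.1, 1.7]
    print(tokens[sample_with_temperature(logits)])
    ```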

    TL;DR: they’d be forced to admit the emperor has no clothes, and that’s a win for everyone (except maybe this one guy).

    Also, it’s worth noting that I use LLMs for work almost daily and have studied them quite a bit. I’m not a hater of the tech, only of the capitalists trying to force it down everyone’s throat in such a way that we blindly adopt it for everything.