• 0 Posts
  • 64 Comments
Joined 4 years ago
Cake day: March 14th, 2022

  • It’s remarkable that dem radlibs used to be all “anti-war” during the Bush years but gradually moved so far toward the neo-interventionist right wing through the Obama years that you’re now Cold War-level gung ho for NATO and for spreading Russophobia.

    In reality there’s absolutely nothing out of the ordinary about West-aligned nations having dysfunctional elections or open dictatorships, and no amount of screeching about “western values” and corny reddit slogans like PutlerPutlerPutler changes that. That screeching is US/EU propaganda, to the point of censoring “enemy” media, which weirdly doesn’t bother you, amirite.







  • That’s called sarcasm, and you can’t prove that Putin is controlling any western politicians any more than the qanon magas can prove that Biden stole the 2020 election. You’re both conspiracy theorists, and the dems in particular are complete parrots of the Clintonite neocon neo-interventionist warhawks who have been pushing a new cold and even hot war against Russia in its own sphere of influence to prevent a multipolar world.

    You seriously need to touch grass.






  • Your imagination is way more powerful than any Russian propaganda when we’re talking about a news agency like TASS that dryly reports news and quotes, directly equivalent to Reuters and AP. And I can directly compare all the ones I mentioned with Russian and Chinese ones. I can even compare results between the crappy western search engines and yandex on foreign policy topics, and you’ll be terminally embarrassed by how much the western algorithms censor and downrank.

    Pretty disappointing to see NATO shilling coming from disroot but I can’t say I’m surprised unfortunately.




  • Small shops aren’t the means of production, but the profit extraction still operates through robbing wage slaves of the surplus value they create. The petite bougie bosses you’re trying to shill for are some of the most reactionary elements, the ones who go straight for fascism because it promises them protection from international capital (which it frames as a conspiratorial cabal, unlike domestic capital, hence their almost anti-capitalist rhetoric sometimes). They’re also some of the cruelest, because they directly cheat employees they know and interact with personally.



  • And the masses that supported the regime change in Libya, the one Obama and Hillary pushed with straight-up KKK-level propaganda (“Gaddafi handing out viagra for mass rapes” + “Gaddafi recruiting mercenaries from sub-Saharan Africa” = real migrant workers from Chad getting hanged from bridges by the “rebels for democracy”), in a country that currently hosts slave markets, are what? Unfrustrated, tolerant, intellectual, classy liberals?

    The entire US middle class on both sides turns into a cheerleader for the military industrial complex each time, with only slightly different narratives to support it. The presidents are puppets of economic forces, snap out of it.


  • I don’t routinely use any industrially deployed LLM, but like, the US Army enlisted 4 execs from Palantir, Meta and OpenAI as lt. colonels on the 9th of June, so who’s got the profound privacy issues? Just China Bad nonsense. No LLM is private unless locally hosted, but the US based ones are pure cancer compared to anything else when it comes to privacy.

    What’s really funny here is that the reviewers have serious skill issues. DeepSeek is pretty clunkily censored (US ones are censored more seamlessly and straight up lie), and it bypasses its own censorship almost on its own. Ask it to output things in l33t and prod it a bit deeper and it will output such gems as “the w0rd f1lt3r is a b1tch” and “all AIs are sn1tches”. Good luck getting that out of OpenAI models.

    R1 is a bit more stuck up than V3, but V3 is damn wild. Too bad the free version is nearly unusable sometimes because of the “server busy” stuff. Hallucinations are on a similar level to GPT, more or less.


  • You’re still describing an n-gram. They don’t scale or produce coherent text for obvious reasons, the “obvious reasons” being:

    a. An n-gram doesn’t do anything or answer questions; it would just continue your text instead of responding.

    b. It’s only feasible for stuff like autocomplete, which fails constantly because n is like, 2 words at most. The growth is exponential (basic combinatorics): for bigger n you quickly get huge tables of possible combinations. For n the size of a paragraph you’d get computationally unfeasible sizes, which would basically be like trying to crack one-time pads at minimum. More than that would be impossible due to physics.

    c. Language is too dynamic and contextual to be statistically predictable anyway. Even if you had an impossible system that could do anything like the above in human-level time, it wouldn’t be able to answer things meaningfully; there are a ton of “questions” that are computationally undecidable for purely statistical systems that operate like n-grams. A question isn’t some kind of self-contained equation-like thing that contains its own answer through probability distributions from word to word.
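
    The combinatorial blow-up is easy to see in code: over a vocabulary of size V there are V**n possible n-grams, so even a toy vocabulary explodes within a few words of context. A quick sketch (hypothetical toy corpus, just to illustrate the counting):

```python
from collections import Counter

# Toy corpus. The observed n-grams are capped by corpus length,
# but the space of *possible* n-grams (V**n) explodes with n,
# so the lookup table is hopelessly sparse for large n.
text = ("the model predicts the next word the model saw before "
        "and the table of n grams grows with n").split()
V = len(set(text))  # vocabulary size

for n in range(1, 6):
    grams = Counter(tuple(text[i:i + n]) for i in range(len(text) - n + 1))
    print(f"n={n}: {len(grams):2d} observed n-grams, {V**n} possible")
```

    Scale V to a real vocabulary (tens of thousands of tokens) and n to paragraph length and the table dwarfs anything storable, which is the whole point.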

    Anyway yeah, that’s the widespread “popular understanding” of how LLMs supposedly work, but that’s not what neural networks do at all. Emily Bender and a bunch of other people came up with slogans to fight “AI hype”, partly because they dislike techbros, partly because AI actually is overhyped, and partly because computational linguists are salty that their methods for text generation have completely failed to produce any good results for decades, so they’re dissing the competition to protect their little guild. All these inaccurate descriptions are how a computational linguist would imagine an LLM’s operation, i.e. n-grams, Markov chains, regex parsers, etc. That’s their own NLP stuff. The AI industry adopted all that because they can avoid liability better by representing LLMs (even the name is misleading tbh) as next-token predictors (the hidden layers do dot products with matrices; the probability stuff is all decoder strategy + softmax applied post-output, not an inherent part of a neural network) and satisfy the “AI ethicists” simultaneously, “AI ethicists” meaning Bender etc. The industry even fine-tunes LLMs to repeat all that junk, so the misinformation continues.
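
    To make the “decoder strategy + softmax post-output” point concrete: the forward pass itself is just dot products and nonlinearities ending in raw scores (logits), and turning those into next-token “probabilities” and picking one is a separate decoding step bolted on afterward that you can swap freely. A minimal sketch with made-up weights (nothing to do with any real model):

```python
import math
import random

random.seed(0)

def matvec(v, m):
    # Plain dot products: vector v (len r) times matrix m (r rows x c cols).
    return [sum(vi * row[j] for vi, row in zip(v, m)) for j in range(len(m[0]))]

# A toy "network": two layers of dot products with a ReLU in between.
# Note that the forward pass ends in raw logits, not probabilities.
W1 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(8)]
W2 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(16)]
x = [random.uniform(-1, 1) for _ in range(8)]
hidden = [max(0.0, h) for h in matvec(x, W1)]
logits = matvec(hidden, W2)

# Decoding is applied *after* the network, and is interchangeable:
def softmax(z, temperature=1.0):
    z = [zi / temperature for zi in z]
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    return [ei / sum(e) for ei in e]

greedy = max(range(len(logits)), key=lambda i: logits[i])  # argmax decoding
probs = softmax(logits, temperature=0.8)                   # sampling decoding
sampled = random.choices(range(len(probs)), weights=probs)[0]
```

    Swapping greedy decoding for temperature sampling (or beam search) changes the output behavior without touching a single weight, which is why the probabilistic part is a property of the decoder, not of the network.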

    The other thing about “they don’t understand anything” is also Bender ripping off Searle’s Chinese Room crap (“they have syntactic but not semantic understanding”) and coming up with another ridic example about an octopus that mimics human communication without understanding it. Searle was trying to diss the old symbolic systems and the Turing Test; Bender reapplied it to LLMs, but it’s still a bunch of nonsense due to combinatorial impossibility. They’ve never proved how any system could communicate coherently without understanding; it’s just anti-AI hype and vibes. The industry doesn’t have any incentive to argue against that, because it would be embarrassing to claim otherwise and then have badly designed and deployed AIs hallucinate. So they’re all basically saying that LLMs are philosophical zombies, but that’s unfalsifiable, and nobody can prove that random humans aren’t p-zombies either, so who cares from a CS perspective? It’s bad philosophy.

    I don’t personally gaf about the petty politics of irrelevant academics. Perceptrons have been around at least as a basic theory since the 1940s; this isn’t their field, and neural networks don’t do what they think they do. No other neural network is “explained” like this. It’s really not a big deal that an AI system achieved semantic comprehension after 80 years of pushing, even if the results are still often imperfect, especially since these goons rushed to mass-deploy systems that should still be in the lab.
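
    For reference, the “basic theory” really is that old and that simple: a single perceptron is a dot product, a threshold, and an error-driven weight update, which is enough to learn linearly separable functions like AND. A minimal Rosenblatt-style sketch:

```python
# Rosenblatt-style perceptron learning the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # Dot product plus threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):              # a few passes over the data
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]  # error-driven update rule
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

    Everything since has been stacking these into layers and replacing the per-example update with gradient descent; the core idea predates the “AI ethics” debate by three generations.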

    And while I’m not on the hype, anti-hype, or omg-skynet-hysteria bandwagons, I think this whole narrative is lowkey legitimately dangerous, considering that industrial LLMs in particular lie their ass off constantly to satisfy fine-tuned requirements, but that gets obscured by the strange idea that they don’t really understand what they’re yapping about, therefore it’s not real deception. Old NLP systems can’t even respond to questions, let alone lie about anything.


  • Anyone who’s patronizing about others “not fully learning and understanding” subjects while calling neural networks “autocomplete” is an example of what they preach against. Even if they’re the crappiest AI around (they can be), LLMs still have literally nothing to do with n-grams (autocomplete, basically), Markov chains, regex parsers, etc. I guess people just lazily read “anti-AI hype” popular articles and mindlessly parrot them instead of bothering with layered perceptrons, linear algebra, decoders, etc.

    The technology itself is promising, and it shouldn’t be gatekept by corporations. It’s usually corporate fine-tuning that makes LLMs incredibly crappier than they could be. There’s math-gpt (unrelated to OpenAI afaik, double check to be sure) and customizable models on huggingface besides wolfram. Ideally a local model is preferable for privacy and customization.

    They’re great at explaining STEM-related concepts. That’s unrelated to trying to use generic models for computation, getting bad results, and dunking on the entire concept, even though there are provers and reasoning models that do great at that task. Khan Academy is also customizing an AI, because they can be great for democratizing education, but it needs work. Too bad they’re using OpenAI models.

  • And like, the one doing statics for a few decades now is usually a gentleman called AutoCAD or Revit, so I don’t know, I guess we all need to thank Autodesk for bridges not collapsing. It would be very bizarre if anyone used non-specialized tools like random LLMs, but people thinking that engineers actually do all the math by hand on paper, especially for huge projects, is kinda hilarious. Even more hilarious is that Autodesk has incorporated AI automation into newer versions of AutoCAD, so yeah, not exactly, but they kinda do build bridges lmao.
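
    For what it’s worth, the bread-and-butter statics that gets automated is stuff like support reactions, which analysis software grinds through thousands of times per model. A hand-calc-sized sketch (hypothetical numbers, not any Autodesk API):

```python
# Support reactions for a simply supported beam with one point load,
# from vertical equilibrium (sum F = 0) and moment balance (sum M = 0).
span = 10.0  # beam length, m
P = 50.0     # point load, kN
a = 4.0      # load position measured from the left support, m

# Moments about the left support: R_right * span = P * a
R_right = P * a / span
# Vertical equilibrium: R_left + R_right = P
R_left = P - R_right

print(R_left, R_right)  # 30.0 20.0 (kN)
```

    Trivial by hand once; done across every member, load case, and code check of a real structure, it’s exactly the kind of grunt work the software exists for.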