• 1 Post
  • 47 Comments
Joined 1 year ago
Cake day: March 22nd, 2024




  • OP’s being abrasive, but I sympathize with the sentiment. Bluesky is algorithmic just like Twitter.

    …Dunno about Bluesky, but Lemmy feels like a political purity test to me. Like, I love Lemmy and the Fediverse, but at the same time, mega-upvoted posts/comments like “X person should kill themself,” the expulsion of nuance on specific issues, and the way it leaks into every community are making me step back more and more.



  • “It’s not theoretical, it’s just math. Removing 1/3 of the bus paths, and also removing the need to constantly keep RAM powered”

    And here’s the kicker.

    You’re supposing it’s (given the no-refresh bonus) 1/3 as fast as DRAM, with similar latency, and cheap enough per gigabyte to replace most storage. That’s a tall order, and it would be incredible if it hit all three of those; I find that highly improbable. (Rough numbers are sketched below.)

    Even DRAM is starting to become a bottleneck for APUs specifically, because making the bus wide is so expensive. This applies at the very top (the MI300A) and at the bottom (smartphone and laptop APUs).

    Optane, for reference, was a lot slower than DRAM and a lot more expensive/less dense than flash, even with all the work Intel put into it and the busses built into then-top-end CPUs for direct access. And they thought that was pretty good. It was good enough for a niche when used in conjunction with DRAM sticks.
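
    To make that “tall order” concrete, here’s a tiny back-of-envelope check. The tier numbers are rough, order-of-magnitude placeholders I’m assuming for illustration (not measured specs); the point is just how far apart the tiers sit.

    ```python
    # Back-of-envelope check of the three claims: ~1/3 of DRAM bandwidth,
    # similar latency, and storage-like cost per GB, all at once.
    # Every number below is a rough placeholder assumption, not a spec-sheet figure.

    TIERS = {
        #                bandwidth  latency   cost
        #                (GB/s)     (ns)      ($/GB)
        "DRAM channel":  (50,       100,      3.00),
        "Optane-class":  (6,        350,      1.00),
        "NAND flash":    (3,        80_000,   0.08),
    }

    DRAM = TIERS["DRAM channel"]
    FLASH = TIERS["NAND flash"]

    def check_claims(bw, lat, cost):
        """Which of the three claims would a memory with these numbers satisfy?"""
        return {
            ">= 1/3 DRAM bandwidth":           bw >= DRAM[0] / 3,
            "similar latency (<= 2x DRAM)":    lat <= 2 * DRAM[1],
            "storage-like cost (<= 2x flash)": cost <= 2 * FLASH[2],
        }

    for name, (bw, lat, cost) in TIERS.items():
        print(f"{name:14} {check_claims(bw, lat, cost)}")
    ```

    With these placeholder numbers, DRAM fails the cost test, flash fails bandwidth and latency, and even an Optane-class memory misses all three at once; that’s the gap a new memory would have to close.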


  • You are talking theoreticals.

    A big reason supercomputers moved to networks of “commodity” hardware is that it’s cost-effective.

    How would one build a giant unified pool of this memory? CXL, presumably, but what does that look like physically? Maybe you get a lot of bandwidth in parallel, but how would it come even close to the latency of “local” DRAM busses on each node? Is that setup truly more power-efficient than banks of DRAM backed by infrequently touched flash? If your particular workload needs fast random access to memory, even at scale the only advantage seems to be some fault tolerance at a huge speed cost; and if you just need bulk, high-latency bandwidth, flash has you covered for cheaper. (A rough latency comparison is sketched below.)

    …I really like the idea of a nonvolatile, single pool of memory backed by caches, especially at scale, but ultimately architectural decisions come down to economics.
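
    Here’s a minimal sketch of the latency trade-off, assuming a local cache fronts either flash or a CXL pool. All latencies are placeholder assumptions for illustration, not measurements of any real fabric.

    ```python
    # Average-access-time model: "local DRAM + flash spill" vs "local cache + CXL-pooled memory".
    # Every latency is an assumed, round-number placeholder.

    LOCAL_DRAM_NS = 100      # assumed local DRAM access latency
    CXL_POOL_NS   = 400      # assumed CXL-attached pooled-memory latency (switch hops add up)
    FLASH_NS      = 80_000   # assumed NVMe flash random-read latency

    def dram_plus_flash(hit_rate: float) -> float:
        """Hot data lives in local DRAM, cold data spills to flash."""
        return hit_rate * LOCAL_DRAM_NS + (1 - hit_rate) * FLASH_NS

    def cached_cxl_pool(hit_rate: float) -> float:
        """A local cache fronts a big pooled tier over CXL."""
        return hit_rate * LOCAL_DRAM_NS + (1 - hit_rate) * CXL_POOL_NS

    for hit in (0.99, 0.999, 0.9999):
        print(f"hit rate {hit:.2%}:  DRAM+flash {dram_plus_flash(hit):8.1f} ns"
              f"   cached CXL pool {cached_cxl_pool(hit):8.1f} ns")
    ```

    Under these assumptions the pool only pays off when the working set regularly misses local memory; if it mostly fits, you’re paying fabric latency and a pool’s cost for little gain, which is where the economics bite.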



  • I use local instances of Aya 32B (and sometimes Deepseek, Qwen, LG Exaone, Japanese finetunes, or others depending on the language) to translate stuff, and it’s quite different from Google Translate or any machine translation you find online. They get the “meaning” of the text instead of rendering it robotically like Google does, and are actually pretty loose with interpretation.

    It has soul… sometimes too much. That’s the problem: it’s great for personal use, where it can occasionally be wrong or flowery, but not good enough for publishing and selling, as the reader isn’t necessarily cognisant of the errors.

    In other words, AI translation should be a tool the reader understands how to use, not something to save greedy publishers a buck.

    EDIT: Also, if you train an LLM for some job/concept in pure Chinese, a surprising amount of that new ability will work in English, as if the LLM abstracts language internally. Hence they really (sorta) do a “meaning” translation rather than a strict definitional one… Even when they shouldn’t.

    Another thing you can do is translate with one local LLM, then load another for a reflection/correction pass (rough sketch below). This is another point for “open” and local inference, as corporate AI goes for cheapness and generally tries to wall you off from competitors’ models.
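
    A minimal sketch of that two-pass setup, assuming an OpenAI-compatible local server (llama.cpp server, Ollama, etc.) on localhost; the URL and model names are placeholders for whatever you actually run.

    ```python
    # Pass 1: one local model drafts the translation.
    # Pass 2: a different local model reflects on the draft and corrects it.
    import requests

    API = "http://localhost:8080/v1/chat/completions"  # placeholder local endpoint

    def chat(model: str, system: str, user: str) -> str:
        resp = requests.post(API, json={
            "model": model,
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
            "temperature": 0.3,
        }, timeout=600)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    def translate_with_reflection(text: str, src: str = "Japanese", dst: str = "English") -> str:
        draft = chat("aya-32b",  # placeholder model name
                     f"You are a literary translator. Translate {src} into {dst}, preserving tone.",
                     text)
        review = chat("qwen-32b-instruct",  # placeholder model name
                      "You are a bilingual editor. Compare the source and the draft translation, "
                      "fix mistranslations and omissions, and return only the corrected translation.",
                      f"Source ({src}):\n{text}\n\nDraft ({dst}):\n{draft}")
        return review

    if __name__ == "__main__":
        print(translate_with_reflection("お疲れ様です。今日はこの辺で失礼します。"))
    ```

    The point of using two different models for the second pass is that they don’t share the same blind spots, so one will often catch the other’s mistranslations.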