Which specs are as low as reasonably possible for local LLM models? Do you recommend some distro in particular?
SocialistVibes01@lemmy.ml to Linux@lemmy.ml · edited 7 hours ago
meowmeow@quokk.au · 6 hours ago
Fast is relative. I'm also commenting on the cost of the entire system, not just the GPU, FYI.
infinitevalence@discuss.online · 6 hours ago
That's fair, but nearly any modern CPU, at least 32 GB of RAM, and a current GPU with 16 GB of VRAM is plenty. No need for a $4k system when $1k-1.5k will do it. If you're willing to Frankenstein things, some of the used AI/ML/mining cards can be a decent value.
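For a rough sense of why a 16 GB card covers most local models, here is a back-of-the-envelope sketch. The numbers and the helper function are illustrative assumptions (4-bit quantization, a flat overhead factor for the KV cache), not anything stated in the comments:

```python
# Rough memory estimate for a quantized model (illustrative assumptions only):
# weights at N bits each, plus a fudge factor for KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate memory footprint in GB.

    params_billion  -- parameter count in billions (e.g. 7, 13, 70)
    bits_per_weight -- quantization level (4-bit is common for local use)
    overhead        -- assumed fudge factor for cache and buffers
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead


for size in (7, 13, 70):
    print(f"{size}B @ 4-bit ≈ {model_memory_gb(size, 4):.1f} GB")
# 7B  @ 4-bit ≈  4.2 GB  -> fits easily in 16 GB of VRAM
# 13B @ 4-bit ≈  7.8 GB  -> still fits
# 70B @ 4-bit ≈ 42.0 GB  -> needs CPU offload into system RAM
```

On that rough math, the 7B-13B models most people run locally fit on a 16 GB card, and 32 GB of system RAM gives headroom for partial offload of larger ones.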
meowmeow@quokk.au · 5 hours ago
Yes, but when you compare it to Codex and Claude, it's significantly slower, especially over time. Better crank that AC. I think in a few years we'll have current cloud levels running pretty efficiently on current computers.