SocialistVibes01@lemmy.ml to Linux@lemmy.ml · English · 4 hours ago

Which specs are as low as reasonably possible for local LLM models? Do you recommend some distro in particular?
infinitevalence@discuss.online · 3 hours ago

That's fair, but nearly any modern CPU, at least 32 GB of RAM, and a current GPU with 16 GB of VRAM is plenty. No need for a $4k system when $1k-1.5k will do.

If you're willing to Frankenstein things, some of the used AI/ML/mining cards can be a decent value.
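A rough way to sanity-check that 16 GB figure: a quantized model's weights take roughly (parameters × bits per weight) / 8 bytes, plus runtime overhead for the KV cache and buffers. A minimal Python sketch of that arithmetic, where the ~4.5 bits/weight for a Q4_K_M-style quant and the 1.2x overhead multiplier are loose assumptions rather than measured figures:

```python
# Back-of-the-envelope estimate of memory needed to run a local LLM.
# Assumptions (not measured): ~4.5 bits/weight for a Q4_K_M-style quant,
# and a 1.2x multiplier covering KV cache, activations, and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate footprint in GB (decimal) for a quantized model."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

for name, params, bits in [
    ("7B  @ ~4-bit", 7, 4.5),
    ("13B @ ~4-bit", 13, 4.5),
    ("7B  @ FP16  ", 7, 16),
]:
    print(f"{name}: ~{model_memory_gb(params, bits):.1f} GB")
```

By that estimate a 4-bit 13B model (~9 GB) fits comfortably in 16 GB of VRAM, while even a 7B model at full FP16 (~17 GB) would not, which is why quantized weights are what make the $1k-class hardware viable.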
meowmeow@quokk.au · 2 hours ago

Yes, but when you compare it to Codex and Claude, it's significantly slower, especially over time. Better crank that AC.

I think in a few years we will have current cloud levels running pretty efficiently on current computers.