Something to handle code, text and math.
If you need or want to run an LLM on limited hardware, you may want to look into so-called bitnets, which use ternary weights (-1, 0, +1). These should be efficient enough to run a decent LLM on a CPU with 16 GB of RAM, if not less. Unfortunately they're barely out of the experimental stage, so you'll probably have to compile bitnet.cpp yourself or wait a few months until full support lands in Ollama.
I haven’t run a bitnet myself yet, so I can’t personally vouch for their effectiveness or usefulness.
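For the curious, the core trick published with BitNet b1.58 is "absmean" quantization: scale a weight tensor by its mean absolute value, then round and clip every weight to -1, 0, or +1. A minimal numpy sketch (the function name is mine, not from any library):

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Absmean quantization as described in the BitNet b1.58 paper:
    divide by the mean absolute weight, then round and clip each
    weight into {-1, 0, +1}. Returns the ternary weights plus the
    scale needed to approximately reconstruct the originals."""
    scale = np.mean(np.abs(w)) + eps
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q.astype(np.int8), scale

w = np.array([0.9, -0.04, 0.31, -1.2])
w_q, scale = ternary_quantize(w)
print(w_q.tolist())  # [1, 0, 1, -1]
```

The point is that matmuls against ternary weights reduce to additions and subtractions (no multiplies), which is why a CPU can keep up.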
A GPU with 16 GB of VRAM, models stored on an SSD; the rest of the computer doesn't have to be crazy. Intel Arc is the best bang for the buck at the moment. You can get an LLM running on 8 GB cards or even on the CPU, but IMO such small models are more novelties than workhorses. I personally use Debian, but you'll be fine as long as your distro's repos have drivers recent enough for your GPU.
For perspective, I’m using such a build to help with boilerplate code, single-use scripts that I don’t have the patience to trial-and-error (like ones that have to deal with directory structures and special characters), getting an idea of what’s what when decompiling and reverse engineering, brainstorming tip-of-the-tongue ideas, and upscaling images.
heavily depends on the model and quantization level
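To put rough numbers on "depends on the quantization level": a back-of-the-envelope estimate is parameter count times bits per weight, plus headroom for the KV cache and activations. A hypothetical helper (the 20% overhead factor is a crude assumption of mine, not a measured figure):

```python
def vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Very rough VRAM estimate for loading a model: parameters
    times bytes per weight, padded ~20% for KV cache and
    activations. A rule of thumb, not an exact figure."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# qwen3-coder:30b at 4-bit quantization
print(round(vram_gb(30, 4), 1))   # 18.0 -> needs more than a 16 GB card

# a 7B model at 4-bit
print(round(vram_gb(7, 4), 1))    # 4.2 -> fits comfortably on 8 GB
```

Same model, 8-bit instead of 4-bit, roughly doubles the requirement, which is why the quantization level matters as much as the parameter count.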
pick the model you want on this website and it'll give you the specs you'd likely need to run it
any/most distros will do, especially if you run it on Docker
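For example, the official Ollama image makes the Docker route pretty painless; these commands follow Ollama's Docker docs (GPU passthrough varies by vendor, e.g. `--gpus=all` on NVIDIA, and the model choice is just an example):

```shell
# run Ollama in a container; model files persist in the "ollama" volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# pull and chat with a model inside the running container
docker exec -it ollama ollama run qwen3-coder:30b
```

Since everything lives in the container and a named volume, the host distro barely matters.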
if you're going with Intel cards (best $ per GB of VRAM right now), you could get a decent machine for under $3k
A budget build is going to run you $4k+ for something like qwen3-coder:30b, and you'll probably be annoyed at the speed if you're used to Codex or Claude.
I disagree. Gemma 4 easily runs on a 16 GB GPU and is really pretty fast.
Fast is relative. I'm also commenting on the cost of the entire system, not just the GPU, FYI.
That's fair, but nearly any modern CPU, at least 32 GB of RAM, and a current GPU with 16 GB is plenty. No need for a $4k system when $1k–1.5k will do it.
If you're willing to Frankenstein things, some of the used AI/ML/mining cards can be a decent value.
Yes, but when you compare it to Codex and Claude, it's significantly slower. Especially over time. Better crank that AC.
I think in a few years we'll have current cloud-level models running pretty efficiently on today's computers.




