Something to handle code, text and math.

  • Ftumch@lemmy.today · 2 hours ago

    If you need or want to run an LLM on limited hardware, you may want to look into so-called bitnets, which use ternary connections. These should be efficient enough to run an OK LLM on a CPU with 16 GB of RAM, if not less. Unfortunately they’re barely out of the experimental stage, so you’ll probably have to compile BitNet.cpp yourself or wait a few months until full support lands in Ollama.

    I haven’t run a bitnet myself yet, so I can’t personally vouch for their effectiveness or usefulness.
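    For context, the ternary trick is what makes them cheap: every weight is −1, 0, or +1, so the core matrix products reduce to additions and subtractions. A toy sketch of the idea (not how BitNet.cpp actually packs weights):

```python
# Toy illustration of ternary ("bitnet") inference: with every weight
# in {-1, 0, +1}, a matrix-vector product needs only additions and
# subtractions - no multiplications at all.
import numpy as np

def ternary_matvec(W, x):
    # W: ternary weight matrix, x: activation vector.
    # For each row, sum the activations under +1 weights and
    # subtract the activations under -1 weights.
    return np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])
```

    Real implementations pack the ternary weights into a couple of bits each, which is also where the memory savings come from.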

  • monovergent@lemmy.ml · 3 hours ago

    16 GB VRAM GPU, models stored on SSD, rest of the computer doesn’t have to be crazy. Intel Arc is the best bang for the buck at the moment. You can get an LLM running on 8 GB cards or even the CPU, but IMO such small models are more novelties than workhorses. I personally use Debian, but you’ll be fine as long as your distro’s repo has drivers recent enough for your GPU.

    For perspective, I’m using such a build to help with boilerplate code, single-use scripts that I don’t have the patience to trial-and-error (like ones that have to deal with directory structures and special characters), getting an idea of what’s what when decompiling and reverse engineering, brainstorming tip-of-the-tongue ideas, and upscaling images.
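    (For illustration, a hypothetical example of that kind of single-use script, the sort of thing I’d rather have the model draft than trial-and-error myself: renaming files whose names contain shell-hostile characters.)

```python
# Hypothetical single-use script: walk a directory tree and rename
# files/dirs whose names contain characters that trip up shells and
# cross-platform tools.
import os
import re

def sanitize_name(name):
    # collapse runs of problematic characters into a single underscore
    return re.sub(r'[<>:"/\\|?*\s]+', "_", name).strip("_")

def sanitize_tree(root):
    renamed = []
    # walk bottom-up so renaming a directory doesn't invalidate
    # the paths of entries still to be visited inside it
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames + dirnames:
            clean = sanitize_name(name)
            if clean != name:
                os.rename(os.path.join(dirpath, name),
                          os.path.join(dirpath, clean))
                renamed.append((name, clean))
    return renamed
```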

  • Eager Eagle@lemmy.world · 4 hours ago

    heavily depends on the model and quantization level

    choose the model you want on this website and it’ll give you some specs likely to run it

    https://runthisllm.com/

    any/most distros will do, especially if you run it in Docker

    if you’re going with intel cards (best $ per GB VRAM right now), you could get a decent machine under $3k
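    rough math for why it depends on quantization (the ~20% overhead factor is a guess; real usage depends heavily on context length):

```python
# Back-of-the-envelope VRAM estimate: weight memory is parameter count
# times bits per weight, plus headroom for KV cache and activations.
# The 1.2x overhead is an assumed ballpark, not a measured figure.
def est_vram_gb(params_billions, bits_per_weight, overhead=1.2):
    return params_billions * bits_per_weight / 8 * overhead

# a 30B model at 4-bit: ~18 GB, doesn't fit a 16 GB card
# the same model at 3-bit: ~13.5 GB, squeezes in
```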

  • meowmeow@quokk.au · 4 hours ago

    A budget build is going to run you $4k+ for something like qwen3-coder:30b, and you’ll probably be annoyed at the speed if you’re used to Codex or Claude.

      • meowmeow@quokk.au · 4 hours ago

        Fast is relative. I’m also commenting on the cost of the entire system, not just the GPU, fyi.

        • infinitevalence@discuss.online · 4 hours ago

          That’s fair, but nearly any modern CPU, at least 32 GB of RAM, and a current GPU with 16 GB is plenty. No need for a $4k system when a $1k–1.5k one will do it.

          If you’re willing to Frankenstein things some of the used AI/ML/mining cards can be a decent value.

          • meowmeow@quokk.au · 3 hours ago

            Yes, but when you compare it to Codex and Claude, it’s significantly slower. Especially over time. Better crank that AC.

            I think in a few years we’ll have current cloud-level models running pretty efficiently on today’s computers.