  • It's all local. Ollama is the application; deepseek, llama, qwen, and whatever else are just model weights. The models aren't executables, and they don't ping external services or anything like that. The models are safe. Ollama itself is meant for hosting models locally, and I don't believe it even has the capability to do anything besides run local models.
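
    A minimal sketch of what a client request looks like, assuming a default install listening on localhost:11434 (the model name is just an example of one you'd have pulled already):

    ```python
    # Query a local Ollama instance over its HTTP API.
    # Note the loopback address: the request never leaves the machine.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",          # example: any model you've pulled locally
        "prompt": "Why is the sky blue?",
        "stream": False,            # one JSON object instead of a token stream
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
    ```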

    Where it gets more complicated is “agentic” assistants that can read files or execute things in the terminal. The most advanced code assistants do this. But this is NOT a function of Ollama or the model; it's a function of the chat UI or code editor plugin that glues the model output together with a web search, filesystem, terminal session, etc. (see the toy sketch below).
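
    To make that concrete, here's a toy sketch of the glue layer. The TOOL: convention and the apply_tools helper are invented for illustration; they're not anything Ollama ships:

    ```python
    # Toy sketch of an "agentic" glue layer. This logic lives in the chat UI
    # or editor plugin, NOT in Ollama or the model weights. The TOOL:
    # convention here is made up for the example.
    import subprocess

    def apply_tools(model_output: str) -> None:
        """Scan plain model text for tool requests and execute them.
        The model only produced text; the client chooses to act on it."""
        for line in model_output.splitlines():
            if line.startswith("TOOL: READ "):
                path = line.removeprefix("TOOL: READ ").strip()
                print(open(path).read())      # filesystem access happens here
            elif line.startswith("TOOL: SHELL "):
                cmd = line.removeprefix("TOOL: SHELL ").strip()
                result = subprocess.run(cmd, shell=True,
                                        capture_output=True, text=True)
                print(result.stdout)          # terminal access happens here

    # Without a loop like this, the same model output is just inert text.
    apply_tools("TOOL: SHELL echo hello from the glue layer")
    ```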

    So in short, Ollama just runs models. It's all local and private; no worries.