• ilmagico@lemmy.world · 1 day ago

    I literally run DeepSeek R1 on my laptop via Ollama, along with many other models, and nothing gets sent to anybody. Granted, it's the smaller 7B-parameter model, but it's still plenty good.
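
    As a concrete sketch of that setup, assuming the ollama Python client (pip install ollama), the server running on its default port, and the deepseek-r1:7b tag already pulled, the whole round trip stays on localhost:

        import ollama

        # Chat with the locally hosted 7B distill; the request goes to
        # localhost:11434 (Ollama's default port), never to an external service.
        response = ollama.chat(
            model="deepseek-r1:7b",
            messages=[{"role": "user", "content": "Why is the sky blue?"}],
        )
        print(response["message"]["content"])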

    Microsoft could easily host the full model on their infrastructure if they needed it.

    • brucethemoose@lemmy.world · 1 day ago

      True, though there’s a big output-quality difference between the 7B distill (or even the 32B/70B distills) and the full model.

      And Microsoft does host R1 already, heh. Again, this headline is a big nothingburger.

      Also (random aside here), you should consider switching away from Ollama. They’re making some FOSS-unfriendly moves, and depending on your hardware, better backends can host 14B models at longer context and at similar or better speeds.

        • brucethemoose@lemmy.world · 23 hours ago (edited)

          Completely depends on your laptop hardware, but generally (most of these expose an OpenAI-compatible server; see the sketch after the list):

          • TabbyAPI (exllamav2/exllamav3)
          • ik_llama.cpp, and its OpenAI-compatible server
          • kobold.cpp (or kobold.cpp-rocm, or croco.cpp, depending on your hardware)
          • An MLX host with one of the new distillation quantizations
          • Text-gen-web-ui (slow, but supports a lot of samplers and some exotic quantizations)
          • SGLang (extremely fast for parallel calls, if that’s what you want)
          • Aphrodite Engine (lots of samplers, and fast, at the expense of some VRAM usage)
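
          Most of those backends speak the same OpenAI-compatible HTTP API, which is part of why switching away from Ollama is painless: any existing client keeps working. Here's a minimal sketch with the openai Python package (pip install openai); the port and the model name are assumptions that vary per backend, so check your server's config:

              from openai import OpenAI

              # Point the stock OpenAI client at the local server instead of
              # api.openai.com. The base_url port and model name below are
              # assumptions; adjust to whatever your backend actually serves.
              client = OpenAI(
                  base_url="http://localhost:5000/v1",
                  api_key="none",  # local backends generally don't validate this
              )

              reply = client.chat.completions.create(
                  model="Qwen3-14B",  # hypothetical local model name
                  messages=[{"role": "user", "content": "Hello from a local backend!"}],
              )
              print(reply.choices[0].message.content)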

          I use text-gen-web-ui at the moment only because TabbyAPI is a little broken with exllamav3 (which is utterly awesome for Qwen3); otherwise I’d almost always stick to TabbyAPI.

          Tell me (vaguely) what your system has, and I can be more specific.