• 5 Posts
  • 48 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Dyf_Tfh@lemmy.sdf.org (OP) to Technology@lemmy.world · Hello GPT-4o · 6 months ago

    If you didn’t already know, you can run some small models locally on an entry-level GPU.

    For example, I can run Llama 3 8B or Mistral 7B on a GTX 1060 3GB with Ollama. It is about as bad as GPT-3 Turbo, so overall mildly useful.
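
    As a rough sketch, once the Ollama server is running and a model has been pulled (e.g. `ollama pull llama3:8b`), you can query it over its local HTTP API; the port and model tag below are the defaults, adjust to whatever you actually pulled:

    ```python
    import requests

    # Minimal sketch: query a local Ollama server (default port 11434).
    # Assumes `ollama serve` is running and the model has already been pulled.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3:8b",  # or "mistral:7b"
            "prompt": "Explain what an open-weight model is in one sentence.",
            "stream": False,       # return the full answer as a single JSON object
        },
        timeout=120,
    )
    response.raise_for_status()
    print(response.json()["response"])
    ```

    The same thing works from the command line with `ollama run llama3:8b` if you just want an interactive prompt.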

    Although there is quite a bit of controversy over what counts as an “open source” model, most are only “open weight”.