• FormallyKnown
    12 days ago

    A local LLM not using llama.cpp as the backend? Daring today, aren’t we?

    I wonder how its performance compares.

  • notfromhere@lemmy.ml
    12 days ago

    Thanks for posting, I hadn’t seen this project before. Looks pretty neat, and I’ll have to give it a try.