A local LLM project not using llama.cpp as the backend? Daring today, aren't we?
Wonder how its performance compares.
Thanks for posting; I hadn't seen this project yet. Looks pretty neat, and I'll have to give it a try.