zai-org/GLM-4.5-Air · Hugging Face
GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but with a more compact parameter size. GLM-4.5-Air also supports hybrid inference modes, offering a "thinking mode" for advanced reasoning and tool use, and a "non-thinking mode" for real-time interaction. Users can control the reasoning behaviour with a reasoning-enabled boolean parameter. Learn more in our docs.
Blog post: https://z.ai/blog/glm-4.5
Hugging Face:
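For anyone wondering what that reasoning toggle looks like in practice, here is a minimal sketch against an OpenAI-compatible server (vLLM/SGLang style) hosting the model locally. The base URL and the chat_template_kwargs / enable_thinking names are assumptions rather than anything confirmed in this thread; check the GLM-4.5 docs for the exact parameter your server expects.

```python
# Hypothetical sketch: toggling GLM-4.5-Air's "thinking mode" through an
# OpenAI-compatible server (e.g. vLLM or SGLang) running at localhost:8000.
# The knob name used here (chat_template_kwargs / enable_thinking) is an
# assumption -- verify it against the GLM-4.5 docs for your serving stack.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",
    messages=[{"role": "user", "content": "Plan the steps to refactor this repo."}],
    # Disable the reasoning trace for a fast, real-time style answer;
    # set to True (or omit) to let the model "think" before replying.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```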
I'm currently using ollama to serve LLMs. What's everyone using for these models?
I'm also using Open WebUI, and ollama seemed the easiest (at the time) to use in conjunction with that.
ik_llama.cpp (and its API server) is the go-to for these big MoE models. Level1Techs just did a video on it, and check out ubergarm's quants on Hugging Face: https://huggingface.co/ubergarm
TabbyAPI (exllamav3 underneath) is great for dense models, or MoEs that will just barely squeeze onto your GPU at 3bpw. Look for exl3s: https://huggingface.co/models?sort=modified&search=exl3
Both are massively more efficient than ollama's defaults, to the point that you can run models with at least twice the parameter count ollama can handle, and they support more features too. ik_llama.cpp is also how folks are running these 300B+ MoEs on a single 3090/4090 (usually in conjunction with a Threadripper, Xeon, or EPYC).
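To make the "big MoE on one GPU plus lots of system RAM" recipe a bit more concrete, here is a rough sketch of that kind of launch. It assumes an ik_llama.cpp (or recent mainline llama.cpp) build whose server binary is llama-server, a GGUF quant already on disk, and the usual trick of keeping the expert tensors in system RAM via a tensor override; the filename and flag values are illustrative, so check --help on your build before copying them.

```python
# Rough sketch of the single-GPU + big-RAM launch described above, assuming an
# ik_llama.cpp / llama.cpp build whose server binary is "llama-server" and a
# GGUF quant on disk. Paths and flag values are assumptions -- check --help.
import subprocess

cmd = [
    "./llama-server",
    "-m", "GLM-4.5-Air-IQ4_KSS.gguf",   # hypothetical quant filename
    "-ngl", "99",                        # offload all layers to the GPU...
    "-ot", "exps=CPU",                   # ...but keep MoE expert tensors in system RAM
    "-c", "32768",                       # context length
    "--host", "127.0.0.1",
    "--port", "8080",
]

# The server then exposes an OpenAI-compatible API at http://127.0.0.1:8080/v1
subprocess.run(cmd, check=True)
```

The point of the tensor override is that the attention and shared layers sit in VRAM while the huge expert weights stay in CPU RAM, which is what lets a 300B+ MoE run next to a single 3090/4090.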
Thanks, will check that out!
I've moved to using RamaLama, mainly because it promises to do the probing to get the best acceleration possible for whatever model you launch.
It looks like it just chooses a llama.cpp backend to compile, so technically you're leaving a good bit of performance (and model size) on the table if you already know your GPU and which backend to choose.
All this stuff is horribly documented though.
I use Kobold most of the time.