| Description | Updated | Pulls | Tags |
|---|---|---|---|
| Safety reasoning models for policy-based text classification and foundational safety tasks. | 5m | 10K+ | 2 |
| Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging. | 2m | 10K+ | 1 |
| EmbeddingGemma is a state-of-the-art text embedding model from Google DeepMind. | 7m | 10K+ | 3 |
| GLM-4.7-Flash is a 30B-A3B MoE that balances strong performance with efficient deployment. | 2m | 10K+ | 1 |
| Nomic Embed Text v1 is an open-source, fully auditable text embedding model. | 8m | 10K+ | 4 |
| Qwen3 Embedding: multilingual models for advanced text and ranking tasks such as retrieval and clustering. | 5m | 10K+ | 1 |
| Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | 3 |
| Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4 |
| OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. | 5m | 10K+ | 1 |
| 24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use, and fewer repetitions. | 6m | 10K+ | 1 |
| Designed for reasoning, agentic and general capabilities, and versatile developer-friendly features. | 7m | 10K+ | 2 |
| Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | |
| SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for on-device use. | 6m | 10K+ | 3 |
| IBM's Granite 3.0 large language model (LLM), optimized for local deployment. | 1y | 10K+ | 1 |
| mxbai-embed-large-v1 is a top English embedding model by Mixedbread AI, well suited for RAG and more. | 1y | 10.0K | 3 |
| 397B-parameter MoE multimodal LLM with 17B active params, 262K context, and 201 languages. | 2d | 9.8K | 1 |
| SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for on-device use. | 5m | 9.4K | |
| Agentic coding LLM (24B) fine-tuned from Mistral-Small-3.1 with a 128K context window. | 6m | 9.3K | 4 |
| Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 5m | 9.3K | |
| 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8). | 1m | 8.6K | 3 |
| FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 7.8K | 2 |
| 7B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. | 6m | 7.3K | 3 |
| A fast, powerful open-source visual language model that interprets images via text prompts. | 6m | 6.8K | 2 |
| Granite Embedding Multilingual is a 278-million-parameter, encoder-only XLM-RoBERTa-style embedding model. | 8m | 6.2K | 2 |
| 32B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. | 6m | 5.8K | 1 |
| FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 5.1K | 1 |
| Snowflake's Arctic-Embed v2.0 boosts multilingual retrieval quality and efficiency. | 5m | 4.7K | |