31 - 60 of 12,675,086 available results.
| Type | Description | Updated | Pulls | Tags |
|------|-------------|---------|-------|------|
| model | Safety reasoning models for policy-based text classification and foundational safety tasks. | 5m | 10K+ | 2 |
| model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 2m | 10K+ | 1 |
| model | Embedding Gemma is a state-of-the-art text embedding model from Google DeepMind | 7m | 10K+ | 3 |
| model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 2m | 10K+ | 1 |
| model | Nomic Embed Text v1 is an open‑source, fully auditable text embedding model | 8m | 10K+ | 4 |
| model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 5m | 10K+ | 1 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | 3 |
| model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4 |
| model | OpenAI's open-weight models designed for powerful reasoning, agentic tasks | 5m | 10K+ | 1 |
| model | Mistral fine-tuned via NVIDIA NeMo for smoother enterprise use | 1y | 10K+ | 7 |
| model | 24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use & fewer repeats | 6m | 10K+ | 1 |
| model | Designed for reasoning, agent and general capabilities, and versatile developer-friendly features | 7m | 10K+ | 2 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | |
| model | SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices. | 6m | 10K+ | 3 |
| image | IBM's Granite 3.0 large language model (LLM), optimized for local large language model operations | 1y | 10K+ | 1 |
| model | mxbai-embed-large-v1 is a top English embed model by Mixedbread AI, great for RAG and more. | 1y | 10.0K | 3 |
| model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 2d | 9.8K | 1 |
| model | SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices | 5m | 9.4K | |
| model | Agentic coding LLM (24B) fine-tuned from Mistral-Small-3.1 with a 128K context window | 6m | 9.3K | 4 |
| model | Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 5m | 9.3K | |
| model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 1m | 8.6K | 3 |
| model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 7.8K | 2 |
| model | 7B long-context instruct model with RL alignment, IF, tool use, and enterprise optimization. | 6m | 7.3K | 3 |
| model | Experimental Qwen variant—lean, fast, and a bit mysterious | 11m | 6.9K | 3 |
| model | An open-source visual language model that interprets images via text prompts, fast and powerful. | 6m | 6.8K | 2 |
| image | 4-bit quantized version of model Granite-7b-lab | 1y | 6.3K | 5 |
| model | Granite Embedding Multilingual is a 278 million parameter, encoder‑only XLM‑RoBERTa‑style | 8m | 6.2K | 2 |
| model | 32B long-context instruct model with RL alignment, IF, tool use, and enterprise optimization. | 6m | 5.8K | 1 |
| model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 5.1K | 1 |
| model | Snowflake's Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 5m | 4.7K | |