Starred repositories
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Python tool for converting files and office documents to Markdown.
Get your documents ready for gen AI
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
The definitive Web UI for local AI, with powerful features and easy setup.
aider is AI pair programming in your terminal
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr…
💫 Industrial-strength Natural Language Processing (NLP) in Python
Glances, an eye on your system. A top/htop alternative for GNU/Linux, BSD, macOS and Windows operating systems.
Official inference framework for 1-bit LLMs
Fully open reproduction of DeepSeek-R1
🤗 smolagents: a barebones library for agents that think in code.
Official inference repo for FLUX.1 models
Fully Local Manus AI. No APIs, no $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and codes for the sole cost of electricity. 🔔 Official updates only via twitter @Martin9…
OCR, layout analysis, reading order, table recognition in 90+ languages
"DeepCode: Open Agentic Coding (Paper2Code & Text2Web & Text2Backend)"
Official code for "F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching"
Modular visual interface for GDB in Python
Fast and accurate AI-powered file content type detection
HexStrike AI MCP Agents is an advanced MCP server that lets AI agents (Claude, GPT, Copilot, etc.) autonomously run 150+ cybersecurity tools for automated pentesting, vulnerability discovery, bug b…
The Ultimate Collection of 500+ Agentic Skills for Claude Code/Antigravity/Cursor. Battle-tested, high-performance skills for AI agents including official skills from Anthropic and Vercel.
This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
Performance-Optimized AI Inference on Your GPUs. Unlock it by selecting and tuning the optimal inference engine for your model.
g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains
