
# Liquid AI

Try LFM β€’ Documentation β€’ LEAP

Join Discord

Examples, tutorials, and applications to help you build with our open-weight LFMs and the LEAP SDK on laptops, mobile, and edge devices.

## Contents

## πŸ–₯️ Desktop Apps

Python and CLI applications for running LFM models on your laptop or desktop machine.

| Name | Description | Link |
|------|-------------|------|
| Invoice Parser | Extract structured data from invoice images using LFM2-VL-3B | Code |
| Audio Transcription CLI | Real-time audio-to-text transcription using LFM2-Audio-1.5B with llama.cpp | Code |
| Flight Search Assistant | Find and book plane tickets using LFM2.5-1.2B-Thinking with tool calling | Code |
| Audio Car Cockpit | Voice-controlled car cockpit demo combining LFM2.5-Audio-1.5B with LFM2-1.2B-Tool | Code |
| LocalCowork | On-device AI agent for file ops, security scanning, OCR, and more, powered by LFM2-24B-A2B | Code |
| Home Assistant | Local home assistant with tool calling, benchmarking, and a fine-tuning pipeline using LFM2-350M and LFM2.5-1.2B | Code |

## 🌐 Browser Apps

Zero-install applications running LFM models directly in the browser via WebGPU and ONNX Runtime Web.

| Name | Description | Link |
|------|-------------|------|
| Tool Calling | Run LFM2 entirely in your browser with WebGPU for in-browser tool calling | Code \| Demo |
| Voice Assistant | Run LFM2.5-Audio-1.5B entirely in your browser for speech recognition, TTS, and conversation | Code \| Demo |
| Live Video Captioning | Real-time video captioning with LFM2.5-VL-1.6B running in-browser using WebGPU | Code \| Demo |
| Chain-of-Thought Reasoning | Run LFM2.5-1.2B-Thinking entirely in your browser with WebGPU for on-device chain-of-thought reasoning | Code \| Demo |
| Hand & Voice Racer | Browser driving game controlled by hand gestures (MediaPipe) and voice commands (LFM2.5-Audio-1.5B), running fully local | Code |

## πŸ“± Mobile Apps

Native examples for deploying LFM2 models on iOS and Android using the LEAP Edge SDK. The examples are written in Kotlin (Android) and Swift (iOS); the Edge SDK's goal is to make deploying a Small Language Model as easy as calling a cloud LLM API endpoint.

### Android

| Name | Description | Link |
|------|-------------|------|
| LeapChat | Chat app with real-time streaming, persistent history, and a modern UI | Code |
| LeapAudioDemo | Audio input and output with LFM2.5-Audio-1.5B for on-device AI inference | Code |
| LeapKoogAgent | Integration with the Koog framework for AI agent functionality | Code |
| SloganApp | Single-turn marketing slogan generation with Android Views | Code |
| ShareAI | Website summary generator | Code |
| Recipe Generator | Structured output generation with the LEAP SDK | Code |
| VLM Example | Visual Language Model integration | Code |

### iOS

| Name | Description | Link |
|------|-------------|------|
| LeapChat | Chat app with real-time streaming, conversation management, and SwiftUI | Code |
| LeapSloganExample | Basic LeapSDK integration for text generation in SwiftUI | Code |
| Recipe Generator | Structured output generation | Code |
| Audio Demo | Audio input/output with LeapSDK for on-device AI inference | Code |

## 🎯 Fine-Tuning Notebooks

Colab notebooks and Python scripts for customizing LFM models with your own data.

| Name | Description | Link |
|------|-------------|------|
| **Supervised Fine-Tuning (SFT)** | | |
| SFT with Unsloth | Memory-efficient SFT using Unsloth with LoRA for 2x faster training | Notebook |
| SFT with TRL | Supervised fine-tuning using the Hugging Face TRL library with parameter-efficient LoRA | Notebook |
| **Reinforcement Learning** | | |
| GRPO with Unsloth | Train reasoning models using Group Relative Policy Optimization for verifiable tasks | Notebook |
| GRPO with TRL | Train reasoning models using Group Relative Policy Optimization with rule-based rewards | Notebook |
| **Continued Pre-Training (CPT)** | | |
| CPT for Translation | Adapt models to specific languages or translation domains using domain data | Notebook |
| CPT for Text Completion | Teach models domain-specific knowledge and creative writing styles | Notebook |
| **Vision-Language Models** | | |
| VLM SFT with Unsloth | Supervised fine-tuning for LFM2-VL models on custom image-text datasets | Notebook |

## Third-Party Apps Powered by LFM

Production and open-source applications that support LFM models as an inference backend, among other providers.

| Name | Description | Link |
|------|-------------|------|
| DeepCamera | Open-source AI camera system for local vision intelligence with facial recognition, person re-ID, and edge deployment on Jetson and Raspberry Pi | Code |
| Osaurus | Native macOS AI harness for managing agents, memory, tools, and identity locally, with support for LFM models via MLX on Apple Silicon | Code |

## 🌟 Community Projects

Open-source projects built by the community showcasing LFMs with real use cases.

### Featured

| Name | Description | Link |
|------|-------------|------|
| Image Classification on Edge | End-to-end tutorial covering fine-tuning and deployment for fast, accurate image classification using local VLMs | Code |
| Chess Game with Small LMs | End-to-end tutorial covering fine-tuning and deployment to build a chess game using Small Language Models | Code |
| Private Doc Q&A | On-device document Q&A with RAG and voice input | Code |
| LFM2.5 Mobile Actions | LoRA fine-tuned LFM2.5-1.2B that translates natural language into Android OS function calls for on-device mobile action recognition | Code |
| Photo Triage Agent | Private photo library cleanup using an LFM vision model | Code |
| Tiny-MoA | Mixture of Agents on CPU with an LFM2.5 brain (1.2B) | Code |
| barq-web-rag | Browser-based RAG app for document Q&A with LFM2.5-1.2B-Thinking running fully local via WebGPU | Code |
| LFM-Scholar | Automated literature review agent for finding and citing papers | Code |

### More

| Name | Description | Link |
|------|-------------|------|
| Private Summarizer | 100% local text summarization with multi-language support | Code |
| TranslatorLens | Offline translation camera for real-time text translation | Code |
| Meeting Intelligence CLI | CLI tool for meeting transcription and analysis | Code |
| Food Images Fine-tuning | Fine-tune LFM models on food image datasets | Code |
| LFM2-KoEn-Tuning | Fine-tuned LFM2-1.2B for Korean-English translation | Code |
| LFM-2.5 Thinking on Web | LFM2.5-1.2B reasoning model running locally in the browser with WebGPU, using Transformers.js and ONNX Runtime Web | Code |
| SFT + DPO Fine-tuning | Teaching a 1.2B model to be a grumpy Italian chef: SFT + DPO fine-tuning with Unsloth | Code |
| Tauri Plugin LEAP AI | Tauri plugin to integrate LEAP and Liquid LFMs into desktop and mobile apps built with Tauri | Crate |
| LFM-2.5 JP on Web | LFM2.5-1.2B Japanese language model running locally in the browser with WebGPU, using Transformers.js and ONNX Runtime Web | Code |
| grosme | CLI grocery assistant that reads Apple Notes lists and finds Walmart product matches using an LFM2.5 tool-calling agent via Ollama | Code |
| Chat with LEAP SDK | LEAP SDK integration for React Native | Code |
| Discord Moderator | Uses LFM2.5-1.2B to screen messages that exceed a predetermined character limit for suspicious content | Code |

πŸ• Technical Deep Dives

Recorded sessions (~60 minutes) covering advanced topics and hands-on implementations.

| Date | Topic | Link |
|------|-------|------|
| 2025-11-06 | Fine-tuning LFM2-VL for image classification | Video |
| 2025-11-27 | Building a 100% local Audio-to-Speech CLI with LFM2-Audio | Video |
| 2025-12-26 | Fine-tuning LFM2-350M for browser control with GRPO and OpenEnv | Video |
| 2026-01-22 | Local video captioning with LFM2.5-VL-1.6B and WebGPU | Video |
| 2026-03-05 | Build your own local AI coding assistant | Video |

Join the next session! Head to the #live-events channel on Discord.

## Contributing

We welcome contributions! Open a PR that adds a link to your project's GitHub repo in the Community Projects section.

## Support
