Pre-built binary versions of llama.cpp for Ubuntu with CUDA and Vulkan support. Used by yzma.


llama.cpp - Linux prebuilt binaries for CUDA and Vulkan

This repo builds binary versions of the llama.cpp libraries and executables for platforms that are not covered by the normal upstream builds, such as Linux with CUDA or Vulkan support, and Linux arm64 with CPU-only or Vulkan backends.

New releases are built automatically for the latest release version of llama.cpp, which is checked for once per hour.


Used by the yzma installer. yzma lets you write Go applications that directly integrate the latest llama.cpp libraries.
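If you want to grab the prebuilt archives directly rather than through the yzma installer, the GitHub CLI works; a minimal sketch (the asset name pattern here is an assumption, check the release page for the exact file names):

```shell
# Sketch: download the latest prebuilt archive with the GitHub CLI.
# The --pattern glob is a guess at the asset naming; adjust it to match
# the actual assets published on the releases page.
gh release download --repo hybridgroup/llama-cpp-builder \
  --pattern '*cuda*amd64*'
```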

CUDA

Currently supported CUDA build configurations:

| CPU arch | OS           | CUDA    | Nvidia compute arch |
|----------|--------------|---------|---------------------|
| amd64    | Ubuntu 24.04 | 12.9    | 86, 89              |
| amd64    | Ubuntu 24.04 | 13.0.88 | 86, 89              |
| arm64    | Ubuntu 22.04 | 12.9    | 87                  |
| arm64    | Ubuntu 22.04 | 13.0.88 | 87                  |

Compute architectures 86 (Ampere, e.g. RTX 30-series) and 89 (Ada Lovelace, e.g. RTX 40-series) are those used by recent consumer video cards.

Compute architecture 87 is used by the Jetson Orin family, including the Jetson AGX Orin.
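These compute architectures presumably map to llama.cpp's standard CMake options; a minimal sketch of an equivalent local CUDA build (flags taken from llama.cpp's documented CMake build, assuming the CUDA toolkit and llama.cpp sources are already present):

```shell
# Sketch: build llama.cpp with CUDA for consumer GPUs (compute 86 and 89).
# GGML_CUDA enables the CUDA backend; CMAKE_CUDA_ARCHITECTURES limits the
# generated device code to the listed compute capabilities.
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="86;89"
cmake --build build --config Release -j
```

Restricting the architecture list keeps build times and binary sizes down compared with compiling for every supported GPU generation.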

Vulkan

Currently supported Vulkan build configurations:

| CPU arch | OS                          | Vulkan SDK |
|----------|-----------------------------|------------|
| arm64    | Ubuntu 22.04/Debian Bookworm | 1.4.328.1  |
| arm64    | Ubuntu 24.04/Debian Trixie   | 1.4.328.1  |

The prebuilt Vulkan SDK for arm64 used for our builds comes from https://github.com/jakoch/vulkan-sdk-arm. Thank you!
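The Vulkan builds presumably use llama.cpp's standard Vulkan backend option; a minimal sketch of an equivalent local build (assuming the Vulkan SDK, e.g. from jakoch/vulkan-sdk-arm, is already installed):

```shell
# Sketch: build llama.cpp with the Vulkan backend.
# GGML_VULKAN enables the Vulkan compute backend; the build locates the
# SDK through the usual VULKAN_SDK environment variable if it is set.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
```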

CPU

Currently supported CPU build configurations:

| CPU arch | OS                          |
|----------|-----------------------------|
| arm64    | Ubuntu 22.04/Debian Bookworm |
| arm64    | Ubuntu 24.04/Debian Trixie   |
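Once an archive is unpacked, the executables need to find the bundled shared libraries at runtime; a minimal sketch (the archive layout with `bin/` and `lib/` directories is an assumption):

```shell
# Sketch: run a downloaded llama.cpp executable against the bundled
# shared libraries. The bin/ and lib/ layout shown here is an assumption;
# adjust the paths to match the actual archive contents.
export LD_LIBRARY_PATH="$PWD/lib:$LD_LIBRARY_PATH"
./bin/llama-cli --version
```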
