Pinned

  1. gpu_poor (Public)

     Calculate token/s & GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization.

     JavaScript · 1.4k stars · 87 forks
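The memory estimate gpu_poor computes can be approximated with a simple back-of-the-envelope formula: weight memory is parameter count times bytes per parameter, plus some overhead for activations, KV cache, and the CUDA context. The sketch below is an assumption-laden illustration of that idea, not gpu_poor's actual formula; the function name and the 20% overhead factor are hypothetical.

```python
def estimate_inference_memory_gb(n_params_b, bits_per_param=16, overhead=1.2):
    """Rough GPU memory (GB) needed to serve an LLM for inference.

    n_params_b: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_param: 16 for fp16/bf16; 8 or 4 for quantized weights
    overhead: hypothetical fudge factor for activations, KV cache,
              and CUDA context (gpu_poor models these in more detail)
    """
    weight_gb = n_params_b * 1e9 * bits_per_param / 8 / 1e9
    return weight_gb * overhead

# A 7B model in fp16 needs about 14 GB for weights alone,
# ~16.8 GB with the assumed 20% overhead; 4-bit quantization
# cuts the weight footprint to about 3.5 GB.
```

This is why 4-bit quantization (bnb/QLoRA, llama.cpp's GGUF formats) lets 7B-class models fit on consumer GPUs.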

  2. llama2.c-for-dummies (Public)

     Step-by-step explanation/tutorial of llama2.c

     C · 226 stars · 20 forks

  3. Weighted-low-rank-factorization-Pytorch (Public)

     PyTorch implementation of "Language Model Compression with Weighted Low-rank Factorization"

     Python · 13 stars · 4 forks
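The building block behind this kind of compression is replacing a weight matrix W with a product of two thin matrices U·V of rank r, cutting parameters from m·n to r·(m+n). A minimal sketch of the plain (unweighted) truncated-SVD version is below, using NumPy for self-containment; the repo's weighted variant, per the paper it implements, reweights the factorization by parameter importance and is more involved.

```python
import numpy as np

def low_rank_approx(W, rank):
    """Return (U, V) with W ≈ U @ V, U of shape (m, rank), V of (rank, n).

    Plain truncated SVD: the best rank-r approximation in Frobenius norm
    (Eckart-Young). The weighted scheme in the paper generalizes this.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r
```

Applied layer by layer to a language model's linear layers, this trades a small accuracy loss for a large reduction in parameters and compute.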

  4. chess_vlm (Public)

     Repo to test and fine-tune VLMs on chess tasks

     Python · 9 stars

  5. NVIDIA-NeMo/RL (Public)

     Scalable toolkit for efficient model reinforcement

     Python · 1.5k stars · 313 forks

  6. NVIDIA-NeMo/Gym (Public)

     Build RL environments for LLM training

     Python · 798 stars · 104 forks