
All Questions

Tagged with
0 votes
0 answers
39 views

The issue of mask fragmentation during SAM2 tracking

I am currently working on object tracking. I use Moondream2 to identify objects in the scene, filter out duplicate bounding boxes, and then use SAM2 to track the objects. During the tracking process, ...
Limit
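A common remedy for per-frame mask fragmentation is to keep only the largest connected component of each predicted mask. The sketch below shows the idea with a pure-Python 4-connected flood fill on a boolean grid; a real SAM2 pipeline would run the same logic with `cv2.connectedComponentsWithStats` or `scipy.ndimage.label` on the mask array instead.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected True region of a 2-D boolean grid.

    Illustrative post-processing for fragmented tracking masks; swap in
    cv2/scipy connected components for real mask arrays.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = set()
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            comp, queue = set(), deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:  # BFS flood fill over one component
                y, x = queue.popleft()
                comp.add((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    return [[(y, x) in best for x in range(w)] for y in range(h)]
```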
1 vote
0 answers
87 views

Running DeepSeek-V3 inference without GPU (on CPU only)

I am trying to run DeepSeek-V3 model inference on a remote machine (over SSH). This machine does not have any GPU, but has many CPU cores. First method: I try to run the model inference using the ...
The_Average_Engineer
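A minimal sketch of CPU-only loading, assuming transformers and torch are installed. Note the memory caveat: DeepSeek-V3's native weights are on the order of hundreds of gigabytes, so on CPU a distilled or smaller checkpoint is far more realistic; the function below only illustrates the loading pattern.

```python
def load_model_cpu(model_path):
    """Sketch: force CPU placement instead of letting accelerate look
    for GPUs. model_path is whatever checkpoint actually fits in RAM."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch
    tok = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map="cpu",            # pin every layer to CPU
        torch_dtype=torch.bfloat16,  # halves memory vs float32 where bf16 is supported
        trust_remote_code=True,      # DeepSeek-V3 ships custom modeling code
    )
    return tok, model
```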
2 votes
1 answer
95 views

Load DeepSeek-V3 model from local repo

I want to run DeepSeek-V3 model inference using the Hugging Face Transformers library (>= v4.51.0). I read that you can do the following (download the model and run it) from ...
The_Average_Engineer
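A sketch of loading from a local directory, assuming the files were already fetched (e.g. with `huggingface_hub.snapshot_download` or a git clone) into `model_dir`. `local_files_only=True` makes `from_pretrained` fail fast instead of silently re-resolving against the Hub.

```python
def load_local(model_dir):
    """Sketch: point from_pretrained at a local snapshot directory
    containing config.json, the tokenizer files and the weight shards."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_dir, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_dir,
        local_files_only=True,   # never hit the network
        trust_remote_code=True,  # required for DeepSeek-V3's custom code
    )
    return tok, model
```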
0 votes
1 answer
41 views

Llama-2-7b model fails to load from Hugging Face even with Meta license access

I am trying to load this model, but it gives me the same error. How to fix this? from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "meta-llama/Llama-2-7b-hf" ...
orchestration sam
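For gated repos like Llama-2, license acceptance on the model page is per-account, and the access token passed to `from_pretrained` must belong to that same account. A sketch of the authenticated loading pattern:

```python
def load_llama2(hf_token):
    """Sketch: pass the token explicitly; alternatively run
    `huggingface-cli login` once and omit the token arguments."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model_id = "meta-llama/Llama-2-7b-hf"
    tok = AutoTokenizer.from_pretrained(model_id, token=hf_token)
    model = AutoModelForCausalLM.from_pretrained(model_id, token=hf_token)
    return tok, model
```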
1 vote
1 answer
81 views

FastAPI + Transformers + 4-bit Mistral: .to() is not supported for bitsandbytes 4-bit models error

I'm deploying a FastAPI backend using HuggingFace Transformers with the mistralai/Mistral-7B-Instruct-v0.1 model, quantized to 4-bit using BitsAndBytesConfig. I’m running this inside an NVIDIA GPU ...
Dalmouda
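With bitsandbytes-quantized models the device placement has to happen inside `from_pretrained` via `device_map`; calling `.to("cuda")` afterwards raises exactly the error in the title. A sketch, assuming transformers, bitsandbytes and a CUDA GPU are available:

```python
def load_4bit_mistral():
    """Sketch: device_map handles placement; do NOT call .to() on the
    returned model -- that is unsupported for 4-bit bitsandbytes models."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.1",
        quantization_config=bnb,
        device_map="auto",  # placement happens here, not via .to()
    )
    return model
```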
0 votes
0 answers
42 views

LoRA Adapter Loading Issue with Llama 3.1 8B - Missing Keys Warning

I'm having trouble loading my LoRA adapters for inference after fine-tuning Llama 3.1 8B. When I try to load the adapter files in a new session, I get a warning about missing adapter keys: /usr/local/...
Mohanad Hafez
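A "missing adapter keys" warning often means the key prefixes in the saved adapter don't match what PEFT expects at load time (for example, when the base model was wrapped differently during training than at inference). The helper below is illustrative only: it shows the kind of prefix remap that realigns a state dict; inspect your actual `adapter_model.safetensors` keys to choose real prefixes.

```python
def remap_adapter_keys(state_dict, old_prefix, new_prefix):
    """Illustrative: rewrite key prefixes so saved adapter keys line up
    with the names PeftModel expects. old_prefix/new_prefix are
    placeholders -- derive them from your real checkpoint's key names."""
    return {
        (new_prefix + k[len(old_prefix):] if k.startswith(old_prefix) else k): v
        for k, v in state_dict.items()
    }
```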
1 vote
0 answers
49 views

How to upload local LLM to private JFrog huggingface-ml repository?

I want to upload a local LLM to a private JFrog "huggingface-ml" repository. First, I installed the HF Python client: python3 -m pip install "huggingface_hub[cli]" With this ...
V. Pravi
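`huggingface_hub` can target a non-Hub endpoint, which is how Artifactory's huggingface-ml repositories are typically addressed. A sketch under heavy assumptions: the endpoint URL below is a made-up example, and whether uploads are accepted depends on the JFrog repository being a local (not remote/pull-through) repo; check your Artifactory instance's "Set Me Up" page for the real URL and token.

```python
def upload_to_jfrog(local_dir, repo_id, token):
    """Sketch: point HfApi at the Artifactory endpoint instead of
    huggingface.co. The endpoint URL here is a placeholder."""
    from huggingface_hub import HfApi
    api = HfApi(
        endpoint="https://mycompany.jfrog.io/artifactory/api/huggingfaceml/huggingface-ml",
        token=token,
    )
    # Uploads the whole model directory (weights, config, tokenizer files)
    api.upload_folder(folder_path=local_dir, repo_id=repo_id)
```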
0 votes
0 answers
33 views

Using TableTransformer in Standalone Mode Without Hugging Face Hub Access

I need help with using the Hugging Face transformers library, specifically with the TableTransformer model. Due to a network firewall, I cannot directly download models from the Hugging Face Hub (all ...
enzo
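Behind a firewall the usual approach is to copy the model files (config.json, preprocessor_config.json, the safetensors weights) onto the machine by other means, then force offline resolution so transformers never attempts a download. A sketch:

```python
import os

# Offline mode: resolve everything from local files, never contact the Hub
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

def load_table_transformer(local_dir):
    """Sketch: local_dir holds a manually transferred copy of the
    table-transformer snapshot; local_files_only makes failures explicit."""
    from transformers import AutoImageProcessor, TableTransformerForObjectDetection
    processor = AutoImageProcessor.from_pretrained(local_dir, local_files_only=True)
    model = TableTransformerForObjectDetection.from_pretrained(
        local_dir, local_files_only=True
    )
    return processor, model
```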
0 votes
0 answers
115 views

How do I use DeepSeek R1 Distill through HuggingFace Inference API?

I have been looking to use lightweight LLMs for a project that converts human language to SQL queries for a database. To do this, I am reading tutorials and the official docs on Hugging Face ...
franjefriten
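A sketch using `huggingface_hub`'s `InferenceClient` against the serverless Inference API. The distill checkpoint name below is an example, not a recommendation: pick any R1 distill that the API actually hosts for your account tier, and the prompt wording is likewise an assumption about the text-to-SQL task.

```python
def text_to_sql(question, hf_token):
    """Sketch: chat-style call to a hosted R1 distill; the model id is
    an example and may not be served for every account."""
    from huggingface_hub import InferenceClient
    client = InferenceClient(
        model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
        token=hf_token,
    )
    out = client.chat_completion(
        messages=[{"role": "user", "content": f"Write a SQL query: {question}"}],
        max_tokens=256,
    )
    return out.choices[0].message.content
```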
1 vote
1 answer
96 views

Why does my Llama 3.1 model act differently between AutoModelForCausalLM and LlamaForCausalLM?

I have one set of weights, one tokenizer, the same prompt, and identical generation parameters. Yet somehow, when I load the model using AutoModelForCausalLM, I get one output, and when I construct it ...
han mo
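One frequent cause of diverging outputs between loading paths is that `generate()` falls back to `generation_config` defaults (sampling flags, temperature, top_p) that one path picked up from the repo and the other did not. A sketch for making that drift visible, assuming both classes can load the same local weights:

```python
def compare_generation_defaults(model_path):
    """Sketch: load the same weights both ways and diff the generation
    configs; any mismatch here changes sampling behavior even with
    'identical' explicit generation parameters."""
    from transformers import AutoModelForCausalLM, LlamaForCausalLM
    a = AutoModelForCausalLM.from_pretrained(model_path)
    b = LlamaForCausalLM.from_pretrained(model_path)
    return a.generation_config.to_dict(), b.generation_config.to_dict()
```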
0 votes
0 answers
178 views

TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType. Huggingface model locally

I'm trying to run a language model locally, and I don't have much experience with Hugging Face. I created an account there, then a token with read/write access. I created a project, ran pip install ...
El Pandario
0 votes
0 answers
71 views

LangChain: 'dict' object has no attribute 'replace' when using Chroma retriever

I am working on a chatbot using LangChain, ChromaDB, and Hugging Face models. However, when I try to run my script, I get the following error: import os import dotenv from langchain.prompts ...
Saiyad Aamir
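This error usually means a dict such as `{"question": ...}` from a LangChain chain reached a Chroma call that expects a plain string, which then fails when Chroma calls `.replace()` on it. An illustrative normalizer; the key names checked below are assumptions about the chain's input schema:

```python
def to_query_text(chain_input):
    """Illustrative: unwrap common LangChain input dicts into the plain
    string that Chroma's similarity search expects."""
    if isinstance(chain_input, dict):
        for key in ("question", "query", "input"):  # assumed schema keys
            if key in chain_input:
                return chain_input[key]
        return next(iter(chain_input.values()))  # fall back to first value
    return chain_input
```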
0 votes
0 answers
27 views

Problems loading a pre-trained model with Hugging Face

When I ran a demo using Hugging Face's pre-trained model HumanVLM, I got a KeyError: 'llava'. The demo is as follows, and I have downloaded the pre-trained model locally in the ...
lumos
6 votes
2 answers
2k views

Why does HuggingFace-provided Deepseek code result in an 'Unknown quantization type' error?

I am using this code from Hugging Face. It is pasted directly from the Hugging Face website's page on DeepSeek and is supposed to be plug-and-play: from transformers import pipeline ...
Akshit Gulyan
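An "Unknown quantization type" error for DeepSeek checkpoints typically means the installed transformers predates support for that quantization scheme, so the usual fix is upgrading. A stdlib-only sketch of the kind of version gate one might add before loading; the minimum version used in the example is an assumption, not a documented threshold:

```python
def meets_min_version(installed, required):
    """Illustrative three-part semantic version compare, e.g. to check
    the installed transformers (importlib.metadata.version('transformers'))
    against an assumed minimum before attempting to load the model."""
    def parse(v):
        return tuple(int(part) for part in v.split(".")[:3])
    return parse(installed) >= parse(required)
```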
0 votes
1 answer
486 views

Checkpoint ValueError when downloading Hugging Face models

I am having trouble downloading deepseek_vl_v2 onto my computer. Here is the error in my terminal: ValueError: The checkpoint you are trying to load has model type deepseek_vl_v2 but Transformers does ...
θ_enthusiast
