All Questions
14 questions
0 votes, 0 answers, 115 views
How do I use DeepSeek R1 Distill through the HuggingFace Inference API?
I have been looking to use lightweight LLMs for a project that converts natural language into SQL queries for a database. To do this, I am reading tutorials and the official docs on HuggingFace ...
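A minimal sketch of calling a distilled R1 model through the hosted Inference API with huggingface_hub's InferenceClient; the model id and token below are assumptions, so substitute your own:

from huggingface_hub import InferenceClient

# Assumed model id; any DeepSeek R1 distill hosted on the Hub should work the same way.
client = InferenceClient(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    token="hf_xxx",  # your Hugging Face access token
)

# Ask the model to turn a natural-language request into SQL.
response = client.chat_completion(
    messages=[{"role": "user",
               "content": "Write a SQL query that returns the ten most recent orders."}],
    max_tokens=256,
)
print(response.choices[0].message.content)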
0 votes, 0 answers, 538 views
SSL Certificate Verification Error with Hugging Face Transformers CLI
I'm trying to download the TheBloke/falcon-40b-instruct-GPTQ model using the Hugging Face Transformers CLI in PowerShell on Windows 10, but I consistently encounter an SSL certificate error. It ...
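If the machine sits behind a proxy that re-signs TLS traffic, pointing Python's HTTP stack at the proxy's CA bundle is a common workaround; a hedged sketch using snapshot_download, where the bundle path is an assumption:

import os
from huggingface_hub import snapshot_download

# Assumed path to a corporate CA bundle; REQUESTS_CA_BUNDLE makes the underlying
# requests library trust it instead of failing certificate verification.
os.environ["REQUESTS_CA_BUNDLE"] = r"C:\certs\corporate-ca.pem"

snapshot_download(repo_id="TheBloke/falcon-40b-instruct-GPTQ")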
0 votes, 1 answer, 1k views
`repo_type` argument if needed
I'm trying to run a Python script that loads a model from Hugging Face.
In the terminal it gives an error:
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/...
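The HFValidationError usually means the string passed to from_pretrained is neither a valid repo id (namespace/name) nor an existing local directory; a small sketch contrasting the two, with names and paths as placeholders:

from transformers import AutoModel

# A Hub repo id must look like "name" or "namespace/name" -- no URLs, no backslashes.
model = AutoModel.from_pretrained("bert-base-uncased")

# A local checkout works too, but the directory must actually exist on disk;
# otherwise the string is validated as a repo id and rejected.
# model = AutoModel.from_pretrained("/path/to/local/checkpoint")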
2 votes, 2 answers, 2k views
Cannot load a gated model from Hugging Face despite having access and logging in
I am training a Llama-3.1-8B-Instruct model for a specific task.
I requested access to the Hugging Face repository and was granted it, as confirmed on the Hugging Face web dashboard.
I tried calling ...
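For a gated repo the token has to reach the loading call as well as the web login; a minimal sketch, assuming a token with read access to the repo:

from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_xxx")  # or run `huggingface-cli login` once in the same environment

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)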
0 votes, 0 answers, 42 views
LangChain HuggingFace Hub invocation issue
I wanted to create a ReAct agent in LangChain, but it never worked. When I invoked the LLM itself (not the agent), I found that its response starts by echoing the input and only then continues with the actual response
...
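The echoed prompt is typical of text-generation backends that return the full text (prompt plus continuation) by default; a sketch with a plain transformers pipeline, not the LangChain wrapper, showing the flag that controls this. The model id is only an example:

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# return_full_text=False drops the prompt from the output, which is what a
# ReAct-style output parser generally expects.
out = generator("Question: what is 2 + 2?\nThought:",
                max_new_tokens=32,
                return_full_text=False)
print(out[0]["generated_text"])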
1 vote, 0 answers, 413 views
autotrain.trainers.common:wrapper:92 - No GPU found. A GPU is needed for quantization
❌ ERROR | 2024-02-06 11:07:29 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
File "/app/src/autotrain/trainers/common.py"...
0 votes, 1 answer, 1k views
ChromaDB and HuggingFace cannot process large files
I am trying to process 1000+ page PDFs using Hugging Face embeddings and ChromaDB. Whenever I try to upload a large file, however, I get the error below. I don't know if ChromaDB can handle that big ...
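Chroma limits how many documents a single add() call can take, so very large PDFs are usually split into chunks and inserted in batches; a minimal sketch with sentence-transformers embeddings, where the chunking and batch size are assumptions:

import chromadb
from sentence_transformers import SentenceTransformer

# `pages` is assumed to be a list of text chunks already extracted from the PDF.
pages = ["...chunk 1...", "...chunk 2..."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection("pdf_chunks")

batch_size = 100  # stay well under Chroma's per-call limit
for start in range(0, len(pages), batch_size):
    batch = pages[start:start + batch_size]
    collection.add(
        ids=[f"chunk-{start + i}" for i in range(len(batch))],
        documents=batch,
        embeddings=embedder.encode(batch).tolist(),
    )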
0 votes, 3 answers, 3k views
How to perform inference with a Llava Llama model deployed to SageMaker from Huggingface?
I deployed a Llava Llama Huggingface model (https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3) to a SageMaker Domain + Endpoint by using the deployment card ...
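Once the endpoint is up, inference goes through the SageMaker runtime attached to the endpoint name; a hedged sketch with boto3, where the endpoint name is a placeholder and the payload shape is an assumption (LLaVA-style containers differ in how they expect images to be passed):

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Assumed endpoint name and payload format.
payload = {"inputs": "Describe the attached image.",
           "parameters": {"max_new_tokens": 128}}

response = runtime.invoke_endpoint(
    EndpointName="huggingface-llava-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))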
6 votes, 2 answers, 6k views
How to Merge Fine-tuned Adapter and Pretrained Model in Hugging Face Transformers and Push to Hub?
I have fine-tuned the Llama-2 model following the llama-recipes repository's tutorial. Currently, I have the pretrained model and fine-tuned adapter stored in two separate directories as follows:
...
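With PEFT the usual pattern is to load the base model, attach the adapter, call merge_and_unload(), and push the merged weights; a minimal sketch with the directory and repo names as placeholders:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_dir = "path/to/pretrained-llama-2"    # placeholder
adapter_dir = "path/to/finetuned-adapter"  # placeholder

base = AutoModelForCausalLM.from_pretrained(base_dir)
model = PeftModel.from_pretrained(base, adapter_dir)

merged = model.merge_and_unload()  # folds the LoRA weights into the base model

merged.push_to_hub("your-username/llama-2-merged")
AutoTokenizer.from_pretrained(base_dir).push_to_hub("your-username/llama-2-merged")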
8 votes, 4 answers, 29k views
How to load a huggingface dataset from local path?
Take a simple example from this website, https://huggingface.co/datasets/Dahoas/rm-static:
if I want to load this dataset online, I just use
from datasets import load_dataset
dataset = ...
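If the data is already on disk (downloaded manually or saved with save_to_disk), the same dataset can be loaded without touching the Hub; a sketch covering the two common cases, with local paths as placeholders:

from datasets import load_dataset, load_from_disk

# Case 1: the dataset was previously saved with dataset.save_to_disk("local/rm-static").
dataset = load_from_disk("local/rm-static")

# Case 2: only the raw data files are available (e.g. the parquet files from the repo).
dataset = load_dataset(
    "parquet",
    data_files={"train": "local/rm-static/data/train-*.parquet"},
)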
0 votes, 1 answer, 2k views
How does one create a PyTorch DataLoader with a custom Hugging Face dataset without errors?
Currently my custom dataset gives None indices in the DataLoader, but NOT in the plain dataset; it fails only when I wrap it in a PyTorch DataLoader.
The code is in Colab, but I will put it here in case Colab ...
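None values in the DataLoader but not in the raw dataset usually point at the collation step; setting the dataset's format to torch for the columns you need is the standard fix. A sketch with toy data and column names as assumptions:

from datasets import Dataset
from torch.utils.data import DataLoader

# Assumed toy dataset; replace with your own columns.
ds = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]], "labels": [0, 1]})

# Only the formatted columns are returned as tensors; unformatted ones are dropped,
# which keeps the default collate function from choking on them.
ds.set_format(type="torch", columns=["input_ids", "labels"])

loader = DataLoader(ds, batch_size=2)
for batch in loader:
    print(batch["input_ids"].shape, batch["labels"])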
1 vote, 1 answer, 1k views
Is there a way to search a Hugging Face repository for a specific filename?
I'd like to search a huggingface repository for a specific filename, without having to clone it first as it is a rather large repo with thousands of files.
I couldn't find a way to do it with the web ...
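The Hub API can list a repo's files without cloning anything; a minimal sketch with huggingface_hub, where the repo id and filename pattern are placeholders:

from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files("namespace/large-repo")  # placeholder repo id

matches = [f for f in files if f.endswith(".safetensors")]
print(matches)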
6 votes, 1 answer, 1k views
Indefinite wait while using Langchain and HuggingFaceHub in python
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
import os
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'token'
# initialize HF LLM
flan_t5 = HuggingFaceHub(
repo_id="google/flan-...
5 votes, 2 answers, 2k views
Using a custom trained huggingface tokenizer
I've trained a custom tokenizer on a custom dataset using the code from the documentation. Is there a way for me to add this tokenizer to the Hub and use it like the other tokenizers, by ...
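A tokenizer trained with the tokenizers library can be wrapped in PreTrainedTokenizerFast and pushed to the Hub, after which from_pretrained works like any other tokenizer; a minimal sketch with the file, special tokens, and repo name as placeholders:

from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

# Load the tokenizer you trained and saved with the `tokenizers` library.
raw_tokenizer = Tokenizer.from_file("my-tokenizer.json")

wrapped = PreTrainedTokenizerFast(
    tokenizer_object=raw_tokenizer,
    unk_token="[UNK]",  # adjust special tokens to match your training setup
)

wrapped.push_to_hub("your-username/my-custom-tokenizer")

# Later, anywhere:
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("your-username/my-custom-tokenizer")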