-3 votes
0 answers
26 views

T2I Adapter with SDXL produces only black images when conditioning on COCO-WholeBody skeletons (no error, loss decreasing) [closed]

I'm trying to fine-tune a T2I Adapter (full_adapter_xl) on COCO-WholeBody skeleton images, using Hugging Face Diffusers 0.33.0.dev0 + Stable Diffusion XL (stabilityai/stable-diffusion-xl-base-1.0). ...
Ting Yen Chang
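
Black images with no error from SDXL in half precision are often a VAE fp16 overflow rather than an adapter problem. A minimal inference sanity check for a trained full_adapter_xl checkpoint, sketched under the assumption that "./my_adapter" and "skeleton.png" are placeholders for the fine-tuned weights and a rendered COCO-WholeBody skeleton:

    import torch
    from diffusers import AutoencoderKL, StableDiffusionXLAdapterPipeline, T2IAdapter
    from diffusers.utils import load_image

    # "./my_adapter" is a placeholder for the fine-tuned full_adapter_xl checkpoint.
    adapter = T2IAdapter.from_pretrained("./my_adapter", torch_dtype=torch.float16)
    # fp16-safe VAE: the stock SDXL VAE can produce NaNs (black images) in half precision.
    vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix",
                                        torch_dtype=torch.float16)
    pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        adapter=adapter,
        vae=vae,
        torch_dtype=torch.float16,
    ).to("cuda")

    skeleton = load_image("skeleton.png")  # rendered COCO-WholeBody skeleton
    image = pipe("a person dancing", image=skeleton,
                 adapter_conditioning_scale=0.8).images[0]
    image.save("check.png")
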
0 votes
0 answers
81 views

IP-adapter plus face model not working as expected

I came from these two links: https://huggingface.co/h94/IP-Adapter-FaceID and https://stable-diffusion-art.com/consistent-face/ Both mention that I can preserve face identity with the ControlNet model. So I ...
daisy • 23.7k
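
This is not the FaceID variant, but a minimal sketch of how the plus-face IP-Adapter is wired into a Stable Diffusion 1.5 pipeline with stock diffusers; the base checkpoint, scale, and image path are assumptions, not values from the question:

    import torch
    from diffusers import StableDiffusionPipeline
    from diffusers.utils import load_image

    # Any SD 1.5 checkpoint works here; this one is just an example.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                         weight_name="ip-adapter-plus-face_sd15.bin")
    pipe.set_ip_adapter_scale(0.6)   # how strongly the reference face is applied

    face = load_image("face.jpg")    # cropped photo of the target face
    image = pipe("a portrait photo of a woman in a garden",
                 ip_adapter_image=face, num_inference_steps=30).images[0]
    image.save("out.png")
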
3 votes
1 answer
10k views

ImportError: cannot import name 'cached_download' from 'huggingface_hub'

With huggingface_hub==0.27.1 and diffusers==0.28.0 I am getting this error: Traceback (most recent call last): File "/data/om/Lotus/infer.py", line 11, in <module> from diffusers.utils ...
Om Rastogi • 1,037
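
cached_download was removed from recent huggingface_hub releases, so a diffusers version that still imports it fails at import time; aligning the two packages (upgrading diffusers, or pinning huggingface_hub to a release that still ships cached_download) is the usual fix. For code you control, hf_hub_download is the current single-file download API, sketched here with an arbitrary repo and filename:

    from huggingface_hub import hf_hub_download

    # Replacement for the removed cached_download helper: fetch one file
    # from a Hub repo into the local cache and return its path.
    path = hf_hub_download(repo_id="stabilityai/stable-diffusion-xl-base-1.0",
                           filename="model_index.json")
    print(path)
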
0 votes
1 answer
1k views

ModuleNotFoundError: No module named 'diffusers.models.unet_2d_blocks'

When I use diffusers in the https://github.com/alvinliu0/HumanGaussian project, I get this error: Traceback (most recent call last): File "launch.py", line 239, in <module> ...
x k G • 1
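
Newer diffusers releases moved the UNet block definitions into a unets subpackage, which breaks third-party code importing diffusers.models.unet_2d_blocks. If pinning the diffusers version the project expects is not an option, a fallback import along these lines may help (the class name is just one example from that module):

    # Newer diffusers: diffusers.models.unets.unet_2d_blocks
    # Older diffusers: diffusers.models.unet_2d_blocks
    try:
        from diffusers.models.unets.unet_2d_blocks import CrossAttnDownBlock2D
    except ImportError:
        from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D

    print(CrossAttnDownBlock2D)
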
0 votes
0 answers
125 views

Diffusers pipeline: InstantID with IP-Adapter

I want to use an implementation of InstantID with IP-Adapter using the Diffusers library. So far I have: import diffusers from diffusers.utils import load_image from diffusers.models import ControlNetModel ...
Felox • 502
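
Stock diffusers does not ship an InstantID pipeline (the InstantX project provides its own), but the general pattern of combining a ControlNet with an IP-Adapter on SDXL looks roughly like the sketch below; the checkpoint ids, subfolders, and image files are assumptions, and real InstantID additionally feeds face embeddings through its own adapter:

    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "InstantX/InstantID", subfolder="ControlNetModel", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Generic SDXL IP-Adapter; InstantID's own face adapter is loaded differently.
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                         weight_name="ip-adapter_sdxl.bin")
    pipe.set_ip_adapter_scale(0.6)

    face = load_image("face.jpg")            # identity reference
    keypoints = load_image("keypoints.png")  # ControlNet conditioning image
    image = pipe("a cinematic portrait", image=keypoints,
                 ip_adapter_image=face).images[0]
    image.save("out.png")
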
0 votes
0 answers
73 views

Differences in the number of ResNet blocks in up blocks and the number of channels for the UNet2D model in diffusers

I have been reading about U-Nets and Stable Diffusion and want to train one. I understand the original U-Net architecture and how its channels, height and width evolve over the down blocks and up ...
Krishna Dave
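
In diffusers, block_out_channels sets the channel width at each resolution level and layers_per_block sets the ResNet count per down block, while each up block gets layers_per_block + 1 ResNets because it also consumes one extra skip connection from the encoder side. A small sketch with arbitrary sizes that can be instantiated and inspected:

    from diffusers import UNet2DModel

    # Down blocks: layers_per_block ResNets each, at block_out_channels widths.
    # Up blocks: layers_per_block + 1 ResNets each, since every up block also
    # consumes one skip connection from the matching down block.
    model = UNet2DModel(
        sample_size=64,
        in_channels=3,
        out_channels=3,
        layers_per_block=2,
        block_out_channels=(128, 256, 512, 512),
        down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D", "DownBlock2D"),
        up_block_types=("UpBlock2D", "AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
    )
    print(model.down_blocks[0])  # 2 ResNet blocks
    print(model.up_blocks[-1])   # 3 ResNet blocks
    print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
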
2 votes
1 answer
288 views

Huge memory consumption with SD3.5-medium

I have a g4dn.xlarge AWS GPU instance with 16GB memory + 48GB swap and a Tesla T4 GPU with 16GB VRAM. According to the Stability blog, that should be sufficient to run the SD3.5 Medium model. ...
daisy • 23.7k
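
A sketch of the usual mitigation on a 16 GB card: fp16 weights plus model-level CPU offload, so only the component currently running sits on the GPU (the T5 text encoder is the largest single piece); everything except the model id is illustrative:

    import torch
    from diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.float16
    )
    # Keep only the active sub-model (text encoders, transformer, VAE) on the T4;
    # the rest stays in system RAM.
    pipe.enable_model_cpu_offload()

    image = pipe("a photo of a red panda", num_inference_steps=28,
                 guidance_scale=4.5).images[0]
    image.save("out.png")
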
0 votes
0 answers
314 views

Stable Diffusion 3.5 Turbo extremely slow using diffusers library

I am running the example code directly from the Hugging Face Stable Diffusion 3.5 page (link) and getting extremely slow run times, averaging 90 seconds per iteration. For reference, when I use Stable ...
ProfessionalFrog
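
90 seconds per iteration usually points to the pipeline running on the CPU, or to VRAM spilling into system memory, rather than anything specific to the 3.5 Turbo weights. A quick sanity check, assuming the large-turbo checkpoint from the model page:

    import torch
    from diffusers import StableDiffusion3Pipeline

    print(torch.cuda.is_available())   # False would explain the slow path

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large-turbo", torch_dtype=torch.bfloat16
    ).to("cuda")
    print(pipe.device)                 # expect cuda:0

    # The turbo model is designed for very few steps and no classifier-free guidance.
    image = pipe("a capybara wearing a hat", num_inference_steps=4,
                 guidance_scale=0.0).images[0]
    image.save("out.png")
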
1 vote
1 answer
273 views

Cannot merge LoRA weights back into the Flux Dev base model

I have a Flux-Dev base model which has been trained with the LoRA technique using the SimpleTuner framework (https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md). The ...
user1875136
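
One route that works with stock diffusers, shown as a sketch: load the LoRA onto FluxPipeline, fuse it into the base weights, then save the merged pipeline. The LoRA directory and weight filename are placeholders for whatever SimpleTuner wrote out:

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )

    # Placeholder path: the adapter directory/file produced by the SimpleTuner run.
    pipe.load_lora_weights("./my_flux_lora",
                           weight_name="pytorch_lora_weights.safetensors")
    pipe.fuse_lora(lora_scale=1.0)   # bake the LoRA into the base weights
    pipe.unload_lora_weights()       # drop the now-redundant adapter layers

    pipe.save_pretrained("./flux-dev-merged")
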
0 votes
0 answers
64 views

Why does the UNet forward pass take the whole GPU memory in every denoising loop iteration?

I'm trying to write some toy example code for Stable Diffusion denoising without the diffusers pipeline wrapper. In the diffusers examples (https://huggingface.co/docs/diffusers/stable_diffusion) we just use the pipe style to ...
flankechen • 1,255
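
When memory grows on every loop iteration, the usual culprit is calling the UNet with autograd enabled, so activations from every step are kept for a backward pass that never happens. A minimal sketch of the pattern, with random tensors standing in for real latents and text embeddings:

    import torch
    from diffusers import UNet2DConditionModel

    unet = UNet2DConditionModel.from_pretrained(
        "CompVis/stable-diffusion-v1-4", subfolder="unet", torch_dtype=torch.float16
    ).to("cuda")
    unet.eval()

    latents = torch.randn(1, 4, 64, 64, dtype=torch.float16, device="cuda")
    text_emb = torch.randn(1, 77, 768, dtype=torch.float16, device="cuda")

    # inference_mode (or no_grad) stops autograd from caching activations,
    # so allocated memory stays flat across denoising steps.
    with torch.inference_mode():
        for t in range(999, 0, -50):
            noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
            print(t, torch.cuda.memory_allocated() // 2**20, "MiB")
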
0 votes
0 answers
69 views

Run a pretrained Image-2-Image model using diffusers without CUDA?

I need to run a pretrained SD model locally for a small proof-of-concept project generating images out of other images. I want to use the Python diffusers library for this, but if there are better ...
willaayy
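
Diffusers does not require CUDA; the same img2img pipeline runs on the CPU, just much more slowly. A minimal sketch with an SD 1.x checkpoint and a placeholder input image:

    from diffusers import StableDiffusionImg2ImgPipeline
    from diffusers.utils import load_image

    # Default fp32 weights on CPU; expect minutes per image rather than seconds.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("cpu")

    init = load_image("input.png").resize((512, 512))
    out = pipe("a watercolor painting of the same scene",
               image=init, strength=0.6, num_inference_steps=25).images[0]
    out.save("output.png")
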
0 votes
1 answer
669 views

Flux.1 Schnell image generator issue: GPU resources getting exhausted after 1 prompt

So, I tried to train a prompt-based image generation model using FLUX.1-schnell. I used Lightning AI Studio (an alternative to Google Colab), which gave me access to an L40 GPU that came with 48gb ...
ACHINTYA GUPTA
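
One pattern that helps when the second prompt runs out of memory: keep only the active component on the GPU and release cached allocator blocks between generations. A sketch using the schnell settings from the model card; the prompts are arbitrary:

    import gc
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()   # only the active sub-model sits on the GPU

    for i, prompt in enumerate(["a red fox in the snow", "a lighthouse at dawn"]):
        image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0,
                     max_sequence_length=256).images[0]
        image.save(f"flux_{i}.png")
        gc.collect()
        torch.cuda.empty_cache()      # return cached blocks between prompts
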
0 votes
0 answers
392 views

Hugging Face diffusers inference for Flux in fp16

The Hugging Face Flux documentation links to this comment describing how to run inference in fp16: https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516 It says: FP16 ...
memical • 2,533
3 votes
2 answers
1k views

Issue loading FluxPipeline components

import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_pretrained('C:\\Python\\Projects\\test1\\flux1dev', torch_dtype=torch.bfloat16) pipe.enable_sequential_cpu_offload() prompt = ...
Donald Moore
1 vote
1 answer
239 views

Shapes mismatch while training diffusers/UNet2DConditionModel

I am trying to train diffusers/UNet2DConditionModel from scratch. Currently I get an error during the UNet forward pass: mat1 and mat2 shapes cannot be multiplied (288x512 and 1280x512). I noticed that mat1's first ...
u1ug • 11
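
The (288x512 and 1280x512) pair is consistent with cross-attention key/value projections that expect 1280-dimensional encoder_hidden_states (the UNet2DConditionModel default) while the text embeddings being passed in are 512-dimensional; cross_attention_dim has to match the embedding width. A sketch with made-up sizes that forwards cleanly:

    import torch
    from diffusers import UNet2DConditionModel

    unet = UNet2DConditionModel(
        sample_size=32,
        in_channels=4,
        out_channels=4,
        layers_per_block=2,
        block_out_channels=(128, 256, 512),
        down_block_types=("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D"),
        up_block_types=("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"),
        cross_attention_dim=512,   # must equal the width of your text embeddings
    )

    latents = torch.randn(2, 4, 32, 32)
    text_emb = torch.randn(2, 77, 512)   # (batch, seq_len, cross_attention_dim)
    out = unet(latents, 10, encoder_hidden_states=text_emb).sample
    print(out.shape)                     # torch.Size([2, 4, 32, 32])
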
