All Questions

Tagged with [diffusers]
0 votes · 0 answers · 314 views

Stable Diffusion 3.5 Turbo extremely slow using diffusers library

Running the example code directly from the Hugging Face Stable Diffusion 3.5 model page, I am getting extremely slow run times, averaging 90 seconds per iteration. For reference, when I use Stable ...
asked by ProfessionalFrog
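
90 seconds per iteration usually means the model never reached the GPU. A minimal sketch of the usual setup, assuming the pipeline was left on CPU in float32; the model id and step count follow the SD 3.5 Turbo model card:

```python
# Slow iterations most often mean the pipeline runs on CPU in float32.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-turbo",
    torch_dtype=torch.bfloat16,  # half precision; fp32 is far slower
)
pipe = pipe.to("cuda")  # without this, inference silently falls back to CPU

# Turbo is distilled for very few steps and no classifier-free guidance.
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("sd35_turbo.png")
```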
0 votes · 0 answers · 64 views

Why does the UNet forward pass take the whole GPU memory in every denoising loop?

I am trying to write some toy example code for Stable Diffusion denoising without the diffusers pipeline. In the diffusers examples (https://huggingface.co/docs/diffusers/stable_diffusion) we just use the pipe style to ...
asked by flankechen · 1,255
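
Memory climbing on every loop iteration typically means autograd is recording each UNet call. A minimal sketch of a bare denoising loop with the graph disabled; the checkpoint and random text embedding are stand-ins:

```python
import torch
from diffusers import UNet2DConditionModel, DDIMScheduler

repo = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(
    repo, subfolder="unet", torch_dtype=torch.float16).to("cuda")
scheduler = DDIMScheduler.from_pretrained(repo, subfolder="scheduler")
scheduler.set_timesteps(50)

latents = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)
text_emb = torch.randn(1, 77, 768, device="cuda",
                       dtype=torch.float16)  # stand-in for the CLIP output

# Without no_grad(), every forward keeps its activation graph alive and
# memory grows step after step until the GPU is full.
with torch.no_grad():
    for t in scheduler.timesteps:
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
```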
0 votes · 1 answer · 669 views

Flux.1 Schnell image generator issue: GPU resources exhausted after one prompt

So, I tried to train a prompt-based image generation model using FLUX.1-schnell. I used Lightning AI Studio (an alternative to Google Colab), which gave me access to an L40 GPU with 48 GB ...
asked by ACHINTYA GUPTA
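
A minimal sketch of one common mitigation, assuming leftover activations and cached blocks are what exhausts the card between prompts; the prompts and file names are placeholders:

```python
import gc
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU

for i, prompt in enumerate(["a red fox in the snow", "a neon city street"]):
    image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
    image.save(f"flux_{i}.png")
    gc.collect()
    torch.cuda.empty_cache()  # release cached blocks before the next prompt
```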
3 votes · 2 answers · 1k views

Issue loading FluxPipeline components

import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_pretrained('C:\\Python\\Projects\\test1\\flux1dev', torch_dtype=torch.bfloat16) pipe.enable_sequential_cpu_offload() prompt = ...
asked by Donald Moore
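
A completed, runnable version of the snippet above, with an assumed prompt and output path; enable_sequential_cpu_offload trades speed for a much smaller VRAM footprint:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "C:\\Python\\Projects\\test1\\flux1dev",  # local FLUX.1-dev checkout
    torch_dtype=torch.bfloat16,
)
pipe.enable_sequential_cpu_offload()  # streams weights to GPU layer by layer

prompt = "a cat holding a sign that says hello world"  # placeholder prompt
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("flux_dev.png")
```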
1 vote · 1 answer · 239 views

Shapes mismatch while training diffusers/UNet2DConditionModel

I am trying to train diffusers/UNet2DConditionModel from scratch. Currently I get an error during the UNet forward pass: mat1 and mat2 shapes cannot be multiplied (288x512 and 1280x512). I noticed that mat1's first ...
asked by u1ug · 11
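
1280 is the default cross_attention_dim of UNet2DConditionModel, so a (288x512 and 1280x512) failure usually means 512-wide text embeddings were fed to a UNet built for 1280. A minimal sketch with the two widths made to agree; all sizes are illustrative:

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    cross_attention_dim=512,  # must equal the text-embedding width below
)

latents = torch.randn(2, 4, 32, 32)
timesteps = torch.randint(0, 1000, (2,))
text_emb = torch.randn(2, 77, 512)  # last dim 512 == cross_attention_dim

out = unet(latents, timesteps, encoder_hidden_states=text_emb).sample
print(out.shape)  # torch.Size([2, 4, 32, 32])
```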
0 votes · 0 answers · 38 views

How to get the gradient of the input image of StableDiffusionXLControlNetPipeline

I have another model that generates the input image for StableDiffusionXLControlNetPipeline, and I use a loss based on the output of StableDiffusionXLControlNetPipeline. I want my model to get the ...
asked by OoOoO · 1
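
The pipeline's __call__ runs under torch.no_grad(), so no gradient can reach the conditioning image through pipe(...). A sketch of one workaround, calling the controlnet and unet components directly; every tensor below is a random placeholder standing in for real embeddings:

```python
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float32)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, torch_dtype=torch.float32)

cond_image = torch.rand(1, 3, 1024, 1024, requires_grad=True)  # your model's output
latents = torch.randn(1, 4, 128, 128)
prompt_embeds = torch.randn(1, 77, 2048)
added = {"text_embeds": torch.randn(1, 1280), "time_ids": torch.randn(1, 6)}
t = torch.tensor(500)

down_res, mid_res = pipe.controlnet(
    latents, t, encoder_hidden_states=prompt_embeds,
    controlnet_cond=cond_image, conditioning_scale=1.0,
    added_cond_kwargs=added, return_dict=False)
noise_pred = pipe.unet(
    latents, t, encoder_hidden_states=prompt_embeds,
    down_block_additional_residuals=down_res,
    mid_block_additional_residual=mid_res,
    added_cond_kwargs=added).sample

noise_pred.pow(2).mean().backward()  # placeholder loss
print(cond_image.grad is not None)   # True: the gradient reaches the image
```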
1 vote · 0 answers · 83 views

GPU out of memory using Hugging Face

PyTorch is throwing a GPU out-of-memory error. This is the code: from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler from diffusers.utils import load_image ...
asked by sreerag m
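
A minimal sketch of the usual memory savers for this pipeline; the checkpoints are the common public ones and may differ from the asker's:

```python
import torch
from diffusers import (StableDiffusionControlNetPipeline, ControlNetModel,
                       UniPCMultistepScheduler)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16)  # fp16 halves the weights
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

pipe.enable_model_cpu_offload()   # keep only the active module on the GPU
pipe.enable_attention_slicing()   # compute attention in smaller chunks
pipe.enable_vae_tiling()          # decode large images tile by tile
```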
0 votes · 1 answer · 221 views

stabilityai/stable-cascade takes 7+ hours to generate an image

I am using this model: https://huggingface.co/stabilityai/stable-cascade from diffusers import StableCascadeCombinedPipeline print("LOADING MODEL") pipe = StableCascadeCombinedPipeline....
asked by x89 · 3,500
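
Hours per image almost always means both stages are running on CPU. A minimal sketch following the model card's recommended dtype; the prompt and step counts are illustrative:

```python
import torch
from diffusers import StableCascadeCombinedPipeline

pipe = StableCascadeCombinedPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # left on CPU, prior + decoder can take hours per image

image = pipe(
    "an astronaut riding a horse",
    prior_num_inference_steps=20,  # stage C (prior)
    num_inference_steps=10,        # stage B (decoder)
).images[0]
image.save("cascade.png")
```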
2 votes · 0 answers · 383 views

How to use batch prediction with the diffusers StableDiffusionXLImg2ImgPipeline

I'm currently exploring the Stable Diffusion image-to-image pipeline within Hugging Face. My goal is to generate images similar to the ones I have stored in a folder. Currently, I'm using the following ...
asked by Adarsh Wase · 1,900
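
Recent diffusers releases accept equal-length lists of prompts and init images, so batching is mostly a matter of passing lists. A minimal sketch under that assumption; the folder path, prompt, and strength are placeholders:

```python
import glob
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16).to("cuda")

paths = sorted(glob.glob("images/*.png"))[:4]  # one batch of four images
init_images = [load_image(p).resize((1024, 1024)) for p in paths]
prompts = ["a similar photo, high quality"] * len(init_images)

out = pipe(prompt=prompts, image=init_images, strength=0.6).images
for path, img in zip(paths, out):
    img.save(path.replace(".png", "_v2.png"))
```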
-2 votes · 1 answer · 714 views

I use Diffusers to train a LoRA. The training images are photos of me, but the resulting images don't look like me

Here is my training code. from accelerate.utils import write_basic_config write_basic_config() import os os.environ["MODEL_NAME"] = "runwayml/stable-diffusion-v1-5" os.environ["...
asked by Han Pengbo · 1,436
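
A sketch continuing the excerpt's setup with the official DreamBooth LoRA script; a rare identifier token in --instance_prompt ("sks") is the usual fix when outputs don't resemble the subject, and the paths and step counts are placeholders:

```python
from accelerate.utils import write_basic_config
write_basic_config()

import os
os.environ["MODEL_NAME"] = "runwayml/stable-diffusion-v1-5"
os.environ["INSTANCE_DIR"] = "./my_photos"  # 10-20 sharp photos of one subject
os.environ["OUTPUT_DIR"] = "./lora_out"

# The identifier in --instance_prompt must be reused verbatim at inference,
# e.g. "a photo of sks person on a beach".
os.system(
    "accelerate launch train_dreambooth_lora.py "
    "--pretrained_model_name_or_path=$MODEL_NAME "
    "--instance_data_dir=$INSTANCE_DIR "
    "--output_dir=$OUTPUT_DIR "
    '--instance_prompt="a photo of sks person" '
    "--resolution=512 --train_batch_size=1 "
    "--learning_rate=1e-4 --max_train_steps=800"
)
```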
1 vote · 0 answers · 452 views

How do VAE and UNet sample sizes work in HF Diffusers?

Does anyone know how sample size works in SD's VAE and UNet? All I know is that SD v1.5 was trained at 512*512, so it generates 512*512 most reliably. But when I set the pipeline to 384*384 or ...
asked by MAPLE LEAF
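
sample_size is only the default resolution the model was trained at, not a hard limit: the VAE downsamples by 8x, so the SD v1.5 UNet's sample_size of 64 corresponds to 512x512 pixels, and other multiples of 8 work with varying quality. A minimal sketch that inspects both values and overrides the default:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

print(pipe.unet.config.sample_size)  # 64  (latent side; 64 * 8 = 512)
print(pipe.vae_scale_factor)         # 8   (VAE spatial downsampling factor)

# Overriding the default: latents become 48x48 for a 384x384 image.
image = pipe("a lighthouse at dusk", height=384, width=384).images[0]
image.save("384.png")
```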