
Is it the model that made it really fast?



Yes, but it's not a change to the model architecture, so you can actually subtract out the base SDXL model, merge the difference into any existing SDXL checkpoint, and get a Turbo version of that checkpoint.

(In ComfyUI, this just takes loading all three checkpoints -- turbo, base, and the target one -- then using a ModelMergeSubtract node to subtract base from turbo, giving the "turbocharger" alone, and a ModelMergeAdd node to add the turbocharger to the target checkpoint.)
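The arithmetic behind those two nodes can be sketched in a few lines. This is a toy illustration, not ComfyUI's actual implementation: the function names are made up, and plain dicts of floats stand in for real checkpoint tensors.

```python
# Hedged sketch of the subtract/add merge described above.
# Real checkpoints are dicts of large tensors; toy floats stand in here.

def subtract_models(a, b):
    """Per-weight difference a - b (what a ModelMergeSubtract node computes)."""
    return {k: a[k] - b[k] for k in a}

def add_models(a, b):
    """Per-weight sum a + b (what a ModelMergeAdd node computes)."""
    return {k: a[k] + b[k] for k in a}

# Toy "checkpoints" sharing the same weight keys (values chosen to be
# exactly representable in binary floating point).
base   = {"w1": 0.25, "w2": -0.5}
turbo  = {"w1": 0.75, "w2": 0.0}
target = {"w1": 0.5,  "w2": 1.0}

turbocharger = subtract_models(turbo, base)  # the Turbo delta alone
merged = add_models(target, turbocharger)    # Turbo version of target
print(merged)  # {'w1': 1.0, 'w2': 1.5}
```

Because the delta captures only what Turbo training changed relative to base, adding it to a fine-tuned checkpoint transfers the speedup without discarding the fine-tune's style, assuming the checkpoints share the base SDXL architecture.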


So basically like what they do with the inpainting models. We need a Juggernaut XL Turbo.


Yes, SDXL Turbo can do inference in a single step.


Technically, any SD model can do inference in a single step; SDXL Turbo is just an improvement in quality at a small number of steps (per the comparisons in the announcement, at 4 steps it is a little better than SDXL base at 50 steps).

EDIT: When I say "just", I'm not saying this isn't super impressive -- only that the change is in the quality of low-step-count generations, not in making them possible.





