Yes, but it's not a change to the model architecture, so you can actually subtract out the base SDXL model, merge the difference into any existing SDXL checkpoint, and get a Turbo version of that checkpoint.
(In ComfyUI, this just means loading all three checkpoints -- turbo, base, and the target one -- then using a ModelMergeSubtract node to subtract base from turbo, giving the "turbocharger" alone, and a ModelMergeAdd node to add the turbocharger to the target checkpoint.)
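The arithmetic behind those two nodes is just elementwise subtraction and addition over the checkpoints' weights. A minimal sketch, using plain dicts of floats standing in for real state dicts of tensors (the function names and toy values are illustrative, not ComfyUI's actual internals):

```python
# Sketch of the "turbocharger" merge: (turbo - base) + target.
# Real checkpoints map parameter names to tensors; floats keep this self-contained.

def merge_subtract(a, b):
    # Like ModelMergeSubtract: a - b, key by key.
    return {k: a[k] - b[k] for k in a}

def merge_add(a, b):
    # Like ModelMergeAdd: a + b, key by key.
    return {k: a[k] + b[k] for k in a}

# Hypothetical toy weights for the three checkpoints.
turbo  = {"w": 1.5, "b": 0.25}
base   = {"w": 1.0, "b": 0.0}
target = {"w": 1.2, "b": 0.5}

turbocharger = merge_subtract(turbo, base)   # the Turbo "delta" alone
merged = merge_add(target, turbocharger)     # target checkpoint + delta

print(merged)
```

The same idea underlies "add difference" merging in other SD tools: as long as the two models share an architecture, the weight delta can be lifted from one checkpoint and applied to another.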
Technically, any SD model can do inference in a single step; SDXL Turbo is just an improvement in quality at a small number of steps (per the comparisons in the announcement, at 4 steps it is slightly better than SDXL base at 50 steps).
EDIT: When I say "just", I'm not saying this isn't super impressive, just that the change is in the quality of low-step-count generations, not in making them possible.