I'm curious how this works in practice. I have a laptop with AMD integrated + dedicated graphics, and AMD's dual-graphics implementation resulted in herky-jerky framerates: it seemed to alternate between the GPUs, so frames rendered on the integrated one took slightly longer than frames rendered on the dedicated one. I ended up telling the driver not to do that and to just use the dedicated GPU (splitting the work didn't yield any observable improvement anyway).
It's a DirectX 12 feature that software developers have to opt in to, and it's up to the developer to decide how to spread the workload across the GPUs, but they can now program against any and every GPU on a device regardless of manufacturer. (The fun thing here will be seeing how software handles hot drops as GPUs get detached/reattached with the Surface Book.)