No. In the latter case the "upstream source code" is the key, and the
compiler is the functional transformation under scrutiny. Think of a
triangle where we want two fixed (trustworthy) points to ascertain
the third. Here our two trusted points are a repository (ideally
multiple repositories) of source code, and a deterministic executable
from someone who has already compiled it using a known-good compiler.
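To make the triangle concrete, here is a minimal Python sketch of checking that third point. The paths are hypothetical: local_build/hello was produced from the trusted source by the compiler under scrutiny, and reference_build/hello is the trusted deterministic executable built with a known-good compiler.

    import hashlib

    def sha256(path: str) -> str:
        """Hash a file in chunks so large binaries don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256("local_build/hello")          # built with the compiler under test
    reference = sha256("reference_build/hello")  # built with a known-good compiler

    # With the source and the reference binary both trusted, a mismatch
    # here implicates the compiler (or a non-deterministic build step).
    print("compiler OK" if local == reference else "MISMATCH: investigate compiler")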
It really limits the value of a stolen or leaked key. Similarly, it limits the amount of trust you need to place in the key holder.
Without deterministic builds, if the NSA stole a Debian key (or coerced a packager), they could put a backdoor in a popular Debian package with barely any chance of getting caught.
Deterministic builds make this much easier to detect, which makes such a stolen or coerced key much less valuable. That in turn makes the attack much less likely to be attempted, and likely stops anyone who used to try it.
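A toy Python illustration of why the stolen key loses value: a valid signature alone no longer settles the question, because anyone can rebuild from source and compare. The file names and the rebuild directory are made up.

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Step 1: the signature checks out -- but this only proves possession
    # of the key, not that the binary matches the published source.
    subprocess.run(["gpg", "--verify", "pkg.deb.sig", "pkg.deb"], check=True)

    # Step 2: rebuild the same source revision yourself (or fetch hashes
    # from independent rebuilders) and compare byte-for-byte.
    if sha256("pkg.deb") != sha256("my-rebuild/pkg.deb"):
        print("signed binary does not match the source: key compromise or backdoor")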
Perhaps, but with deterministic builds, anyone can read the source code, recompile, verify the integrity of the signed code, and even sign it themselves as a confirmation of its integrity. The more (independent) signatures there are, the more you can trust the precompiled binary.
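A back-of-the-envelope sketch of that trust model in Python, where trust grows with the number of independent parties whose rebuilds agree. The rebuilder names and the threshold of three are invented for illustration.

    from collections import defaultdict

    # Map a binary's hash to the set of independent rebuilders attesting to it.
    attestations = defaultdict(set)

    def attest(binary_hash: str, rebuilder: str) -> None:
        attestations[binary_hash].add(rebuilder)

    def trusted(binary_hash: str, threshold: int = 3) -> bool:
        # Trust the precompiled binary once enough independent parties have
        # rebuilt the source and arrived at the identical bytes.
        return len(attestations[binary_hash]) >= threshold

    for who in ("debian-rebuilder", "university-lab", "hobbyist"):
        attest("ab12...ef", who)
    print(trusted("ab12...ef"))  # True once three independent rebuilds agree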
Code signatures have already existed for decades and do not require reproducible builds.