In general, yes. As a user, I take advantage of the fact that my computer can do many things at once. Programs should use those capabilities to their advantage, but by and large most programs do not need all of my computer's processing power, so I expect them to play well together. (Indeed, it takes effort to get the GPU of my computer to help out with anything.)
This is the job of your operating system's scheduler - to divide the limited resource of your computer's CPU time among different competing tasks.
In the modern era, a program cannot take more than its fair share of CPU time - otherwise, a runaway program could easily render your computer nearly unusable. (Linux, macOS and Windows all use preemptive multitasking.)
On *nix systems, the way to tell your operating system which processes you want prioritized is 'nice'.
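As a sketch of how that looks in practice (the PID below is a hypothetical example, not from the discussion above):

```shell
# Start a CPU-heavy task at the lowest scheduling priority so interactive
# programs stay responsive. 'nice -n 19' is the politest setting; negative
# niceness (higher priority) requires root.
nice -n 19 sh -c 'echo "low-priority work done"'

# Change the priority of an already-running process (PID 12345 is a
# hypothetical placeholder -- substitute your own compile's PID):
# renice -n 10 -p 12345
```

Note that niceness only influences how the scheduler divides contended CPU time; a nice'd process still gets the whole machine when nothing else wants it.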
I'm well aware of that. I also know that, in general, having to schedule things slows them down. If everything I'm running tries to schedule work across my entire machine, it gives my OS more work, which by necessity is harder to schedule and slows things down.
I'm not necessarily against all of this, but I'm also not eagerly embracing more crap to slow down my machine for no apparent reason.
I can guarantee you that I have jumped over to my browser when doing a compile that is definitely limited by my machine. I'm ok doing this knowing that I will only take up so much work on my machine. I do not go off and kick off another giant compile at the same time.
Now, if the browser becomes more and more consuming, hitting my browser could get closer and closer to kicking off another giant compile.
Keep in mind that by parallelizing work the browser can finish the work it's doing faster, which means your CPU can spend more time idle, which is better for power consumption.
Using the various CPU* options, you can turn on CPUAccounting, pin a given process and its children/threads to a range of CPUs, set CPUQuotas, and so forth. There's a lot of power and granularity there.
I know, anecdotally, that devops/sysadmin folk also use this to audit and test the energy consumption of processes over time. (Certain popular PID 1 programs have a run tool that lets you easily and dynamically change and audit process resource usage.)
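For concreteness, here is roughly what applying those CPU* options at launch time looks like with systemd's transient-unit tool; this is a sketch of one possible invocation (it assumes a systemd-based system with a user session, and the quota/CPU-range values are arbitrary examples), not a command from the thread:

```shell
# Spawn a compile in a transient scope with resource controls applied:
# cap it at two cores' worth of CPU time, pin it to CPUs 0-3, and enable
# accounting so its usage can be read back later.
systemd-run --user --scope \
    -p CPUAccounting=yes \
    -p CPUQuota=200% \
    -p AllowedCPUs=0-3 \
    make -j"$(nproc)"

# Afterwards, accumulated CPU time for the unit can be inspected with
# something like:
#   systemctl --user show <unit-name> -p CPUUsageNSec
```

Since this is a system-level configuration fragment, whether it runs depends entirely on the host's init system and session setup.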
My typical use case, for instance, is auditing and managing the lifetimes of various Emacs processes while running potentially racy elisp code.
I know the data can be gathered. One could go even more direct and measure the power usage of the computer before and after the upgrade.
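One way to take such a before/after measurement in software, assuming a Linux machine with an Intel CPU exposing the powercap/RAPL interface (the sysfs path below is the common default and may differ or be absent on other hardware):

```shell
# Sample the package energy counter (microjoules), run the workload,
# sample again, and report the difference. The counter can wrap, so
# treat very long runs with care.
before=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
sleep 10   # substitute your actual workload here
after=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
echo "Package energy used: $(( (after - before) / 1000000 )) J"
```

A wall-socket power meter sidesteps all of these hardware-support caveats at the cost of needing extra equipment.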
It would be nice if everyone pushing these changes would collect some data to back their claims, though. Especially if any of them have better setups (read: more than the single machine I have).