
In general, yes. That my computer can do many things is something I take advantage of as a user. Programs should use those capabilities to their advantage, but by and large, most programs do not need all of my computer's processing power, so I expect them to play well together. (Indeed, it takes effort to get my computer's GPU to help out with anything.)



This is the job of your operating system's scheduler - to divide the limited resource of your computer's CPU time among different competing tasks.

In the modern era, a program cannot take more than its fair share of CPU time - otherwise, a runaway program could easily render your computer nearly unusable. (Linux, macOS and Windows all use preemptive multitasking.)

On *nix systems, the way to tell your operating system which processes you want prioritized is 'nice'.
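A minimal sketch of the same mechanism from code (os.nice adjusts the calling process's niceness; the make invocation is just a hypothetical example of CPU-heavy work):

    import os
    import subprocess

    # Bump our own niceness by 10 (lower priority); os.nice returns
    # the new value. Unprivileged processes can only raise niceness,
    # not lower it.
    print("now at niceness", os.nice(10))

    # Children inherit the niceness, so this hypothetical build will
    # yield the CPU to interactive work under contention.
    subprocess.run(["make", "-j8"])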


I'm well aware of that. I also know that, in general, having to schedule things slows them down. If everything I'm running is trying to schedule work across my entire machine, that gives my OS more work, which will, by necessity, be harder to schedule and slow things down.

I'm not necessarily against all of this, but I'm also not eagerly embracing more crap to slow down my machine for no apparent reason.


You are assuming that your OS scheduler is nearly at capacity; it isn't. Moreover, your CPU will spend most of its life idle.

I could almost guarantee you this is not the bottleneck of any modern setup.


I can guarantee you that I have jumped over to my browser while doing a compile that was definitely limited by my machine. I'm OK doing this knowing that it will only add so much work to my machine. I do not go off and kick off another giant compile at the same time.

Now, if the browser becomes more and more consuming, hitting my browser could get closer and closer to kicking off another giant compile.


Keep in mind that by parallelizing work the browser can finish the work it's doing faster, which means your CPU can spend more time idle, which is better for power consumption.
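A rough, illustrative calculation (numbers made up, ignoring frequency scaling and parallelization overhead): a layout job that keeps one core busy for 4 s at 10 W above idle costs about 40 J; spread across four cores for 1 s, it costs the same 40 J of active work, but the whole package can drop into a deep sleep state 3 s sooner. That is the "race to idle" argument.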


That is a big "ostensibly" there. Hard data showing this would be somewhat nice.

And, like others, you are also assuming I was not pegging my machine with something by choice.


Metajack mentions power usage with parallelization briefly here[0], but doesn't provide the data.

[0] https://youtu.be/7q9vIMXSTzc?t=35m (2015)


To be clear, the reasoning behind this argument is sound. I'm skeptical because no one has ever delivered data for it, though. :(

I want it to be true. I expect that someone should be able to show this with data. I've never seen it done, though.


You can collect much of this data for yourself on a GNU/Linux system by using the cgroups feature of the Linux kernel, which is more powerful than nice: https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.t... .

Using the various CPU* options, you can turn on CPUAccounting, pin a given process and its children/threads to a range of CPUs, place CPUQuotas on it, and so forth. There's a lot of power and granularity there.
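For the accounting piece specifically, here is a minimal sketch under the v1 hierarchy the link above describes (assuming it is mounted at /sys/fs/cgroup and that you've already created a group, hypothetically named 'audit', and added your process's PID to its tasks file):

    import time

    # cpuacct.usage is the group's cumulative CPU time, in nanoseconds,
    # summed over every task in the (hypothetical) 'audit' group.
    USAGE = "/sys/fs/cgroup/cpuacct/audit/cpuacct.usage"

    def read_ns():
        with open(USAGE) as f:
            return int(f.read())

    before = read_ns()
    time.sleep(10)
    after = read_ns()
    print("group used %.2f CPU-seconds over 10s" % ((after - before) / 1e9))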

I know, anecdotally, that devops/sysadmin folk also use this to audit and test the energy consumption of processes over time. (Certain popular PID 1 programs have a run tool that lets you easily and dynamically change and audit a process's resource usage.)

My typical use case, for instance, is auditing and managing the lifetimes of various Emacs processes while running potentially racy elisp code.


I know the data can be gathered. One could go even more direct and measure the computer's power usage before and after the upgrade.
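A software-only approximation of that (assuming an Intel machine where the kernel's powercap/RAPL interface is present and readable; the path and the 10-second window are just illustrative):

    import time

    # RAPL exposes a cumulative package-energy counter in microjoules.
    # It wraps at max_energy_range_uj; wraparound is ignored here for brevity.
    ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_uj():
        with open(ENERGY) as f:
            return int(f.read())

    before = read_uj()
    time.sleep(10)
    after = read_uj()
    print("package averaged ~%.2f W over 10s" % ((after - before) / 1e6 / 10))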

It would be nice if everyone pushing some of these would collect data for their claims, though, especially those with better setups (read: more than the single machine I have).



