
Roughly speaking, an N-level digital logic system requires O(N) transistors to buffer/force a signal into one of N states, but each N-level symbol only carries log2(N) bits, so you only get O(log N) more work out of those transistors relative to binary.
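To put toy numbers on that scaling (this just restates the O(N)-cost / log2(N)-payoff claim above in Python; real binary does even better than this model suggests, since ordinary gates restore a two-level signal as a side effect):

    import math

    # Toy model: buffering one N-level signal costs ~N transistors,
    # but the signal only carries log2(N) bits of information.
    for n in (2, 4, 8, 16):
        cost = n / 2          # buffer cost relative to binary
        info = math.log2(n)   # information relative to binary (1 bit)
        print(f"N={n:2d}: ~{cost:.0f}x the buffer cost for {info:.0f}x the information")

The cost grows linearly while the payoff only grows logarithmically, so past a few levels the trade gets steadily worse.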

Without the buffering step, you'll eventually get the middle logic levels drifting (e.g. your "1"s become "0"s or "2"s). Binary gets this for "free" because there are no middle states. And this isn't just about a standalone buffer; the same considerations apply to the implementation of every other gate (many of which are rather awkward to build in multi-level logic).
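As a toy illustration of that drift (a made-up model, not a real circuit: a nominal ternary "1" out of levels 0/1/2 passed through a chain of non-restoring stages, each with a small gain error and a bit of noise):

    import random

    level = 1.0
    for _ in range(20):
        # each stage multiplies by a slightly-wrong gain and adds noise,
        # and nothing snaps the signal back to the nearest nominal level
        level = level * 1.05 + random.gauss(0, 0.02)
    print(f"after 20 stages: {level:.2f}")  # reads back as a "2", not a "1"

With a gain error in the other direction it drifts toward 0 instead; either way, the middle level is the one that gets corrupted first.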

Analog works out for rough calculations because you can skip the buffering process, at the expense of having your calculation's precision limited by the linearity of your circuit.

SSDs are more of a special case, because to my knowledge they're not really doing any multi-level logic outside of the storage cells. They pump current in on one axis of a matrix, read it out on the other, and then ADC it back to binary as fast as possible before doing any other logic.

Random sidebar: I don't see any constraint like this for mechanical computers, so a base-10 mechanical computer doesn't strike me as any more unreasonable than a base-2 mechanical computer (i.e. slop and tolerance are independent of gear size). In fact, it might be reasonable to say you should use the largest gears that the technology of your time can support (sorry Babbage).


6 million gallons/yr / 40 inches/yr (average annual rainfall in NY) = 5.5 acres
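In case anyone wants to check the unit gymnastics (1 US gallon ≈ 0.1337 ft³, 1 acre = 43,560 ft²):

    GALLON_FT3 = 0.133681          # cubic feet per US gallon
    ACRE_FT2 = 43_560              # square feet per acre

    volume_ft3 = 6_000_000 * GALLON_FT3   # 6M gallons/yr as a volume
    depth_ft = 40 / 12                    # 40 inches/yr of rain
    print(f"{volume_ft3 / depth_ft / ACRE_FT2:.1f} acres")   # -> 5.5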


You have to admit that there is some gap between "I want the latest re-rounded corners on app icons" (which fits into that category neatly) and "I don't want 4 years of unpatched 0-days still open on my phone" (which I'd argue does not).


I don’t know why I have to admit this, but sure I’ll grant it. I don’t think the former is a reasonable characterization of “married to the software”, I think it’s a caricature.


I think the answer amounts to "anyone who has at least a rack full of Dell/HPE hardware".

Dell manages an annual revenue of greater than $20B, so on-prem HW is clearly going strong, regardless of whether or not you think it should be.


Looking at the hardware, I'd wag somewhere in the $2-3M range. Depends on how big of a profit margin they want to make or how scrappy they feel like being (could probably go as low as ~$1M if they can make it up in volume).

If they're following the typical 1/5x pricing model for support, that'd be roughly $500k/yr/rack. But it's also hard to do that while simultaneously describing Dell as "rapacious".


I suspect they are not going to be cheap at all lol. Aside from being a very nice hardware package, they also have an entire custom hypervisor platform, which is intended to be one of the platform's big plusses.


Thanks!


Roughly, the power of a digital system is the sum of the static power and the dynamic power (P_dyn = 1/2 * C * V^2 * f).

1. Discrete logic chips tend to be built in substantially larger process nodes (microns vs nanometers) that are less efficient. This means higher leakage current and more static power.

2. Discrete logic has to drive traces on a PCB, which have substantially higher capacitance (C) and therefore use more power getting across a board.

3. Discrete logic operates at higher voltages. Contrast 5V TTL vs. 1V core voltage inside a processor. Power is proportional to the voltage squared.

4. A microprocessor running even at low speed can replace a massive number of discrete logic chips, so for simple applications the clock frequency f can be kept low. If you're doing something very simple and interrupt-driven, f can be in the tens to hundreds of kHz.

Consequently, there's a whole lot more of both static and dynamic power with discrete logic than with a uC.
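To put illustrative numbers on points 2 and 3 (the capacitance, voltage, and frequency here are guesses for the sake of comparison, not measurements):

    # dynamic power per toggling node: P ~ 1/2 * C * V^2 * f
    def p_dyn(c_farads, v_volts, f_hz):
        return 0.5 * c_farads * v_volts**2 * f_hz

    # a 5V discrete-logic output driving ~20pF of PCB trace at 1MHz,
    # vs. an on-chip 1V node driving ~10fF at the same rate
    pcb_node = p_dyn(20e-12, 5.0, 1e6)    # 2.5e-4 W  (~250 uW)
    chip_node = p_dyn(10e-15, 1.0, 1e6)   # 5e-9 W    (~5 nW)
    print(f"{pcb_node / chip_node:,.0f}x per switching node")  # ~50,000x

That ratio only accounts for C and V; the process-node and frequency effects in points 1 and 4 shift things further, but it gives a sense of why driving PCB traces at 5V is so expensive.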


False. CMOS gates (4000-series) consume essentially no power except when they switch. And they work at low voltage.

There are ways to make a microcontroller use less power, by making it hard-sleep when nothing is happening, but you don't get that without extra work.


Appreciate that description. It helps crystallize exactly how revolutionary microprocessors were compared to other contemporary approaches.


You are now debugging a distributed system.


Oh, good point; I was thinking at the component level.


fun fact: you already are in linux. being a monolith doesn't change the nature of the problem.


Are you? I'm pretty sure I can run a single Linux and point a single gdb at it[0] and debug it in a single memory space; I don't think you can do that with a microkernel.

[0] possibly resorting to UML, but still

