You can make any sound with 1 bit if you can switch it fast enough. In fact, that's exactly what a Class D amplifier does: https://en.wikipedia.org/wiki/Class-D_amplifier. They are often very high quality, and very efficient!
This is also how high quality electronic motor drives and servos generally work.
Sony used to hype its 1-bit CD players. They did invest in sending a bitstream to the amplifier at a high enough bit rate to keep the resulting "quantization" noise from being audible in the first place.
Switched power supplies have always seemed to be, to me, kin to class D amplifiers, though the block diagrams don't highlight the similarities.
They're basically identical circuits, except that in a power supply the input is DC (or, for an inverter, a sine wave), while in a Class D amp the input is an arbitrary analog signal.
> They did invest in sending a bitstream to the amplifier at a high enough bit rate to keep the resulting "quantization" noise from being audible in the first place.
I should have added: "But Sigma-Delta is just as easy (in fact, probably easier) to implement, so I really don't understand why people ever use PWM anywhere. It's so inferior IMO."
Well, PWM is probably easier to analyze mathematically. The harmonics produced are more predictable, and knowing the minimum off-time and on-time so easily is helpful when designing circuits. It's also super easy to implement PWM in digital logic or software; it basically needs only a timer and a comparator.
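To illustrate that last point, a software PWM really is just a free-running counter plus a comparator. A minimal Python sketch (my own illustration, with made-up parameter names, not from the thread):

```python
def pwm(duty, period, n):
    """Yield n 1-bit samples whose average is duty/period."""
    out = []
    for i in range(n):
        counter = i % period                     # free-running timer
        out.append(1 if counter < duty else 0)   # comparator against the duty threshold
    return out

samples = pwm(duty=3, period=8, n=8000)
print(sum(samples) / len(samples))  # → 0.375, i.e. exactly 3/8
```

The min on-time and off-time are immediately readable from duty and period, which is the design convenience mentioned above.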
Sigma-Delta is very easy to implement as well; it just requires a different way of thinking. In S-D you always try to preserve and compensate for the errors you make in quantization, while in PWM you throw the errors away. In physics you'd label S-D an error-conserving system vs PWM an error-dissipative system.
Well put. Another example is Floyd–Steinberg dithering, which is also error-conserving.
My (effectively one-line) implementation above was derived from first principles:
Given a desired target level 0 <= T <= 1, control the output O in {0, 1} such that O on average equals T. Do this by integrating the error T - O over time and switching O so that the running sum of (T - O) stays bounded.
S = O = 0
loop:
    S = S + (T - O)
    O = (S >= 0)
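In runnable Python (my transcription of the pseudocode above), you can check that the 1-bit output really does average to T:

```python
def sigma_delta(T, n):
    """First-order sigma-delta: n 1-bit samples whose mean tracks T in [0, 1]."""
    S, O = 0.0, 0
    out = []
    for _ in range(n):
        S += T - O              # integrate the error T - O
        O = 1 if S >= 0 else 0  # switch so the running sum stays bounded
        out.append(O)
    return out

bits = sigma_delta(0.3, 10000)
print(sum(bits) / len(bits))  # close to 0.3
```

Since S is the running sum of (T - O) and stays in (-1, 1), the mean of the output can differ from T by at most 1/n.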
In fixed-point arithmetic this becomes even simpler. Assume N fractional bits and let S = Sf * 2^N = Sf << N; since |Sf| <= 1, N+2 bits are sufficient:
S = O = 0
loop:
    D = T + ((~O + 1) << N) === T + (O << N) + (O << (N+1))
    S = S + D
    O = 1 & ~(S >> (N+1))
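A runnable Python check of the fixed-point version (my transcription; the N+2-bit mask follows the bound above, and the two shifted terms are -(O << N) in N+2-bit two's complement):

```python
N = 8
MASK = (1 << (N + 2)) - 1  # keep all arithmetic to N+2 bits

def sigma_delta_fixed(T, n):
    """T is the target in units of 1/2^N, with 0 <= T <= 2^N."""
    S, O = 0, 0
    out = []
    for _ in range(n):
        # D = T - O in two's complement: -(O << N) mod 2^(N+2)
        D = (T + (O << N) + (O << (N + 1))) & MASK
        S = (S + D) & MASK
        O = 1 & ~(S >> (N + 1))  # O = 1 when the sign bit (bit N+1) is clear
        out.append(O)
    return out

bits = sigma_delta_fixed(T=77, n=1 << 16)  # 77/256 ≈ 0.3008
print(sum(bits) / len(bits))  # close to 77/256
```

This is the same loop as the floating-point version, just with the subtraction folded into modular adds and shifts, which is why it maps so directly onto digital logic.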
Completely agree. I've always considered Floyd-Steinberg dithering to be a 2D version of Sigma-Delta. It's all about diffusing and minimizing error quantities, to trade higher sampling rate for lower sample resolution while conserving total information content.
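As a sketch of that idea (mine, not from the thread), here is textbook Floyd-Steinberg on a tiny grayscale image: each pixel's quantization error is diffused to its unvisited neighbors with the standard 7/16, 3/16, 5/16, 1/16 weights, so the mean brightness is approximately conserved.

```python
def floyd_steinberg(img):
    """Dither a 2-D list of grayscale values in [0, 1] to {0, 1}, in place."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            img[y][x] = new
            err = old - new  # diffuse this error forward instead of dropping it
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16  # down-left
                img[y + 1][x] += err * 5 / 16          # down
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16  # down-right
    return img

gray = [[0.5] * 4 for _ in range(4)]
out = floyd_steinberg(gray)
print(sum(map(sum, out)) / 16)  # mean of the dithered image stays near 0.5
```

A flat 50% gray comes out as an alternating on/off pattern, which is exactly the 2-D analogue of a sigma-delta bitstream at T = 0.5.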
Another example is Bresenham's algorithm for drawing straight lines on raster displays. The quantity being approximated there is the slope of the line, which is approximated with minimal diffused error as the line is being drawn with only integer adds and subtracts. No divisions and no floating point needed.
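A minimal first-octant Bresenham in Python (my sketch; it assumes 0 <= dy <= dx) shows the same accumulate-and-correct error term, using only integer adds and subtracts:

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize a line with 0 <= dy <= dx using only integer arithmetic."""
    dx, dy = x1 - x0, y1 - y0
    err = 2 * dy - dx   # error term scaled by 2*dx to stay integral
    y = y0
    points = []
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if err > 0:     # accumulated error says we've drifted below the line
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return points

print(bresenham(0, 0, 6, 3))
# → [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3)]
```

The scaled error plays the same role as S in the sigma-delta loop: it carries the fractional part of the slope forward so it is never thrown away.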
These are some of the most subtly beautiful algorithms in computing.
Yeah, I think the author needs to take an introductory Signals & Systems class. They could learn not only about delta-sigma DACs but also the relationship between Fourier decomposition and LTI systems, plus the basics of psychoacoustics. They say, "Appreciation of how the 1-bit waveform acts conceptually (and psychoacoustically) is necessary for understanding and implementing timbral and sonic interest in 1-bit music," but they themselves lack anything more than the rudiments of that appreciation.
There are some fun techniques in the article, though! I was surprised to see my friend Norm Hardy mentioned — I had never realized he was a pioneer of computer music :)
They are very good indeed. But of course, a true audiophile wouldn't use them ;)
By the way, while the principle behind class D amplifiers was already long known, it's the gallium nitride MOSFET technology used for switching that really brings them up to sufficient quality.