Hacker News
Grayscale on 1-bit LCDs (2022) (zephray.me)
670 points by _Microft on Jan 12, 2023 | 92 comments



My father Bryce Bayer studied this question fifty years ago at Eastman Kodak; his approach is known as ordered dithering:

https://en.wikipedia.org/wiki/Ordered_dithering

One is effectively posterizing a grayscale image, and his primary goal was to reduce artifacts drawing unwanted attention to the borders between poster levels.
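
For reference, a 4x4 ordered dither is only a few lines of C; a minimal sketch (the threshold matrix is the standard Bayer one, the function name is just illustrative):

  #include <stdint.h>

  /* Standard 4x4 Bayer threshold matrix, values 0..15. */
  static const uint8_t bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
  };

  /* Returns 1 (white) or 0 (black) for an 8-bit gray pixel at (x, y). */
  static int ordered_dither(uint8_t gray, int x, int y) {
    uint8_t threshold = bayer4[y & 3][x & 3] * 16 + 8;  /* scale 0..15 up to ~0..255 */
    return gray > threshold;
  }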

With improvements in hardware other approaches to dithering took over. The last time I saw my father's pattern was on a DEC box logo. He moved to the color group at Kodak, and designed the "Bayer filter" used in digital cameras.


I think this is one of the better "well, my dad can beat up your dad" type of stories. I know you absolutely didn't mean it that way, but there's a part of me that read it that way on the second reading.

Oh yeah, well my dad...! kind of thing =)


My grandad maintained that he had sorted out some issue with the soon-to-be Rover V8 when it was bought from GM.

My dad manufactured some nozzles for making Bonio biscuits. That's not quite as impressive though... unless you have a dog.


Your father invented the Bayer filter. HN never ceases to amaze me :)


If you think that’s good, also check out further down in this thread. Gotta love HN for finding people that wrote old libraries. In the below case, it’s a library for using grayscale on a TI83.

https://news.ycombinator.com/item?id=34356244


Kudos to your dad! Every time I explain how a digital camera works to people, invariably it's the concept and application of the Bayer filter that causes their jaw to drop.

Most people think 50 megapixels = 50 red, 50 blue, and 50 green megapixels, so it's quite eye-opening. That our eyes work similarly with cones tuned to specific frequencies is just icing on the cake after that.


Yeah, realizing the actual image is only about 1/4 of the resolution of the sensor is something not everyone grasps. Marketing of course doesn't try to explain it either. I tend to take it too far by then explaining the benefits of using 3-chip monochrome sensors (or, if not real-time, a single sensor triple-flashed) to get the full resolution for each color channel. These are usually CCDs instead of CMOS, but the theory is the same. This is how most astronomy instruments work, like the Hubble, but instead of RGB they use other filters that get "colorized" into RGB-ish.


> Marketing of course doesn't try to explain it either.

I remember early CCD digital camera brochures stating "actual resolution", and "enhanced resolution" (or something similar), where the latter is ~3x the former number. I wonder if this was referring to the demosaicing procedure.


Nope, since the early sensors were very low resolution, many cameras included interpolation engines inside, and some of them were pretty good at what they did.

They used blurry language to thinly veil the fact that the 1024x768 image you got indeed started its life as a 640x480 one out of the sensor.


It's really not 1/4 of the resolution. That's the beauty of the demosaicing process.


sure, and i knew the pedantics would beat me up over it (maybe rightly so, but i tried to use a number hyperbolically to make it obvious). however, it's not the full resolution of the chip. hence the use of monochrome chips for "full" resolution.


Luma is at full resolution, chroma is "upscaled", but we're less sensitive to color than to light so it's almost as good.

Most JPEG uses subsampling on chroma channels anyway.


Pedants, actually. :)

I couldn't help pointing out that the people being pedantic are pedants.

Thanks for the example though, as it was a simple way of mostly expressing a complicated concept.


i'm going to go with autotypo. yeah, that's it.


It's not about being pedantic, but that's literally the whole point of the Bayer process (which is what was being discussed).

If we were fine with getting 1/4 resolution output, we'd just put one color filter on each sensor pixel, not Bayer-demosaic anything, and call it a day.


For a Foveon, the way you first think it works is the way it works :).

https://en.wikipedia.org/wiki/Foveon_X3_sensor


To add to the above, an alternative dithering approach is error diffusion, e.g.:

https://en.wikipedia.org/wiki/Floyd–Steinberg_dithering

The article mentions also doing this in the temporal domain.
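
The core of Floyd-Steinberg is a single left-to-right, top-to-bottom pass that pushes each pixel's quantization error onto its not-yet-visited neighbours. A rough in-place C sketch (illustrative only, 1-bit output):

  #include <stdint.h>

  static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

  /* In-place Floyd-Steinberg dither of an 8-bit grayscale buffer down to 0/255. */
  void fs_dither(uint8_t *img, int w, int h) {
    for (int y = 0; y < h; y++)
      for (int x = 0; x < w; x++) {
        int old = img[y * w + x];
        int q   = old < 128 ? 0 : 255;   /* 1-bit quantization */
        int err = old - q;
        img[y * w + x] = (uint8_t)q;
        /* Push the error onto not-yet-visited pixels: 7/16, 3/16, 5/16, 1/16. */
        if (x + 1 < w) img[y * w + x + 1] = clamp8(img[y * w + x + 1] + err * 7 / 16);
        if (y + 1 < h) {
          if (x > 0)     img[(y + 1) * w + x - 1] = clamp8(img[(y + 1) * w + x - 1] + err * 3 / 16);
          img[(y + 1) * w + x] = clamp8(img[(y + 1) * w + x] + err * 5 / 16);
          if (x + 1 < w) img[(y + 1) * w + x + 1] = clamp8(img[(y + 1) * w + x + 1] + err * 1 / 16);
        }
      }
  }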


I find this to be the most elegant way to handle dithering. Been working with this on a side project.

I've been looking to build an engine/game that approximates this art style: https://obradinn.com


> I find this to be the most elegant way to handle dithering

Yes, it's so simple, it can be applied in a single pass on a single pixel buffer. Because in convolution kernel terms - it's only sampling from half of a moor neighbourhood, and that half can be from pixels not yet processed in the same buffer when moving through them in order.

It's so simple it fits in a dweet ;P

https://www.dwitter.net/d/24142

> I've been looking to build an engine/game that approximates this art style: https://obradinn.com

Killedbyapixel took inspiration from the above dweet for the style of some of his proc-gen art, although I haven't dug into the how yet. I suppose deeper game/object-awareness integration could produce better results than merely piping the output of the renderer into the dither algorithm; perhaps even the rendering could be optimized by targeting dithering specifically.

https://www.fxhash.xyz/generative/4686


> perhaps even the rendering could be optimized by targeting dithering specifically.

I was wondering about this possibility as well. I'm already up to my elbows in a hand-rolled software rasterizer, which is already constrained to using only a luminance channel (8 bits).

I suppose I could still accumulate dither error and do a one-pass, 1-bit raster by combining the world-space 256-level luminance information with whatever error information is in the accumulator.
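
As a sketch of how those two could combine (everything here is hypothetical, just illustrating the idea): a scanline span fill that writes straight into a 1-bit row while carrying quantization error along the row.

  #include <stdint.h>

  /* Hypothetical: fill one horizontal span of constant 8-bit luminance directly into
     a packed 1-bit scanline, diffusing the quantization error along the row. */
  void span_fill_1bit(uint8_t *row_bits, int x0, int x1, uint8_t luma, int *err) {
    for (int x = x0; x < x1; x++) {
      int want = luma + *err;            /* desired brightness plus carried error */
      int on   = want >= 128;            /* 1-bit decision */
      *err     = want - (on ? 255 : 0);  /* remember what we still owe */
      if (on) row_bits[x >> 3] |=  (uint8_t)(1 << (x & 7));
      else    row_bits[x >> 3] &= (uint8_t)~(1 << (x & 7));
    }
  }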


> I suppose I could still accumulate dither error and do a one-pass, 1-bit raster by combining the world-space 256-level luminance information with whatever error information is in the accumulator.

That would be interesting to see. I like the idea of renderers that give up things, e.g. resolution, in exchange for something else. Pixel depth is just another. It would be interesting to see what gains in other areas might be possible if the rasterisation stage turns into 1-bit. Then again, the cost of actually operating on single-bit stored values might outweigh any gains... unless this is done in hardware.


Think about it like this... In a 1-bit dithered scheme, 1 machine word represents 64 entire pixels' worth of values. You can access 64 pixels at a time and use bit-wise operations to manipulate the individual values.

Compared to a typical 24-bit RGB array, you've just reduced your full-scan array access requirements from 3 byte accesses per pixel to 1/64 of a word access per pixel.

For 720p (1280x720), you'd only be talking about 14,400 loop iterations to walk the entire frame buffer.

You could almost get away without even compressing the frames in some applications. At 30fps, totally uncompressed, 720p 1-bit dither comes in just under 30 Mbps (1280 x 720 x 30 ≈ 27.6 Mbit/s).
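
A sketch of what that packing looks like, assuming 64-bit words (illustrative only):

  #include <stdint.h>

  #define FB_W 1280
  #define FB_H 720
  #define WORDS_PER_FRAME (FB_W * FB_H / 64)   /* 14,400 64-bit words for 720p */

  static uint64_t framebuf[WORDS_PER_FRAME];

  /* Set or clear a single pixel in the packed 1-bit framebuffer. */
  static inline void put_pixel(int x, int y, int on) {
    int bit = y * FB_W + x;
    uint64_t mask = 1ULL << (bit & 63);
    if (on) framebuf[bit >> 6] |=  mask;
    else    framebuf[bit >> 6] &= ~mask;
  }

  /* Touch the whole frame 64 pixels per iteration, e.g. invert it. */
  static void invert_frame(void) {
    for (int i = 0; i < WORDS_PER_FRAME; i++)
      framebuf[i] = ~framebuf[i];
  }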


Surely you intended a Moore neighborhood and not moor? No relation.


yes


And for 1-bit, Atkinson dithering can produce good effect. Floyd-Steinberg and Atkinson are the only two I'd usually consider for 1-bit dithering, with a preference for Atkinson for most images.

Ordered dithering, even at 8-bit, is…not my cup of tea.
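
For reference, Atkinson diffuses only 6/8 of the error, 1/8 each to six nearby pixels, which is what gives it its punchier highlights and shadows. A rough sketch (not anyone's canonical code):

  #include <stdint.h>

  /* Atkinson diffusion: 1/8 of the error to each of six neighbours, 2/8 dropped. */
  static const int atk[6][2] = { {1,0}, {2,0}, {-1,1}, {0,1}, {1,1}, {0,2} };

  void atkinson_dither(uint8_t *img, int w, int h) {
    for (int y = 0; y < h; y++)
      for (int x = 0; x < w; x++) {
        int old = img[y * w + x];
        int q   = old < 128 ? 0 : 255;
        int err = (old - q) / 8;          /* only 6/8 of the error propagates */
        img[y * w + x] = (uint8_t)q;
        for (int i = 0; i < 6; i++) {
          int nx = x + atk[i][0], ny = y + atk[i][1];
          if (nx < 0 || nx >= w || ny >= h) continue;
          int v = img[ny * w + nx] + err;
          img[ny * w + nx] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
      }
  }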


For static images, yes. For animation, not so much, as a single pixel changing can have huge implications on other parts of the image due to error diffusion.


One more relevant use case: I've found Floyd-Steinberg dithering to be somewhat useful in compressing scanned JPEG paper notes at 600dpi (~1.5-2.5MB per page) to 1-bit PNG (100-300KB).

At full scale, there is no loss in readability and in fact I think I prefer the aesthetic quality of the 1-bit PNG. However, at <50% zoom, the dithered version was way less readable than a 2-bit PNG of slightly higher file size, so I ended up not compressing to 1-bit.

Edit: I was wrong, my (ex-) image viewer was at fault at 50% zoom. Viewing in other apps, the dithered version is visually no different from the 2-bit version. I bet the difference is even less with a HiDPI screen.


This sounds really interesting. Can you share some screenshots for us to see?


Wow! I worked on digital camera chips and had coworkers writing demosaic algorithms. We wouldn't have even had those jobs if it wasn't for your father.


Must be such a good feeling to have this legacy. Your father did great!

This reminds me of this great article about creating a modern ordered dithering algorithm, and its effect on animations: https://bisqwit.iki.fi/story/howto/dither/jy/


I really hope your childhood home had a stained glass window with a Bayer pattern!


now that is something that could be interesting. maybe change from squares to diamonds for effect, but i like the suggestion! a stained Bayer glass window makes it sound like it should be found in medieval cathedrals


Wow, I deeply remember seeing these patterns in my relatively early computing days. I think it primarily was in Windows 3.x, back when graphics modes with 16 colors were still normal.

It definitely was not Floyd-Steinberg or anything else using error diffusion (which however I also remember seeing in those days, from some Windows applications' splash dialog for example), because there were these very characteristic stable patterns.

I spent countless hours staring at the dithered blue gradient of InstallShield, for example.


I recall graphics programming books in the mid/late 90's about dithering techniques.

There's also some fun related work on ASCII art rendering.

This sort of thing is/will still be useful for low-cost displays. I'm sure we'll have Wifi-enabled e-ink displays on cereal boxes soon.


Seeing that brings back a lot of nostalgia for the old PC Paintbrush that was once ubiquitous on DOS machines. The images it turned out were really great for the time, and I sometimes miss the aesthetic of those dithering patterns.


Ordered dithering is great. I use it in my video player, it looks good and is a fast alternative to error diffusion which requires more hardware resources.


Immediately thought of the assembly-level hacking people did ~20 years ago on TI's graphing calculators to get reliable grayscale, for instance this on the TI 83 series: https://www.ticalc.org/archives/news/articles/7/77/77966.htm...


Haha, thanks for dredging this up, I made that :) It seems quite related to the OP article. It's a library for making grayscale games on the TI-83 graphing calculator, which has a monochrome display. The main challenge was optimizing the interrupt routine, the Z80 assembly code that performed the flickering/dithering that achieves the grayscale effect, to fit within the tiny number of clock cycles available on the 6 MHz Zilog Z80 processor. Even after optimization, it took up ~50%-75% of all available cycles. Some people managed to make some pretty fun grayscale games with it (e.g. https://www.ticalc.org/archives/files/fileinfo/331/33153.htm...). This was obviously in the pre-smartphone era, so the TI-83 was quite popular for playing games in class, and hand-written assembly code was the only way to make it fast.


I knew immediately that the link was to Desolate before opening it :) It blew me away the first time I played it, as I was used to my shitty turn-based TI-BASIC games.


Holy crap! This takes me back. I used this (or something based on it) on my TI84 SE+ to play around with grayscale over a decade ago. It's the first thing that came to mind when I saw this article. I never got into ASM on the TI calcs but I wrote a TON of TI-Basic. I spent a ton of time on those forums and posted a number of apps/games I wrote (though I can't find them now).


Cool to see that one of the authors of the most cited paper in machine learning also got their start hacking on TI calculators. What a great learning environment that was!


Not that no one had thought of it before, but I believe I may hold the first claim to a grayscale TI calculator demo. I released a four-frame, 4-bit grayscale animation of Beavis & Butt-Head headbanging for the newly minted ZShell on the TI-85 in 1992. I posted it to one of the listservs or usenet groups, but I have never been able to find a copy in anyone's archives. I'd love to see it again. It was not a fancy program. The frames were stored as 4 frames x 4 bits x 64x128 pixels = 16KB, so it consumed like 2/3 of the calculator's memory. Fun times.

If anyone is a usenet archive ninja, my program was called 'BBANIM' and there was a TI-BASIC POC and a ZShell asm version released.

I recall that the first game using PoV grayscale was "Plain Jump" (sic) shortly afterwards which apparently continues to be a popular project to clone.


Here is a video by the author with explanations of the technique and, later on, video playback on the display.

Demo of "Sintel" playing at ~20min into this video:

https://www.youtube.com/watch?v=n7uxEaGB9t0


Nice final result, but I would have skipped the PWM detour. PWM sucks and has known bad aliasing issues. Sigma-Delta (AKA "1-bit DAC") is error diffusing, costs the same to implement, and should always be preferred IMO.

A lot of "tricks" from the past were accommodating slow processing power, but if you are driving this from, say, an FPGA, there's no reason not to just do the best thing.

One thing I didn't see mentioned: the reason this all works is that the liquid crystals don't switch very fast, so effectively they provide a low-pass filter that turns the binary drive into a gray-scale output. What I'm curious about is whether this is a linear process. Based on the PWM result it looks pretty linear.
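
For the curious, a first-order sigma-delta driving a single pixel temporally is about this much logic per pixel (a hedged sketch, not the article's implementation):

  #include <stdint.h>

  /* One first-order sigma-delta per pixel: each frame, emit a 1 or a 0 so that the
     running average of the output tracks the 8-bit target level. */
  typedef struct { int16_t acc; } sd_state;

  static inline int sd_step(sd_state *s, uint8_t target) {
    s->acc += target;          /* accumulate the desired level          */
    if (s->acc >= 255) {       /* enough "brightness debt" accumulated? */
      s->acc -= 255;
      return 1;                /* drive the pixel on this frame         */
    }
    return 0;                  /* drive the pixel off this frame        */
  }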


FWIW, as someone who doesn’t really know anything about LCDs but has messed around with circuits and LEDs, the digression into PWM bridged the gap nicely for me at least. It probably wasn’t necessary from a technical point of view but it helped tell the story.


PWM still has its place - it's not uncommon for high-end sigma deltas to have a multi-bit rather than one-bit output, which feeds into a PWM final output stage. It helps with numerical stability. I've used a similar technique to improve the audio output in FPGA projects where a simple 1st order SD feeds a 5-bit PWM. On the boards in question the output drivers seem to be somewhat asymmetric, making an SD quite noisy at higher clock frequencies. With the PWM final output stage the same effect manifests as a DC offset instead, since (when not saturated) the PWM has a fixed number of rising and falling edges in a given time period.
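
A rough sketch of that hybrid under my own assumptions (a 1st-order sigma-delta quantizing a 16-bit sample to 33 coarse levels, then a 32-step PWM counter on the output pin); the real designs referred to above may differ:

  #include <stdint.h>

  typedef struct { int32_t acc; uint8_t level; uint8_t phase; } sd_pwm;

  /* Call once per output clock; returns the pin state (0 or 1). */
  static inline int sd_pwm_step(sd_pwm *m, uint16_t sample) {
    if (m->phase == 0) {                      /* start of a new PWM period         */
      m->acc  += sample;                      /* input plus the carried-over error */
      m->level = (uint8_t)(m->acc >> 11);     /* coarse quantization to 0..32      */
      m->acc  -= (int32_t)m->level << 11;     /* keep the residual as error        */
    }
    int out = m->phase < m->level;            /* one rising and one falling edge per period */
    m->phase = (m->phase + 1) & 31;
    return out;
  }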


It's worth noting this person (Wenting Zhang aka zephray) is the same one that's also worked on the eink drivers for https://www.modos.tech/ - https://twitter.com/zephray_wenting/status/15346412997054996...

See also https://news.ycombinator.com/item?id=31674373


He's also the guy who pushed an eink display to 5bpp: https://news.ycombinator.com/item?id=16140284


> He's also the guy who pushed an eink display to 5bpp

didn't the original sony prs ereaders have 32 grays as well?


OP comments on 'error diffusion dithering in 3d' in a few places...

If OP did that, but measuring and taking into account the slow rate at which these pixels change shade, then I think he could get far better results for watching a movie on the LCD, because it should get rid of much of the blurring for fast motion.

If the error signal is allowed to propagate both back and forward in time (which would require preprocessing), then you could probably reduce the blurring even further - effectively a pixel changing from one shade of grey to another can start the transition ahead of time.


This reminded me of the fantastic dev blog of Lucas Pope, the creator of games like Papers Please and The Return of the Obra Dinn. He has an entry on dithering his 3D game and how he managed to "stick" the dithering to the 3D objects.

https://forums.tigsource.com/index.php?topic=40832.msg136374...


PWM greyscale is commonly used in other display technologies like LED where controlling current is not feasible, as well as in Texas Instruments' DMD, which is a micromirror array where each mirror can only be on or off.


> LED where controlling current is not feasible

A voltage-controlled current source is a pretty basic OpAmp circuit actually. There are also some tricks to turn common voltage regulators into current controls. If your "information" is already encoded as current, you can somewhat current-mirror with a very basic BJT circuit (it's not the best circuit, but if you don't care about accuracy it's fine).

I'd say that current control is _more expensive_ than PWM though. PWM is just software, or a 555 timer if you're oldschool; both are cheaper than OpAmps / voltage regulators.

Or maybe you mean, "not feasible for the costs people expect", which is probably true. PWM is just so much cheaper in practice.


It's "feasible" to use dynamic current sources per pixel, and many LED pixel drivers do offer this for dot correction / colour balance / global brightness, but PWM is almost always used for pixel values; it's much easier to achieve the necessary resolution staying in the digital domain, and there's not really any downside. The other big issue with current control is that the simple ways to do dynamic current are linear, so effectively use constant power regardless of pixel state, and also burn a lot of silicon area and might start creating thermal issues in the driver. At high power levels, current regulated switch mode DC-DC drivers start to make sense, but doing that per pixel is definitely not feasible.


I really meant for large 4K (4096x2160) arrays of LED pixels like in this [1].

[1]: https://displaysolutions.samsung.com/led-signage/onyx


When I needed a beeper with a time delay for a fridge door, I looked at retail prices for a dual-555 IC (a 556) and for a dual-opamp IC (ended up going with a 6002) and was surprised to find the opamp was something like three times cheaper, even taking into account the hefty capacitor for the time delay. (Active buzzers wouldn’t run off a 3V cell so I did need a dual IC for the delay and the square-wave generator.) Is this just a retail-specific distortion?


All of the components you mentioned would be sub-cent in large volumes and surface mount parts.

I'd imagine the retail price is a combination of 'what the customer is willing to pay', and 'random'.


The last time I coded something along these lines was way back in the day for pulsing the Amiga's power/disk drive LEDs in time to music and sound effects, instead of their usual on/off/half brightness states. Not quite so useful, but I remember having fun coding it :D I was surprised at just how effective it was, without any noticeable flicker.


Hopefully not the power LED, as the audio filter is switched with the same signal.

Some early games made that mistake. It sounds horrible in anything newer than the first A1000s.


Interacting with hardware is on another level. I remember taking some introductory hardware-level classes where we began to simulate our circuits, just combinations of gates and what-not, and then you'd hit a weird signal pattern....

"Ohh, that's just a gate-glitch, we'll discuss that in later courses!"

The way that physics interacts with hardware is incredible, and the dance that we all do back and forth between hardware and software really strikes me as magical. I love it.


Wow, this is really impressive. I would have stopped at various types of dithering, but these techniques are cool and the results look like they really work!

I'm not a hardware person, but now I sort of want to try to simulate this myself to see what it looks like.


Even modern color LCDs on laptops and stuff implement many of these tricks.


Since we have the whole video in advance (we can look forward in time) and we can also measure the response function of the pixels, it would be possible to preprocess the whole video to make it look even better (less ghosting).


Man it's been so long since I've seen an 88x31 on a website. I miss those.


Why are people doing this 30+ years too late? Imagine having this as kids.


Yep! I remember doing this on the HP 48G's in the mid-90's: https://www.hpcalc.org/hp48/graphics/grayscale/


I only ever got four colors to display reliably on my HP48, mostly through naive bank switching/mapping of GROB data.


I remember those! PSA: there's a HP48G emulator for iphone, I use it almost every day...


The computational and engineering effort for these hacks to get those results into production would have made those price-sensitive consumer gadgets far more expensive. It just wasn't economically feasible.

Also, if I'm not mistaken, those hacks look good on static images but could produce nasty artifacts on moving images.


TI-83+ calculators were doing grayscale like this 20 years ago, so a lot of people did have this as kids. It was a bit flickery but worked well.


The article actually gave an example of this from back then: the Gameboy. I think in that case the big issue with going beyond just 4 gray levels is RAM and ROM space, and also potentially the speed of the PPU and display driver in dealing with the extra data.


I'm trying to think of examples that used it on the gameboy.

The Gameboy Camera obviously did, any games though?


We were! When I was a kid 28 years ago I had an HP48 calculator that people used the same techniques with for some games and such.


There are a ton of monochrome screens available to EEs for under $20, even under $10.

This is still quite relevant.


Another excellent article on dithering: https://news.ycombinator.com/item?id=25633483


Very impressive. But it's frustrating having to read this great blog post over a tiled background that obscures the text.


I thank the reader mode of my browser in cases like this.


You know, I had a cool hardware hacking idea I really wanted to do, but both TFTs and e-ink screens are prohibitively expensive. I just wanted a cheap, low-resolution/DPI monochrome display. Lack of demand apparently means they cost more than normal color LCDs.


Is it just me? I look at the screenshot below the sentence "Then bring in the noise-shaper." that shows the difference between the 1st-order noise shaper and the Sequence LUT and… I'm seeing only a very small difference!


It makes more sense in video: https://www.youtube.com/watch?v=n7uxEaGB9t0


Ah, thank you, the post says "But as you could see from the video progress bar, I am not finished yet" but doesn't embed the video or have a link to it anywhere that I could see.


Related ongoing thread:

Lid – Lo-fi image dithering - https://news.ycombinator.com/item?id=34358828 - Jan 2023 (27 comments)


Made a (very simple!) shader toy for temporal dither on your webcam: https://www.shadertoy.com/view/clf3Rj


AKA bit-banging PWM. I've never seen it working with this kind of display before. Nice project! It can be done even with higher-frequency signals too (requires an oscilloscope and a few tries to get a smooth signal).


Excellent write-up. I really wish the photos were apng/webm/mp4 though, it would have added a lot to the reader’s ability to really grasp the changes.


I thought it might be about dithering used in HyperCard, but nope, very different approach.

I wonder if Apple ever tried this with its early CRTs for the Mac?


I can't swear that nobody ever tried it, but such a demo would have been very impressive, and I can't imagine how it could have been done. Those original black-and-white Macintoshes had an 8 MHz processor, and the screen was 512x342 pixels; just performing a single cycle's worth of computation per pixel would limit you to 45 Hz. Even if you could somehow miraculously perform dithering in a single instruction per pixel, you'd have no CPU left to run the animation or do anything else.


Would it have provided any benefit? CRTs have continuously variable brightness. The bottleneck was having large enough video RAM to even store the higher pixel depth, the processing power to draw it, and the bandwidth to do the output.

Circuitry to convert a multi-bit value to a voltage doesn't seem to have been cost-prohibitive. Even cheap devices that predate the Mac, like the Atari 2600, could do that.

I'm actually not sure why they went 1-bit with the Mac. Maybe they felt having more pixels was so important that going monochrome was a reasonable trade-off.

The original NeXT had 2-bit greyscale (black, white, 2 shades of gray), and that looked pretty nice. It also had lots of pixels and was way more expensive.


There was some code allowing 12-bit RGB display on an 8-bit palette screen mode, using very simple temporal dithering (PWM); for example, a demo for old versions of the Allegro library. This blog post shows a way to improve on that.


Good article, but the background beneath the text (the small paw prints) makes reading difficult.


you can use (abuse?) multiplication to get a nice dither without any tables:

  #include <stdbool.h>

  // Relies on two's-complement wraparound of signed ints.
  bool dither(int time, int value) {
    time *= value;  // time * value acts as a phase accumulator evaluated at frame 'time'
    // (a + b) ^ a ^ b recovers the carry bits of the addition; "< 0" tests the carry into the sign bit
    return (value + time ^ value ^ time) < 0;
  }
where value is in the range 0 ... (int_max/2)


PWM those 1 bits to get grayscale and add dithering.

I'm sure this is what the Palm Handspring did.



