Hacker News
How to draw ugly lines fast (cohost.org)
247 points by ibobev on Dec 20, 2022 | 45 comments



Just an aside, but in addition to graphics, this kind of thing is also relevant to motion control applications, where you have some kind of trajectory that you need to follow and must discretize it into a series of servo motor coordinate positions for the axes of (e.g.) an XY gantry.

This needs to be correct, and very fast -- sometimes the computational burden can limit your machine's top speed before the mechanical limitations are reached.


Bresenham originally developed the algorithm for drawing lines with a plotter (servos).


On old computers, you could go even faster by precomputing tiles representing little segments at various angles and then copying tiles. The trick here is to realize that you don't need many tiles to represent all the lines. The downside is that it's not quite accurate. But it's much faster. See an example here:

https://github.com/wiz21b/lowtech#the-3d


Why 7x7 tiles? Is 8x8 not a much more natural size?


Not in that graphics mode on an Apple II. Seven bits of each byte give you pixels; the eighth determines which bichromatic palette to use for those 7 pixels.

https://en.wikipedia.org/wiki/Apple_II_graphics#High-Resolut...

> The Apple II's Hi-Res mode was peculiar even by the standards of the day. [...] Each row of 280 pixels was broken up into 40 blocks of seven pixels each, represented in a single byte. Each pair of adjacent pixels generated a single color pixel via artifact color, resulting in an effective resolution of 140×192. The lower seven bits of each byte represented the pixels, while the most significant bit controlled the phase offset for that block of pixels, altering the color that was displayed.


This is one of those articles that could benefit immensely from a single picture or quick video/gif, but provides neither.

> The other thing I need to do is subpixel-correct lines, because they're so much more nicer in motion than non-subpixel-correct lines. Don't confuse this with AA lines - you can have chunky non-AA single-colour pixel lines that are ALSO subpixel-correct.

> Why this is important is hard to convey in a static picture, but in motion the quality difference is very obvious. And it turns out the speed difference is not actually significant, so why not go for the extra quality?

This claim is made almost immediately in the article, but then no proof is provided in picture OR video!

The breakdown of the algorithm and general writing is great though.


I was a bit surprised by this. Unless this was originally meant to be printed I can't see why a simple animation couldn't have been included.

It's essentially pointing out the biggest issue with the article and then shrugging it off, while obviously putting a lot of effort into the writing.


That would be extra work, maybe enough that the article would not have been published. I'll take what we got.

Or you could post GIFs yourself. Maybe that seems like too much work?


Here is an article that goes through triangle rasterization in detail, with explanations of subpixel precision - along with animations that show the quality improvement.

https://kristoffer-dyrkorn.github.io/triangle-rasterizer/


Thanks, I had no idea what was meant by sub-pixel; starting from section 6 of your link explains this and shows the smoothed animation.

(By sub-pixel they mean more finely determining the endpoints of the lines on an arbitrary grid finer than the output pixels — NOT physical display sub-pixels.)


I am very happy with what we got, as I wrote in another comment. I still find it strange that the author would go out of his way to write the article, produce screenshots, write multiple times how it's hard to explain or show without an animation, yet not record one.

Asking whether I'm willing to provide my own gif isn't quite a fair measure. The author has the code running and he himself commented on the need for moving pictures. I have partial code and no personal investment in this.

I have however written a few articles about stuff I've done for my own work and wherever it seemed relevant I've taken the time to produce and include video. That's been a lot less effort than writing the text, even when it's required a number of changes in the scenes and code.


This was wonderful to read. Informative, well explained, and humorous (the reference to "a pixel is not a little square" took me by surprise and made me laugh).

I feel like I've gotten a better understanding of things I thought I knew already. The only question left in my mind is "why does he always need to write new subpixel-accurate line drawers?!"


Under his picture, you get the explanation.

    Tom Forsyth
    GPU designer and gfx coder

    Gfx coder and chip designer. Worked at Oculus, Valve, RAD, Muckyfoot, 3Dlabs, now back at Intel. Blade2, Larrabee, TF2, VR and many other atrocities.


Yes, I did read this. It was more a comment in appreciation of the humour of that opening. I suppose it makes sense if you take "every now and then" to be every few years rather than every few weeks, but it still sounded funny.


It would help if he defined what "subpixel accurate" means. I can't figure out why you'd care about subpixels if you're producing a bitmap image? Or is he doing something like cleartype but not antialiased?


It becomes important with low resolution framebuffers and once things start to move. Triangle outlines in early 3D games tended to wobble and jitter because the 'math' didn't work at subpixel precision (many PS1 3d games suffered from this).

With subpixel precision (e.g. you do all the math at a higher precision than the framebuffer resolution, and preserve the fractional part of pixel coordinates in all computations until you actually write to the framebuffer), the outlines remain stable even for small movements. IIRC Quake (with the software renderer) was one of the first games which paid proper attention to subpixel accuracy, and they showed this off with a very slight camera movement at the result screen after a multiplayer match. Triangle edges were properly 'crawling' instead of being all jittery.
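Here's a rough sketch of the idea in code - not Quake's or the article's actual routine, just the general shape, with an assumed put_pixel() helper, an arbitrary 16.16 fixed-point format, and only the mostly-horizontal case handled for brevity:

    #include <stdint.h>

    void put_pixel(int x, int y);   /* assumed helper, not a real API */

    /* Mostly-horizontal case only (x1 > x0, |dy| <= |dx|). Assumes
       non-negative coordinates and an arithmetic right shift. */
    void draw_line_subpixel(float fx0, float fy0, float fx1, float fy1)
    {
        int32_t x0 = (int32_t)(fx0 * 65536.0f), y0 = (int32_t)(fy0 * 65536.0f);
        int32_t x1 = (int32_t)(fx1 * 65536.0f), y1 = (int32_t)(fy1 * 65536.0f);
        if (x1 <= x0)
            return;

        /* Slope in 16.16, computed once with a 64-bit intermediate. */
        int32_t slope = (int32_t)(((int64_t)(y1 - y0) << 16) / (x1 - x0));

        /* Sample at pixel-column centres: the first centre at or after x0,
           the last centre at or before x1. */
        int px_first = (x0 + 0x8000) >> 16;
        int px_last  = (x1 - 0x8000) >> 16;

        /* y at the first sampled centre, carrying x0's fractional offset. */
        int32_t xc = (px_first << 16) + 0x8000;
        int32_t y  = y0 + (int32_t)(((int64_t)(xc - x0) * slope) >> 16);

        for (int x = px_first; x <= px_last; x++) {
            put_pixel(x, y >> 16);   /* round to a whole pixel only here */
            y += slope;
        }
    }

Because px_first, px_last and the initial y all depend on the fractional parts of the endpoints, nudging an endpoint by a fraction of a pixel smoothly changes which pixels light up instead of snapping to the same staircase every frame.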


Also the death screen in single-player, and I remember being impressed by the effect.


The standard Bresenham algorithm expects the start and end points of the line segment to be perfectly centered inside their grid cells. The modified version lets you put the start and end points anywhere.

For example, the standard algorithm would always generate lines that are symmetrical:

    ####
        ####
But the modified version lets you position the endpoints differently, so that you'd get:

     ###
        #####
or

     ##
       ######
etc


The endpoints of the lines have subpixel resolution, so although you’re only drawing whole pixels, the choice of which pixels to draw along the line may change based on the subpixel positions of the endpoints.


I was confused about this too. What he means is fractionally accurate endpoints of the lines. So it's just about the line endpoint coordinates not being exact integers.


And the reason why fractionally accurate endpoints matter is that they affect what exact pixels get drawn, which in turn makes animation look much smoother. Eg. a line that "crawls" from 0 to 6 as the endpoints move fractionally upward looks much less jittery than a line that's always 3.

    0 #
       #######

      ##
        ######

      ###
         #####
 
    3 ####
          ####

      #####
           ###

      ######
            ##

    6 #######
             #


He's not doing anything like ClearType, but for smooth animation you'd still want your end points to have more than single-pixel accuracy.


I personally prefer the fixed-point algorithm for its simplicity and performance: https://news.ycombinator.com/item?id=9954975


This one will however slowly drift away from the correct line as it accumulates the rounding error of the slope in each iteration. I am not sure what the worst case is and how long of a line you would have to draw to make a visible difference.


As long as the accumulated error is deterministic (i.e. you get the same wrong output given the same line), and that error is continuous over the field of possible lines, rather than jumping around (i.e. a line that sags to the left a bit, if translated by one subpixel, will still sag to the left a bit, rather than now sagging to the right), that'd be fine for what the author is trying to do, no? The lines just need to stay stable under animation rather than doing the https://tvtropes.org/pmwiki/pmwiki.php/Main/LineBoil thing.


Well, it is deterministic and all, but I like my lines to connect the points I specify and not end at nearby ones. A line from (0, 0) to (90, 70) has a slope of 7/9. If you represent that with one fractional digit as 0.8 and then on each step just add 0.8 to the y coordinate, you will end up at (90, 72). If you want to reach (900, 700) you will end up at (900, 720). Two fractional digits will get you to (900, 702), and three will make you actually land on (900, 700) - well, not exactly, but at (900, 700.2). And even then some pixels might not be in the correct place.

In the right circumstances this might all be fine and work well enough if you have enough precision for the fractional part, but this is no algorithm that I would recommend to anyone.
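For a concrete feel of the drift, here's a tiny self-contained check (assuming an 8.8 fixed-point slope; the precision used in the linked algorithm may differ), walking from (0, 0) towards (900, 700) with a truncated slope of 7/9:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t slope = (7 << 8) / 9;   /* 7/9 in 8.8 fixed point, truncated */
        int32_t y = 0;                  /* accumulated y, 8.8 fixed point    */
        for (int x = 0; x < 900; x++)
            y += slope;
        printf("final y = %f (pixel %d)\n", y / 256.0, y >> 8);
        return 0;
    }

It ends up on pixel 699 instead of 700 - roughly 0.4 pixels short after 900 columns. More fractional bits shrink the drift, but they don't make a slope like 7/9 exactly representable.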


Can you avoid the error — without needing to use higher-precision math for every step — by 1. using higher-precision math only to calculate some control points along the line; and then 2. partitioning long lines into smaller overlapping lines that connect at the control points; so that you can then 3. draw those short lines with the cheap algorithm?


You probably could, but why? The only advantage that the algorithm could maybe claim is conceptual simplicity and this will be gone the moment you start trying to control those errors.


Is there a theoretical ideal of what line-drawing and anti-aliasing algorithms are aiming for? Is there a perfect line drawing algorithm you can use if you have the time?


There is a very good signal theoretic framework to discuss this.

You can start reading from, for example, Alvy Ray Smith's "A pixel is not a little square" http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

But in the end, the sampling kernel you choose to sample your line with, and what sort of geometric primitive your line is in the first place, come down to matters of taste. So there is a very good theoretical basis for discussing all of this, but there is no ultimate, absolutely correct answer that it will provide.


The signal theoretic framework depends on bandwidth limitations that often don't hold for signals we want to produce. Sharp lines and boundaries are reasonable things to want.

I'd say the sampling kernels aren't fully matters of taste, but matters of the display or sampling technology instead. Pixels on CRTs are indeed not little squares, but pixels on anything else pretty much are -- though they can be non-contiguous squares, with different patterns for different colors, even, which eliminates any simplicity that the little-square picture intuitively captures.

Fortunately, all this starts to matter less and less as pixel size and spacing gets smaller and smaller.


I’d say no because there are different goals that can be incompatible, i.e., “perfect” is not well defined. This article’s focus is on accuracy and speed but not visual quality. Antialiasing goes for visual quality and sometimes speed, but most techniques land in an area that balances those or leans a bit one way or the other, while very few techniques are achieving the known maxima of either, let alone both at the same time.

Sibling comment mentions the famous paper “A Pixel is Not a Little Square”. The implication being that using squares when computing analytic coverage for antialiasing is not “perfect”. The problem is we don’t have a single definition of perfect, because it depends on what display device you’re using. CRTs and LCDs and printers all have completely different ‘pixels’, so no single solution exists that works for all of them. Combine that with the fact that getting close to perfect for any given device is very computationally expensive, and you can see why we don’t usually even shoot for perfect.


If the author is here (or notices this post) I found a typo in the article:

> "But in practice you can do significantly better, especially for lines that are nearly horziontal..."

I must say though, "horziontal" does sound fancy!

I signed up to make a comment about it in the article but it "takes a day or two" for the account to be allowed to comment :shrug:


Love your observation! I hope the author never fixes his typo. I can always google this essay by "horziontal"!


That won't work. There are many pages containing that misspelled word :(


44200 hits on Google, for that particular misspelling.


Is there a name for believing that figure Google spits out when it guesses the number of pages? A trustgoogle? A googull? I've seen it in so many places from people I'd think would know better.

Go to page 15 of the results (the last page) and it says

> Page 15 of about 142 results

If I "include omitted results" it claims 331 results, but these include pages with the "horizontal" spelling only...


>It can be done entirely with integer arithmetic.

That's true when the first and last points are within or not too far from the screen/clip rectangle. But if you want to draw a long line (with 32-bit integer coords) that lies mostly outside of it, and you don't want clipping to introduce half-pixel errors/inconsistencies, then unless you run Bresenham for a long time outside the clip, floating point is a more accurate tool for doing the clipping and the drawing.

See for example: https://github.com/jeffhain/jolikit/blob/master/src/main/jav...

[edit: it might actually be possible to jump to the clipped area while staying in integers and not lose accuracy, but I don't recall why I didn't try that]
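For reference, the float-based clipping looks roughly like this - a generic Liang-Barsky-style parametric clip in doubles, just a sketch of the approach, not the jolikit code linked above:

    #include <stdbool.h>

    static bool clip_axis(double p, double q, double *t0, double *t1)
    {
        if (p == 0.0)                  /* segment parallel to this edge */
            return q >= 0.0;
        double t = q / p;
        if (p < 0.0) {                 /* entering through this edge */
            if (t > *t1) return false;
            if (t > *t0) *t0 = t;
        } else {                       /* leaving through this edge */
            if (t < *t0) return false;
            if (t < *t1) *t1 = t;
        }
        return true;
    }

    /* Clips the segment to [xmin,xmax] x [ymin,ymax] in place; returns
       false if it lies entirely outside the clip rectangle. */
    bool clip_line(double *x0, double *y0, double *x1, double *y1,
                   double xmin, double ymin, double xmax, double ymax)
    {
        double dx = *x1 - *x0, dy = *y1 - *y0, t0 = 0.0, t1 = 1.0;
        if (!clip_axis(-dx, *x0 - xmin, &t0, &t1)) return false;
        if (!clip_axis( dx, xmax - *x0, &t0, &t1)) return false;
        if (!clip_axis(-dy, *y0 - ymin, &t0, &t1)) return false;
        if (!clip_axis( dy, ymax - *y0, &t0, &t1)) return false;
        double nx0 = *x0 + t0 * dx, ny0 = *y0 + t0 * dy;
        double nx1 = *x0 + t1 * dx, ny1 = *y0 + t1 * dy;
        *x0 = nx0; *y0 = ny0;
        *x1 = nx1; *y1 = ny1;
        return true;
    }

Because both clipped endpoints are computed from the same original line equation, the visible part stays on the same mathematical line as the unclipped segment - which is the consistency being asked for above - and the result can then be handed to whatever integer or fixed-point drawer you use.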


The drawing actually needs arbitrarily high precision in the worst case. Consider a nearly horizontal line segment spanning the whole viewport where the endpoints' y coordinates are arbitrarily close to the boundary line between two pixels, on either side. You've got to decide (1) where to put the endpoints, and (2) where to split the line segment if the endpoints end up being placed at differently rounded y coordinates. Both steps might require high precision, and for either of them it might be computationally infeasible to get an exact result. (The latter step depends on a ratio of arbitrarily small subpixel deltas, and bounding such a ratio exactly is what might be infeasible.)


Is it possible to do a SIMD version?

I know compute shaders exist, but I have found that with huge datasets and a wide variety of end-user hardware, it tends to end up with many lines of code and brittleness. So I’m curious about high performance CPU-only implementations.


You'd probably want to process multiple lines at once and use scatter to do the pixel drawing, I guess? But if you have one long line and a bunch of short ones, the perf would tank due to the lanes all dying, like gpu shader divergence.


What's wrong with drawing a pixel everywhere the mathematical line intersects a pixel (possibly resulting in a thicker line)?


Performance. The point of line drawing algorithms like Bresenham is to not need to do anything like costly floating-point intersection tests for every pixel. Also aesthetics: a single-pixel width line with occasional doubled pixels just looks uglier than one without them. That said, the algorithm for finding every cell in a grid lattice that intersects a line is occasionally useful for purposes other than line drawing.
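For completeness, here's a sketch of that grid-traversal algorithm, in the spirit of Amanatides & Woo's voxel traversal, with an assumed visit_cell() callback. Unlike Bresenham it can visit two cells in the same column, which is exactly where the occasional doubled pixels come from:

    #include <math.h>
    #include <stdlib.h>

    void visit_cell(int x, int y);     /* assumed helper */

    void grid_traverse(double x0, double y0, double x1, double y1)
    {
        int x = (int)floor(x0),  y = (int)floor(y0);
        int xe = (int)floor(x1), ye = (int)floor(y1);
        int step_x = (x1 > x0) ? 1 : -1;
        int step_y = (y1 > y0) ? 1 : -1;

        double dx = fabs(x1 - x0), dy = fabs(y1 - y0);
        /* Parametric t to the first x/y cell boundary, and per-cell deltas. */
        double tdx = (dx > 0.0) ? 1.0 / dx : INFINITY;
        double tdy = (dy > 0.0) ? 1.0 / dy : INFINITY;
        double tmx = (dx > 0.0)
            ? ((step_x > 0 ? (x + 1 - x0) : (x0 - x)) / dx) : INFINITY;
        double tmy = (dy > 0.0)
            ? ((step_y > 0 ? (y + 1 - y0) : (y0 - y)) / dy) : INFINITY;

        int n = abs(xe - x) + abs(ye - y);   /* number of cell crossings */
        visit_cell(x, y);
        for (int i = 0; i < n; i++) {
            /* Step across whichever boundary the line reaches first. */
            if (tmx < tmy) { tmx += tdx; x += step_x; }
            else           { tmy += tdy; y += step_y; }
            visit_cell(x, y);
        }
    }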


very funny


This is a nice technical dive into line drawing.



