The Beauty of Bresenham's Algorithm (members.chello.at)
218 points by jacquesm on Aug 22, 2017 | hide | past | favorite | 41 comments



This kind of misses the point of the beauty, which is that his original algorithm worked perfectly with pure integer math, needing only addition, subtraction, and comparison.
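For reference, a minimal sketch of that integer-only formulation (the all-octant variant; plot() is a placeholder for whatever pixel-setting routine you have):

```c
#include <stdlib.h>

/* Sketch of integer-only Bresenham, all-octant form.  The loop body
   uses only addition, subtraction, and comparison. */
void bresenham_line(int x0, int y0, int x1, int y1,
                    void (*plot)(int x, int y))
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                           /* integer error term */

    for (;;) {
        plot(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
    }
}
```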


The only routines using floats are the ones for Bezier curves, right? I think the author is actually trying to show the universality of the principles, and the routines are similar enough to back that premise up. It would of course be possible to avoid floats in them, which may or may not simplify the routines, but I suspect such changes would destroy the inherent similarity, and the author wasn't bold enough to try. (Multiplications are more common, but the same argument applies.)


Does this make the novelty of the "real Bresenham's algorithm" a historical curiosity? Now that all we care about is cache misses (hyperbole), how much do we care about a division per row of pixels?

More to the point: if you're doing anything but pure rasterising, something that requires anything more than a hash table lookup per cell, is it worth "doing it right"?


Bresenham's algorithm is very useful when building software that uses stepper motors to follow a particular path (e.g. robotics, plotters, 3D printers, CNC machines). It's especially nice if you're trying to use a wimpy CPU like an Arduino.

Note that this (and not raster display) was actually the problem Bresenham was describing in the 1965 paper.

Amusingly, the stock firmware for the Makeblock XY Plotter kit did not use Bresenham's algorithm, and it had horrible artifacts due to crazy rounding errors. Reimplementing Bresenham on a plotter does feel a bit like a trip to the past.


Bresenham's algorithm as originally designed lends itself well to implementation in hardware, where you don't want a more complex implementation than necessary, so I'd say it still matters…if GPUs bother to implement GL_LINES in hardware anymore.

(I don't know if GPUs bother with it any longer. The OpenGL spec allows but does not mandate a Bresenham-based implementation, so tessellating into triangles is theoretically an option if they can make the diamond exit rule work. Games rarely use lines, but it wouldn't surprise me if they keep the hardware around for CAD software or whatever.)


Nvidia's Quadro GPU offerings still have 'proper' GL_LINES hardware support for CAD. I'm not sure if they still use Bresenham under the skin though. But most of the consumer GPU offerings just resort to tessellation into triangles AFAIK.


I'd bet money on it tessellating under the hood, simply due to needing to support anti-aliasing and thickness.


Since OpenGL 3.0, lines with a width greater than 1 are not supported in the core profile.


Though they remain in Vulkan as an optional feature, supported by all the desktop GPUs, so they aren't going away anytime soon.

Of course, how much of that is driver fakery and how much is actual hardware support can be debated, but there's surely some hardware support, otherwise it wouldn't be exposed in Vulkan which is supposed to be a thin layer.


Are you sure they are actually supported in modern GPUs? If you enable the core profile instead of the compatibility profile, you get no thick lines. Whether those thick lines are GPU lines or software lines I don't know, but usually when things are pulled out of GL it's because they are not actually supported in the hardware.


Another property is that because it holds an integer numerator and denominator, there are no round-off errors in drawing the line. Now, we'd have to get some pretty high pixel counts for that to matter, but it's another interesting property versus accumulating a fixed- or floating-point calculated slope.


> how much do we care about a division per row of pixels?

EDIT: I just realised I'm not actually answering your pixels question, but a more generalised question of "when would we want to use this integer arithmetic method instead of floating point multiplication and division?"

It matters as much as the rounding error of repeated addition matters, especially when the error accumulates. The comment by white-flame only looks at this from the point of view of pixels, but there are many other applications where Bresenham's algorithm is the best solution. As described on Roman Black's website from 2001:

> Bresenham's Algorithm is a system where 2 imperfect periods can be alternated to produce an average that matches any "perfect" period.

https://www.romanblack.com/one_sec.htm
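A sketch of that idea in C, with made-up numbers (Roman Black describes it for PIC timers; here a hypothetical timer interrupt fires every 256 ticks of a 1 MHz clock). The remainder is carried forward exactly, like Bresenham's error term, so two imperfect periods average out to an exact second with zero cumulative drift:

```c
#include <stdint.h>

#define TICKS_PER_IRQ 256UL      /* hypothetical interrupt period */
#define TICKS_PER_SEC 1000000UL  /* 1 s at an assumed 1 MHz clock */

static uint32_t acc = 0;

/* Call from the timer ISR; returns 1 on the interrupts where a full
   second has elapsed.  The leftover ticks stay in acc, so the long-run
   average period is exactly one second. */
int one_second_elapsed(void)
{
    acc += TICKS_PER_IRQ;
    if (acc >= TICKS_PER_SEC) {
        acc -= TICKS_PER_SEC;
        return 1;
    }
    return 0;
}
```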

I probably don't have to convince you that this can still be a serious issue in embedded systems. The most obvious use-case would be long-running code where cumulative errors add up. A famous example of this going horribly wrong is the Patriot Missile disaster:

> It turns out that the cause was an inaccurate calculation of the time since boot due to computer arithmetic errors. Specifically, the time in tenths of second as measured by the system's internal clock was multiplied by 1/10 to produce the time in seconds. This calculation was performed using a 24 bit fixed point register. In particular, the value 1/10, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error. Indeed, the Patriot battery had been up around 100 hours, and an easy calculation shows that the resulting time error due to the magnified chopping error was about 0.34 seconds.

http://www-users.math.umn.edu/~arnold/disasters/patriot.html

Bresenham's Algorithm avoids this rounding error altogether (but it only works as long as you have a fixed numerator/denominator ratio, or, if you really push it, one denominator and a set of numerators).

EDIT2: Also, this does not just apply to repeated addition: calculating x * p/k from scratch whenever x changes doesn't always work either. You can still get significant rounding errors if the intermediate calculation of x * p is such a large number that it barely fits in a floating-point type.


> calculating x * p/k from scratch whenever x changes doesn't always work either. You can still get significant rounding errors if the intermediate calculation of x * p is such a large number that it barely fits in a floating-point type.

Too late to edit that accidental markdown, so replying to myself


Firmware programmers don't have the luxury of floating point calculation, L2 caches, etc.


Not sure why this is being downvoted?


It's a subtle point and it may be taken as overly argumentative.


...and I always thought that subtlety was a positive value!


Dr. Bresenham was my graphics instructor. When I later talked to other people who took graphics in college - their school had them transforming & scaling images, doing image recognition, animating scenes and so on. We drew lines and circles -- very very quickly. :)

It's also where I learned that compiler maturity matters. Everyone else in the class used Turbo Pascal. I used the then-new Turbo C (yes, plain C), and ended up with 1/3 the performance of their code.


This brings back memories of a series I used to write about game programming when I was 17. One of the articles was about Bezier curves and Bresenham's and De Casteljau's algorithms!

http://www.flipcode.com/archives/Theory_Practice-Issue_03_Cu...

http://www.flipcode.com/archives/Theory_Practice-Issue_00_In...


I loved this series when I was around that age. I'm surprised to read it was written when you were 17!


Thank you, it made my day :)


wow, those archives have a lot of interesting-sounding articles. Tagged to peruse later.


As an aside, I always think it's such a shame that no one seems to mention Bresenham's updated, and quicker, run-length slice line algorithm.

http://www.phatcode.net/res/224/files/html/ch36/36-01.html


I'm pretty sure I've seen it implemented in a number of TI-83 Z80 ASM programs back in the early 2000s, but I can't find a link to anything now. I recall seeing a "bresenSlice" implementation in Z80 at some point, though.

EDIT: Was it ever patented? Might be why people didn't use it.

EDIT2: I googled it and couldn't find any patents (except for other patents mentioning Bresenham's algorithm). However, the run-slice algorithm was published in '85, so it hasn't had as much time to become famous. Also, I found this article describing a newer algorithm from 1999 which looks pretty interesting too:

> Nowadays, most of research papers suggest improvements of the DDA method that was first presented by J. Bresenham. This paper proposes a new algorithm based on a careful analysis of the line segments’ properties some of them previously unused.

http://www.ai.univ-paris8.fr/~boyer/Articles/1999_cgf.pdf


Bresenham's algorithm holds a special place in my memory. I learned about it as a kid in Christopher Lampton's "Flights of Fantasy" and worked through duplicating it in assembly for a little graphics library. It was a great introduction to optimization, since it had to be all integer and run well on a 486. I used it most recently to generate 8-bit saw/triangle waves for an Arduino soft-synth, which also rules out floating point and per-sample division. Multiplication is out too on the tinier MCUs.


Yeah, this was when I learned that compilers are smarter than me. With all the available optimization guides in mind, I implemented Bresenham line drawing in assembly and also in C. It was pretty depressing when the Watcom C compiler showed a ~2.5x speedup on a 486, with a listing that I could barely understand.


Yes! Flights of Fantasy was so great! I loved those Waite Group books!


For lines, fixed-point wins in extreme simplicity and can be even faster than Bresenham, because there's no branching in the inner loop:

https://hbfs.wordpress.com/2009/07/28/faster-than-bresenhams...
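Roughly the approach the link describes, as a sketch: one division at setup produces a 16.16 fixed-point slope, and the inner loop is a single add with no error-term branch (x-major line with non-negative slope and 16-bit coordinates assumed):

```c
#include <stdint.h>

/* Fixed-point (DDA-style) line sketch.  Assumes x-major (dx >= dy >= 0)
   and coordinates small enough that dy << 16 doesn't overflow. */
void fixed_point_line(int x0, int y0, int x1, int y1,
                      void (*plot)(int x, int y))
{
    int dx = x1 - x0, dy = y1 - y0;
    /* one division at setup; +0.5 in fixed point so >> rounds, not truncates */
    int32_t step = ((int32_t)dy << 16) / dx;
    int32_t y = ((int32_t)y0 << 16) + (1 << 15);

    for (int x = x0; x <= x1; x++) {
        plot(x, y >> 16);   /* integer part of the running y */
        y += step;          /* branchless inner loop: just one add */
    }
}
```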


On modern machines, sure, but Bresenham will smoke it on, say, a 6502, which has no arbitrary-size shift and 8-bit words.


You don't need arbitrary shift if you make the fractional part a multiple of the word size. But yes, replacing Bresenham with fixed point kind of misses the point, also because you can write the Bresenham inner loop in a branchless way. For example, inverting the sign of "d" compared to the parent's blog post lets you do this:

    ext = -(d<0);        /* all-ones mask iff d < 0 */
    d -= 2*dy;
    y -= ext;            /* y++ when the mask is set */
    d += ext & (2*dx);   /* d += 2*dx when the mask is set */
where ext is really just a sign extension of d, so it's very cheap to compute.
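For context, here's that step wrapped in a full x-major loop (my own completion, assuming 0 <= dy <= dx; the inverted sign of d means a y-step happens when d goes negative):

```c
/* Branchless Bresenham sketch, x-major octant only. */
void branchless_line(int x0, int y0, int x1, int y1,
                     void (*plot)(int x, int y))
{
    int dx = x1 - x0, dy = y1 - y0;   /* caller guarantees 0 <= dy <= dx */
    int d = dx - 2 * dy;              /* inverted-sign error term */

    for (int x = x0, y = y0; x <= x1; x++) {
        plot(x, y);
        int ext = -(d < 0);           /* all-ones mask iff d < 0 */
        d -= 2 * dy;
        y -= ext;                     /* y++ when the mask is set */
        d += ext & (2 * dx);          /* d += 2*dx when the mask is set */
    }
}
```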

On the other hand, fixed point is better than Bresenham for the anti-aliased case present in the original article.


Even when you're happy to use floats for calculation, tracking the error when rounding/ceil()ing to integer values is great for things like layout, where you want to distribute X pixels across N columns.
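A sketch of that layout trick: carrying the division remainder from column to column, exactly like Bresenham's error term, so the widths sum to the total and differ by at most one pixel:

```c
/* Distribute `total` pixels across `n` columns with no pixel lost.
   The remainder of each division is carried into the next column. */
void distribute(int total, int n, int *widths)
{
    int acc = 0;
    for (int i = 0; i < n; i++) {
        acc += total;
        widths[i] = acc / n;   /* integer part for this column */
        acc %= n;              /* remainder carries forward */
    }
}
```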


Things get a lot more complicated than the standard bresenham in the real world.

For instance, if you plot (x0,y0) to (x1,y1), will it visit the same pixels as plotting (x1,y1) to (x0,y0)? Probably not, unless you take care.

If you need to consider clipping, then setup requires multiplication and division (or you can step N times until you are inside the clipping region, but that can be slower).

Snapping to integer endpoints produces very noticeable artifacts for animated graphics (yes, noticeable even if the lines aren't antialiased).


> For instance, if you plot (x0,y0) to (x1,y1), will it visit the same pixels as plotting (x1,y1) to (x0,y0)? Probably not, unless you take care.

Well, the simplest way around this is performing an x0 < x1 test and swapping (x0,y0) and (x1,y1) before running the algorithm if it doesn't hold.

You'd still get artifacts due to snapping, but they would be consistent at least.
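A sketch of that canonicalization, with a y tie-break added for vertical lines so reversed endpoints always normalize to the same pair:

```c
/* Order endpoints left-to-right (top-to-bottom for vertical lines)
   before rasterizing, so both directions of the same segment visit
   the same pixels. */
void canonical_order(int *x0, int *y0, int *x1, int *y1)
{
    if (*x0 > *x1 || (*x0 == *x1 && *y0 > *y1)) {
        int t;
        t = *x0; *x0 = *x1; *x1 = t;
        t = *y0; *y0 = *y1; *y1 = t;
    }
}
```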

Alternatively, if you don't mind a bit of redundancy, wouldn't it suffice to turn the original function y(x) (which is what the standard Bresenham algorithm does) into two that go x(t) and y(t), and set the step-size of t to a small enough number to never skip a pixel?


In the first case, then yes, you are taking care to avoid one of the problems a naive approach has.

In the second case, you'll end up with blobby lines. For an x-major line, the standard algorithm would plot one pixel for every unit in x. With your approach, some x's would get one pixel, and some would get two (at adjacent y's on either side of the ideal y).


Ah true, you need some form of anti-aliasing for that, which would basically end up at something like Xiaolin Wu's algorithm.

https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm


Bresenham's algorithm is also the best way to compute Euclidean Rhythms (rhythms where N hits are maximally-evenly distributed among M possible steps).
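A sketch of that, using the same integer accumulator as Bresenham's error term: a step gets a hit whenever the running sum of hits crosses a multiple of the step count, which distributes the hits maximally evenly:

```c
/* Euclidean rhythm E(hits, steps): fill pattern[] with 1s (hits) and
   0s (rests), hits maximally evenly spread among the steps. */
void euclidean_rhythm(int hits, int steps, int *pattern)
{
    int acc = 0;
    for (int i = 0; i < steps; i++) {
        acc += hits;
        if (acc >= steps) {
            acc -= steps;
            pattern[i] = 1;   /* accumulator wrapped: hit */
        } else {
            pattern[i] = 0;   /* rest */
        }
    }
}
```

E(3, 8) comes out as a rotation of the familiar tresillo pattern.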


Circles can do 8-fold symmetry by reflecting along the diagonals, i.e. in addition to (cx ± x, cy ± y) also (cx ± y, cy ± x).
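A sketch of the midpoint circle loop using that symmetry: compute one octant and mirror each point eight ways (duplicate pixels at the octant boundaries are not filtered out here):

```c
/* Mirror one octant point through all eight symmetries. */
static void plot8(int cx, int cy, int x, int y, void (*plot)(int, int))
{
    plot(cx + x, cy + y); plot(cx - x, cy + y);
    plot(cx + x, cy - y); plot(cx - x, cy - y);
    plot(cx + y, cy + x); plot(cx - y, cy + x);
    plot(cx + y, cy - x); plot(cx - y, cy - x);
}

/* Midpoint circle: walk one octant (x >= y) with an integer error term. */
void circle(int cx, int cy, int r, void (*plot)(int, int))
{
    int x = r, y = 0, err = 1 - r;
    while (x >= y) {
        plot8(cx, cy, x, y, plot);
        y++;
        if (err < 0)
            err += 2 * y + 1;
        else {
            x--;
            err += 2 * (y - x) + 1;
        }
    }
}
```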


Every single description I've seen of Bresenham uses integer co-ordinates, which is not enough in modern pipelines that all use subpixel resolution.

The modifications required to make it work properly with subpixels are not hard (the question is still along which axis the next step goes), but I've never seen them described in an article.


If it works with integers, it'll likely also work with fixed-point co-ordinates. A fixed-point version is fairly trivial if you know how the integer algorithm works.
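For what it's worth, here's my own reconstruction of such a modification (not taken from any published article): endpoints arrive in 24.8 fixed point and their fractional parts are folded into the initial error term instead of being rounded away. One division at setup, then add/compare per column; x-major with 0 <= dy <= dx assumed:

```c
#include <stdint.h>

#define FRAC 8   /* 24.8 fixed-point coordinates */

/* Subpixel Bresenham sketch.  E/dx is the subpixel distance from the
   ideal line to the current row, tracked exactly in integers. */
void subpixel_line(int32_t fx0, int32_t fy0, int32_t fx1, int32_t fy1,
                   void (*plot)(int x, int y))
{
    int32_t dx = fx1 - fx0, dy = fy1 - fy0;
    if (dx <= 0) return;                          /* sketch: x-major only */

    int x0 = (fx0 + (1 << (FRAC - 1))) >> FRAC;   /* nearest pixel columns */
    int x1 = (fx1 + (1 << (FRAC - 1))) >> FRAC;

    /* row at the first column: the single setup division */
    int64_t xc = (int64_t)x0 << FRAC;
    int y = (int)((fy0 + (int64_t)dy * (xc - fx0) / dx
                       + (1 << (FRAC - 1))) >> FRAC);

    int64_t E = (int64_t)dy * (xc - fx0)
              - (int64_t)dx * (((int64_t)y << FRAC) - fy0);

    for (int x = x0; x <= x1; x++) {
        plot(x, y);
        E += (int64_t)dy << FRAC;                 /* advance one column */
        if (E >= (int64_t)dx << (FRAC - 1)) {     /* crossed the half-row? */
            y++;
            E -= (int64_t)dx << FRAC;
        }
    }
}
```

With integer endpoints (zero fractional parts) it reduces to the usual pixel walk.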


Back in 1999, some of us implemented Bresenham's algorithm in an FPGA directly connected via a VGA connector to a monitor, and had it do the mystify screensaver in hardware. Didn't we, pjc50? Fun times.


Bresenham was the center of many of my Amiga demos, drawing lines, zooming/rotating images.



