The Beauty of Bresenham's Algorithm (free.pages.at)
118 points by arlog on Aug 7, 2012 | 46 comments



Bresenham's algorithm is a useful counterargument when someone suggests that software patents are a good thing. How far back would the progress of computer graphics have been set if Bresenham or his employer had claimed ownership of this idea?


I think you do Bresenham a great disservice by underestimating how far ahead of its time this really was. It dates from 1962, so any patent would have expired by 1982, and would probably have had almost no effect: it would have run out before the market advanced to the point where the algorithm was generally applicable.


It's useful for a lot more than just graphics, e.g. linear interpolation on CPUs that are integer-only (most of them at the time) and have small registers (again, most of them), where fixed point is of limited usefulness. That has a huge number of applications.
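
As a concrete illustration (a minimal sketch, not from the thread; the function name and printed output are only for demonstration), here is the same add-and-wrap error-accumulator trick used to interpolate integers with no per-step division:

    #include <stdio.h>
    #include <stdlib.h>

    /* Interpolate from y0 to y1 over 'steps' equal steps using only integer
       addition/subtraction, Bresenham-style: accumulate the numerator and
       step the output whenever it wraps past 'steps'. */
    void lerp_int(int y0, int y1, int steps) {
        int dy   = abs(y1 - y0);
        int yinc = (y1 > y0) - (y1 < y0);   /* -1, 0 or +1 */
        int err  = steps / 2;               /* start in the middle, as Bresenham does */
        int y    = y0;

        for (int i = 0; ; i++) {
            printf("%2d -> %d\n", i, y);
            if (i == steps) break;
            err += dy;
            while (err >= steps) {          /* may step several times if |dy| > steps */
                err -= steps;
                y   += yinc;
            }
        }
    }

    int main(void) {
        lerp_int(0, 10, 4);                 /* prints 0, 3, 5, 8, 10 */
        return 0;
    }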


I first learned of Bresenham's line-drawing algorithm from Michael Abrash in his Graphics Programming Black Book. I've yet to come across a more thorough and easy-to-understand discussion:

http://downloads.gamedev.net/pdf/gpbb/gpbb35.pdf


Here's the summary of Bresenham's algorithm from Wikipedia, if anyone wants to know a little more about it:

"The Bresenham line algorithm is an algorithm which determines which points in an n-dimensional raster should be plotted in order to form a close approximation to a straight line between two given points. It is commonly used to draw lines on a computer screen, as it uses only integer addition, subtraction and bit shifting, all of which are very cheap operations in standard computer architectures. It is one of the earliest algorithms developed in the field of computer graphics. A minor extension to the original algorithm also deals with drawing circles."


Bresenham's, at least as implemented there, is not the most efficient algorithm.

There are 4 checks per loop in that version. The faster implementation recognizes that one coordinate (the one along the major axis) always increments or decrements by 1 on every iteration, while the other coordinate may or may not step.

That removes 2 of the 4 checks. The loop just runs N times, where N is max(abs(dx), abs(dy)).

I don't know if that's still considered Bresenham's or not.

    /* setPixel(x, y) is assumed to be provided elsewhere */

    int sign(int v) {
        return v > 0 ? 1 : ((v < 0) ? -1 : 0);
    }

    int abs(int x) {
        return x > 0 ? x : -x;
    }

    int max(int a, int b) {
        return a > b ? a : b;
    }

    void drawLine(int x1, int y1, int x2, int y2) {
        int deltaX = x2 - x1;
        int deltaRow = abs(deltaX);
        int rowInc = sign(deltaX);

        int deltaY = y2 - y1;
        int deltaCol = abs(deltaY);
        int colInc = sign(deltaY);

        int rowAccum = 0;
        int colAccum = 0;
        int rowCursor = x1;   /* "row" tracks x here, "col" tracks y */
        int colCursor = y1;

        /* the loop runs max(|dx|, |dy|) times: one pixel per major-axis step */
        int counter = max(deltaCol, deltaRow);
        int endPnt = counter;
        if (counter == deltaCol) {
            /* y is the major axis: colCursor steps every iteration,
               rowCursor steps only when the accumulator wraps past endPnt */
            rowAccum = endPnt / 2;
            for (; counter > 0; --counter) {
                rowAccum += deltaRow;
                if (rowAccum >= endPnt) {
                    rowAccum -= endPnt;
                    rowCursor += rowInc;
                }

                colCursor += colInc;
                setPixel(rowCursor, colCursor);
            }

        } else {
            /* x is the major axis: rowCursor steps every iteration,
               colCursor steps only when the accumulator wraps past endPnt */
            colAccum = endPnt / 2;
            for (; counter > 0; --counter) {
                colAccum += deltaCol;
                if (colAccum > endPnt) {
                    colAccum -= endPnt;
                    colCursor += colInc;
                }
                rowCursor += rowInc;
                setPixel(rowCursor, colCursor);
            }
        }
    }
This is an optimized version of an algorithm I found in the Atari 400/800 Technical Reference Manual, page 218.


About a week ago I learned that Bresenham's algorithm is also important in mobile robotics.

Basically, you model your map as a grid. After firing a sensor such as a LIDAR or sonar and finding that something blocks its cone of sight, you use Bresenham's to see which cells in your map the new reading provides information about. (I.e. a return bouncing back from 2 meters away not only tells you there is something at 2 meters, it also tells you there is nothing along that trajectory; all the cells you now assume are free, and the one you assume is not, come out of Bresenham's algorithm.)


Could you please elaborate on how this works or point to some resources on this application of the algorithm (if you know of any)?


Essentially it's ray-tracing, but in a grid. Bresenham's algorithm lets you do the ray-tracing efficiently.

This is done because a LIDAR/sonar/whatever scanning range sensor returns the angle and range to a reflective object. In most cases there's an implicit additional piece of information, namely that there's nothing in between the sensor and the object, since the pulse was able to get there and back. Bresenham's algorithm is used to tell you the grid cells in which you can assume free space.
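
A rough sketch of how that update can look in code (the grid size, log-odds constants and names here are made up for illustration, not taken from any particular robotics library): walk the Bresenham line from the sensor's cell to the cell of the range return, marking the cells passed through as more likely free and the final cell as more likely occupied.

    #include <stdlib.h>

    #define GRID_W 100
    #define GRID_H 100

    static float grid[GRID_H][GRID_W];   /* log-odds occupancy grid */

    #define LOG_ODDS_FREE (-0.4f)        /* illustrative update values */
    #define LOG_ODDS_OCC  ( 0.9f)

    /* Update the grid along one sensor ray, from the sensor cell (x0, y0)
       to the cell of the range return (x1, y1). Bounds checks omitted. */
    void updateRay(int x0, int y0, int x1, int y1) {
        int dx =  abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;

        for (;;) {
            if (x0 == x1 && y0 == y1) {
                grid[y0][x0] += LOG_ODDS_OCC;    /* the pulse stopped here */
                break;
            }
            grid[y0][x0] += LOG_ODDS_FREE;       /* the pulse passed through */
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }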


This enabled, and was core to, many demos from the 80s demo scene (C64, Amiga).


I doubt anyone here really needs it, but here's a C program that demonstrates all three functions in a terminal. A global array and a macro for setPixel are enough to play with these, and that's really pretty much all I did.

https://gist.github.com/3291916
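
(The gist isn't reproduced here, but the harness it describes is roughly the following; the buffer size, characters and the trivial demo loop are only illustrative. The idea is to paste the article's drawing routines in after the macro so their setPixel calls expand into buffer writes.)

    #include <stdio.h>
    #include <string.h>

    #define W 60
    #define H 20
    static char fb[H][W];                /* global "framebuffer" of characters */

    /* a macro is enough: clip, then mark the cell */
    #define setPixel(x, y) \
        do { if ((x) >= 0 && (x) < W && (y) >= 0 && (y) < H) fb[(y)][(x)] = '#'; } while (0)

    /* ...paste the drawLine / circle / bezier routines here... */

    int main(void) {
        memset(fb, '.', sizeof fb);
        for (int i = 0; i < H; i++)      /* trivial demo; replace with drawLine(...) etc. */
            setPixel(3 * i, i);
        for (int y = 0; y < H; y++) {
            fwrite(fb[y], 1, W, stdout);
            putchar('\n');
        }
        return 0;
    }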


To save you guys the time of running the code yourself, here's the output in codepad.org:

http://codepad.org/y7fIoHlm


I remember implementing this in asm! Fun times.

It's very funny how this sort of stuff comes back at opportune times to magically solve really hard problems for you with a single pass. Sort of like having Knuth on your bookshelf.


Are you sure about the Bezier curve algorithm? It has "xx *= xx;" and xx is never reset (unless I misunderstand the code).


Why do people write non-conditional loops as "for(;;)" instead of "while(1)"?


According to the C99 standard [0], an omitted optional expression-2 in a for loop is replaced by a non-zero constant. Thus, for(;;) is equivalent to for(;1;) which is effectively equivalent to while(1) as there are no other declarations or expressions. And, for those who are counting, it also saves a character.

[0] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf (See 6.8.5.3)


Why would anyone wreck legibility to save a character?


I don't think it wrecks legibility, since it's a C idiom. People reasonably experienced in C will read "for (;;)" as an infinite loop just as well as "while(1)". That said, it might be a little confusing for beginners (or others who don't use C very regularly).


Ahhh, the ole' semicolon debate.


Because the condition in while(1) is a constant, which can sometimes indicate a bug, and thus some compilers emit a warning.

http://msdn.microsoft.com/en-us/library/6t66728h%28v=vs.80%2...

http://stackoverflow.com/questions/3490823/why-msvc-generate...


So that you can do tricks like this:

    #define ever (;;)
    for ever {...}
;-)


Not applicable to the original article, but the reason I do it is because Go has no while loop. It doesn't require all those extra semis, though:

  for { 
    ...
  }


Because while(1) still has a condition that is checked on every loop iteration when optimizations are disabled. for(;;) can be read as "forever".


FWIW, GCC and Perl generate the same code for both.

http://stackoverflow.com/questions/885908/while-1-vs-for-is-...


Interesting. Looks like they'd be the same performance-wise, since

for (;;);

is the same as

{ while (true) { ; ; } }

I originally asked this question cos it's used this way in the first C program in the link. Bit of a tangent though :P


That issue disappeared before the 1980s. Check your compiler.


Any compiler/interpreter worth its salt will notice when an expression being tested in a conditional is constant and generate code that doesn't actually test it at run time. for (;;) is arguably more "idiomatic" C, though.


for(;;), being equivalent to for(;1;), also has that condition that has to be checked. So, it all boils down to how compilers treat the two.

And to add fuel to the fire, here are some even 'better' ways to do this:

  do {
    ...
  } while(1);
At least that fits the common macro pattern "do{...}while(0)" (http://stackoverflow.com/questions/257418/do-while-0-what-is...).

Of course, all of these are inferior to:

  considered:
    ...
  goto considered;
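
For anyone who hasn't run into the do { ... } while(0) pattern mentioned above: the point is to make a multi-statement macro behave like a single statement, so it composes correctly with if/else and takes the usual trailing semicolon. A small illustrative example (the macro itself is made up):

    #include <stdio.h>

    /* Without the do/while(0) wrapper, the two inner statements would break
       the if/else below: only the first would be guarded by the if. */
    #define SWAP_INT(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

    int main(void) {
        int x = 2, y = 1;
        if (x > y)
            SWAP_INT(x, y);              /* expands to a single statement */
        else
            printf("already ordered\n");
        printf("x=%d y=%d\n", x, y);
        return 0;
    }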


I do doubt that, even for a for(;;), a jmp (x86) is used instead of a conditional jump like jz.


Compilers are smart! For common things like this, they will easily optimise away any conditional jumps.


It is; you can easily check that yourself: for(;;) compiles to exactly one jmp in GCC (tested with gcc 4.7.0).


VC++ generates a warning with while(1) but not with for(;;). I just can't remember what warning level you need in order to see that distinction.


Why do people write while(1) instead of for(;;)? Are you saying that "while(1)" is superior in some way? Why?

I prefer "for(;;)" because it is explicitly defined as an infinite loop, but "while(1)" needs an implicit cast of "1" to "true", and even then there's an implicit test for "true".


True and false are actually defined as 1 and 0 respectively. I don't have a copy of the standard but this specifies true and false as macros.

http://pubs.opengroup.org/onlinepubs/007904975/basedefs/stdb...

Edit: Of course I'm talking about C here. Also, I prefer for(;;) because it's an easy to recognize idiom that means "infinite loop."


There's no casting involved in while(1). 1 is "true" in C, in fact, any integer != 0 is "true".

I just did a quick check and gcc -S produces the same ASM output for both of them, which doesn't include any comparisons whatsoever, just a jmp. So both of them are really compiled to an actual infinite loop.


I guess it's just personal preference. "for(;;)" looks really messy to me, and doesn't seem obvious that it signifies an infinite loop. I think that even the standard format for for loops (for (initialisation; condition; thing done at end of loop)) is quite unintuitive, and I have to think a bit to remember what each part does if one of them is omitted. It's also the only place I can think of where semicolons aren't expected at the end of a line (conventionally). I don't find a problem implicitly casting "1" to "true", I sort of do this automatically after programming in C for a while. Obviously if I were using java or something, I'd use "while(true)". Basically, I don't have to think much to know that it's an infinite loop :)


You could just say while(true). And I'm not sure that the test for true is implicit, since that is what while() explicitly does.


C doesn't have true / false as language constants, only as override-able macros.

The good thing is that modern compilers can evaluate the loop condition, and if the break condition is constant they can just emit an infinite loop without checking it.


It's 1 character shorter.... ;)


I remember coding this up in ARM7 assembler (Gameboy Advance) a while back. Clever algorithm.


By the way, I had a teacher (J.P. Reveillès) who invented the theoretically fastest line-drawing algorithm ever: he simply figured out that, for a line, there is a pixel pattern which repeats itself (unless the variation is a transcendental number http://en.wikipedia.org/wiki/Transcendental_number), so you only have to compute that pattern once and copy/paste it. If I remember correctly, for example, a line with a variation of 1/4 has a pattern only 4 pixels wide!

His discrete line equation is rather beautiful (though it is easily derivable from Bresenham's algorithm): 0 ≤ ax − by < ω
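
A quick toy check of the repetition claim (not Reveillès' actual code): for a slope a/b in lowest terms, the column-to-column step pattern of y = floor(a*x/b) repeats with period b, so a slope of 1/4 gives the pattern 0,0,0,1 over and over.

    #include <stdio.h>

    int main(void) {
        int a = 1, b = 4;                  /* slope 1/4, as in the example above */
        int prev_y = 0;
        for (int x = 1; x <= 3 * b; x++) { /* three periods, to show the repeat */
            int y = (a * x) / b;           /* floor of a*x/b */
            printf("%d", y - prev_y);      /* 1 if the line stepped up, else 0 */
            prev_y = y;
            if (x % b == 0) printf(" | ");
        }
        printf("\n");                      /* prints: 0001 | 0001 | 0001 | */
        return 0;
    }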


Wouldn't the exception be lines whose slope is irrational (i.e. can't be represented as a ratio of integers)? Then the repeating pattern is analogous to the repeating digits in the decimal representation of a rational number.


How does a computer distinguish an irrational number from a good (necessarily rational) floating-point approximation of one?


The algorithm only works with points with integer coordinates, so the slope is always rational.


I think you have the last word, because line drawing algorithms become irrelevant if you use a floating point unit.

Just:

{ x++ and y += slope } or { y++ and x += 1./slope }

So slopes are always rational in line drawing algorithms.
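
In code, that floating-point approach is roughly the following (a sketch of the |slope| <= 1 case only; setPixel is assumed to exist, and the rounding choice is arbitrary):

    #include <math.h>

    void setPixel(int x, int y);           /* assumed to exist elsewhere */

    /* Naive FPU line drawing for the |slope| <= 1 case (assumes x0 != x1):
       step x by one pixel, accumulate y by the slope, round when plotting. */
    void drawLineFP(int x0, int y0, int x1, int y1) {
        double slope = (double)(y1 - y0) / (double)(x1 - x0);
        int    xinc  = x1 > x0 ? 1 : -1;
        double y     = y0;

        for (int x = x0; x != x1 + xinc; x += xinc) {
            setPixel(x, (int)lround(y));
            y += slope * xinc;
        }
    }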


If y and slope are floating point in the first block, then might y not end up at something other than the desired endpoint because of accumulated error?



