How to Draw a Circle (banu.com)
180 points by hendi_ on Oct 14, 2011 | 50 comments



In the blog post, he links to this Slashdot comment about circle-drawing at Apple on the Macintosh project: http://apple.slashdot.org/comments.pl?sid=2462124&cid=37...

That comment links to http://www.folklore.org/StoryView.py?project=Macintosh&s..., which talks a little bit about algorithms for drawing circles, ellipses, and RoundRects, and in particular about what was used in QuickDraw.

QuickDraw source code is available at http://www.computerhistory.org/highlights/macpaint/ and I've posted what I think is the relevant file at https://gist.github.com/1287861. I think the relevant routine is BumpOval, starting around line 1000. But I don't understand it at all, partly because it's been 18 years since I looked at 68000 assembly. Can anyone help explain what's going on in there? I'd like to know if the algorithm being used there is the same one in Mukund's awesome blog post.

One bug: Mukund, if you're going to call malloc() without checking its return value, use xmalloc()! It's like five lines of code.
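
Something like this, say (a sketch in the spirit of the libiberty version, not its exact code):

  #include <stdio.h>
  #include <stdlib.h>

  /* Sketch of an xmalloc in the libiberty spirit: same interface as
     malloc, but never returns NULL -- it aborts on failure instead. */
  void *xmalloc(size_t size)
  {
      if (size == 0)
          size = 1;  /* malloc(0) may legally return NULL */
      void *p = malloc(size);
      if (p == NULL) {
          fprintf(stderr, "out of memory allocating %zu bytes\n", size);
          abort();
      }
      return p;
  }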


I don't know what the deal is on Linux, but in professional code, I disagree: don't bother checking malloc; just rig malloc to blow up when it fails.

You'll do more harm trying to use something like xmalloc and failing to use it consistently (obviously you need to "x" more than just "malloc") than you will by simply taking steps to guarantee a swift failure, so that any malloc failure ends the program.

Obviously, in any code where "malloc(3)" is the way you get more memory, I think it's pretty silly to check malloc's return value. You aren't going to get recovery right.


Maybe you're thinking of a different xmalloc than I am? I'm talking about the one from libiberty, which is exactly a way to "rig malloc to blow up when it fails". (edited:) Or do you mean that xmalloc isn't a reliable way to do that because some piece of your code might call malloc instead? Is it better to depend on some nonstandard behavior from the standard library?

I think there are only very rare cases in production code where it makes sense to check malloc's return value and try to recover, but they do occasionally exist.


No, I'm thinking of the exact same thing you're thinking of, and I'm saying the "xmalloc" solution looks like it does the same thing as simply rigging malloc to blow up, but does not do so consistently (because "x" malloc isn't going to be used consistently).

I think we agree about explicit malloc return value checks.


It's good to agree with you about something.

I don't think I understand what you mean by 'obviously you need to "x" more than just "malloc"'. If you use xmalloc() instead of malloc(), and either abjure realloc() or use xrealloc(), haven't you covered all your bases?

It's true that a bug could creep in somewhere, but that hardly seems like a particularly damning criticism. Every place you use xmalloc() instead of malloc() is one less bug, one less potential Mark Dowd Flash exploit. And you can use nm(1) or the Visual Studio equivalent to find out if your program still has any calls to malloc() or realloc(), and then you can remove them.

From my perspective, the problem with rigging malloc() itself to blow up is that there's no portable, standard way to do that. And, if someone fails to do it when they're porting your code to a new platform, everything will appear to work perfectly, but the port will have introduced a subtle bug that may result in remote-root vulnerabilities. That would be really bad.


Try to list all the other functions that need "x" equivalents. Here's a free one: xcalloc.


True! It's already in xmalloc.c, but xstrdup isn't. And if you're using third-party libraries that call malloc(), you're probably screwed unless you have a way to change malloc()'s behavior. So you've basically convinced me.


Yay! Another convert! We're gradually taking over the world.

It is seriously soooo much easier to write C code when you don't have to explicitly check allocation returns. I shaved FOUR THOUSAND LINES out of a not-particularly-huge-to-begin-with server in a previous job simply by getting rid of ridiculous chains of "functions that returned errors that ultimately boiled down to malloc failures", and "call sites to those functions that checked their errors and propagated them". I'll never explicitly check malloc() again.

(There are allocators you do need to check; they just tend not to be called "malloc").


Is there an overview somewhere of the (necessarily nonstandard!) ways of getting malloc() to crash your program instead of returning 0?
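
On Linux, I imagine one (nonstandard, glibc-specific) way would be an LD_PRELOAD interposer, something like this untested sketch:

  /* die_malloc.c -- hypothetical LD_PRELOAD interposer (Linux/glibc).
     Build:  cc -shared -fPIC die_malloc.c -o die_malloc.so -ldl
     Run:    LD_PRELOAD=./die_malloc.so ./your_program
     Caveat: dlsym() can itself allocate, so a real version needs more
     care during bootstrap. */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdlib.h>
  #include <unistd.h>

  void *malloc(size_t size)
  {
      static void *(*real_malloc)(size_t);
      if (real_malloc == NULL)
          real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
      void *p = real_malloc(size);
      if (p == NULL && size != 0) {
          /* write(2), not stdio, to avoid allocating on the way down */
          static const char msg[] = "malloc failed; aborting\n";
          write(2, msg, sizeof msg - 1);
          abort();
      }
      return p;
  }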


So you are saying that adhering to memory bounds and limits (and possibly SLAs) should be ignored in favor of having your software crash, because it's easier? Sure, I can see it as a possible strategy in some cases. But to go ahead and cite this as a rule is kind of short-sighted and a disservice to people who put thought into their memory allocation strategies.


No. I favor not kidding yourself.

Almost no software of any significant size has been written in the last 10 years that can honestly claim to gracefully and reliably handle out-of-memory conditions.

Programming regimes that ostensibly cover out-of-memory cases are usually delusive; they provide for some superficial handling of out-of-memory issues (which usually just devolves to exiting the program anyways), but do nothing to address the myriad instances of malloc calls happening behind their backs in libraries or temporary allocations.

Fuck that. These people are going through extra work which (a) provides no greater user experience and (b) actually harms their program by creating opportunities for missed checks that propagate NULL pointers (which, when offset against, are actually exploitable!) through the rest of their code.

Just have malloc terminate your program for you when it fails and be done with it. You seriously aren't going to get anything else right, and it's silly to waste your time trying anyways.


The exceptions, I think, require that you aren't using tons of buggy third-party libraries. In AAA games and in bare-metal programming, for example, exiting the program is not an option, so you don't use libraries that might do that.


> Obviously, in any code where "malloc(3)" is the way you get more memory, I think it's pretty silly to check malloc's return value. You aren't going to get recovery right.

Yikes. And you work in security?


Yes, and my certainty that I am right on this point is clad in iron. You clearly don't understand what I'm saying.


How to draw a circle (HAKMEM 149 through HAKMEM 152) reproduced below.

ITEM 149 (Minsky): CIRCLE ALGORITHM Here is an elegant way to draw almost circles on a point-plotting display:

  NEW X = OLD X - epsilon * OLD Y
  NEW Y = OLD Y + epsilon * NEW(!) X

This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.

The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
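
(An aside, not part of the HAKMEM text: in C the whole algorithm is just a few lines; plot() here is a stand-in for whatever point-plotting routine your display provides.)

  #include <stdio.h>

  /* HAKMEM ITEM 149, sketched in C. plot() is a stand-in for a real
     point-plotting routine; here it just prints coordinates. */
  static void plot(double x, double y) { printf("%g %g\n", x, y); }

  int main(void)
  {
      double x = 100.0, y = 0.0;    /* the initial point sets the size */
      double epsilon = 1.0 / 16.0;  /* a power of 2, so the multiplies
                                       could be shifts in integer code */
      for (int i = 0; i < 2048; i++) {
          x = x - epsilon * y;
          y = y + epsilon * x;      /* the NEW x -- that's the trick */
          plot(x, y);
      }
      return 0;
  }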

ITEM 150 (Schroeppel): PROBLEM: Although the reason for the circle algorithm's stability is unclear, what is the number of distinct sets of radii? (Note: algorithm is invertible, so all points have predecessors.)

ITEM 151 (Gosper): Separating X from Y in the above recurrence,

  X(N+1) = (2 - epsilon^2) * X(N) - X(N-1)
  Y(N+1) = (2 - epsilon^2) * Y(N) - Y(N-1)

These are just the Chebychev recurrence with cos theta (the angular increment) = 1 - epsilon^2/2. Thus X(N) and Y(N) are expressible in the form R cos(N theta + phi). The phi's and R for X(N) and Y(N) can be found from N = 0, 1. The phi's will differ by less than pi/2 so that the curve is not really a circle. The algorithm is useful nevertheless, because it needs no sine or square root function, even to get started.

X(N) and Y(N) are also expressible in closed form in the algebra of ordered pairs described under linear recurrences, but they lack the remarkable numerical stability of the "simultaneous" form of the recurrence.
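
(Another aside, not from HAKMEM: you can check the separated form against the coupled iteration numerically with a sketch like this.)

  #include <stdio.h>

  /* Check that the separated Chebychev recurrence for X reproduces
     the X values of the coupled Minsky iteration. */
  int main(void)
  {
      double eps = 0.25;
      double x = 1.0, y = 0.0;   /* (X(0), Y(0)) */
      double x0 = x;             /* X(0) */
      x = x - eps * y;           /* X(1) */
      y = y + eps * x;           /* Y(1) */
      double x1 = x;             /* X(1) */
      for (int n = 1; n < 10; n++) {
          double sep = (2.0 - eps * eps) * x1 - x0;  /* X(N+1), separated */
          x = x - eps * y;                           /* X(N+1), coupled */
          y = y + eps * x;
          printf("N=%d  separated=%.12f  coupled=%.12f\n", n + 1, sep, x);
          x0 = x1;
          x1 = sep;
      }
      return 0;
  }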

ITEM 152 (Salamin): With exact arithmetic, the circle algorithm is stable iff |epsilon| < 2. In this case, all points lie on the ellipse

X^2 - epsilon X Y + Y^2 = constant,

where the constant is determined by the initial point. This ellipse has its major axis at 45 degrees (if epsilon > 0) or 135 degrees (if epsilon < 0) and has eccentricity

sqrt(epsilon/(1 + epsilon/2)).

For the full MIT HAKMEM (a report everyone who programs should read at some point), see http://www.inwap.com/pdp10/hbaker/hakmem/hacks.html


Looks like this would be awesome, but I can't seem to get it to work (ITEM 149)... I tried lots of different values of epsilon. Maybe I'm misunderstanding what a point-plotting display is?

http://jsbin.com/umurar/edit#source

EDIT: err - never mind, I had a silly bug that didn't update the old values. Kinda working now!

http://jsbin.com/umurar/6/edit#source


I just took your 6th revision and started playing around with the parameters, and it's interesting how much variation you can get with simple manipulation: http://jsbin.com/umurar/11/edit


Your code doesn't work because the x and y values never change. Try adding

    oldX = x; oldY = y;

after "p=parseInt ..." and surprise, it'll draw circles :)


I thought this was going to be about the World Freehand Circle Drawing Champion: http://www.youtube.com/watch?v=eAhfZUZiwSE


> Circles are invisible. When we say we want to draw a circle on the screen, we want to set pixels (light up equally sized rectangular bits of the screen) at integer coordinates to represent an approximation of the circle.

Priceless.


Circles are not always invisible. A circle formed by a closed-loop line source of photons would be visible. (They're kind of hard to make, though.)

It's sort of the question of what is the smallest thing a human can "see". Stars, for example, are extremely small, far below the resolution of the human eye (outside of the atmosphere, of course). Yet seeing them is not a problem, you just get an unresolved image.


Then it's the photons that are visible.

A circle is a set of points, and a point is an abstract concept. Having a photon centered on a point does not make the point visible.


Also, a circle is an infinite set of points, so even if points were visible, you couldn't have a real circle without infinitely many points.


By that definition, nothing is visible. You never see anything except photons.


Visible means "able to be seen." You can see things that emit or reflect light. Abstract concepts don't emit or reflect light.

Real objects can only approximate a circle. The pedantic definition of a circle in this context is being used to make the point that you can't make a perfect circle with pixels.


No. If you made an abstract object that emitted light, like the "closed-loop line source" I hypothesized, it would be a perfect circle, and visible. It doesn't exist, of course, but if it did, it would be visible... ;-)


That's true; and sometimes, that definition is useful, as it is here.


Richard Feynman spoke well and humorously on this topic: http://www.youtube.com/watch?v=XpYhM4x4PmY


Ceci n'est pas un cercle?


I don't think that paragraph is of much value, and I think all the replies you got kind of address a nonexistent discussion.

I don't think it makes sense to discuss whether a mathematical concept is 'visible' just because it happens to be something that is often represented on a 2D plot.

This discussion is not visible, what's visible is the group of pixels that form the text on the display device you're using to read this webpage.

Honestly, is this kind of observation of any value?


When I spend all day hopping up and down through different levels of abstraction in the service of building some particular program, it's all too easy to start confusing the map with the territory. If I'm writing some code to draw a circle, I might also have the circle stored on disk in a file that lists a (center, radius) pair, and I might be passing it around in memory as an IDrawable object for someone else's code to consume, and I'm putting it on the screen with pixels, and so on and so on, and at a certain point it does become useful to have someone remind me that none of those things are really the same circle, or really circles at all, but all representations of some Platonic circle that doesn't actually exist in my code. This problem gets worse when more than one programmer is involved. How many arguments have we seen between two people who agree with each other, but can't tell, because they're stuck arguing over whether (center, radius) or a pixel group is "the real circle"?

So no, it's not of much value in that it doesn't tell us anything that we don't already know. But it can be of value to frame our understanding or discussion of the problem.


I think it is valuable because it is funny.


CG pioneer Jim Blinn wrote an excellent article which provides a myriad of ways to do it.

http://dl.acm.org/citation.cfm?id=1435623.1436274

It was republished in his excellent collection of essays, "A Trip Down the Graphics Pipeline".


Thanks. As a programmer (re)learning math and trig, this will be a good exercise.


For me, at least, graphics was the way to learn trig in the first place. I wanted to make a thingy on the screen go at a specified angle at a specified speed, so that I could change its angle without changing the speed, and I vaguely knew that this had something to do with sine and cosine, so I took a trig textbook off the shelf and puzzled it out. Didn't need things like the double-angle formulas, though, so I didn't learn those until I took a class that covered them.


Nice to hear. Go ahead and write a program. If you have any issues with it, or have trouble following anything written in the article, send me an email and I'll be happy to help you out.


Totally agree. I recently had to do the same thing (relearning H.S. trig) using the HTML5 canvas.

It would be an interesting exercise to take this example and add a draw_arc() function.
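
A first cut might look something like this in C (a sketch; set_pixel() is a hypothetical stand-in for whatever pixel-setting routine you have):

  #include <math.h>
  #include <stdio.h>

  /* Hypothetical stand-in for a real pixel-setting routine. */
  static void set_pixel(int x, int y) { printf("%d %d\n", x, y); }

  /* Plot an arc of radius r around (cx, cy) from angle a0 to a1
     (radians), stepping roughly one pixel of arc length per point. */
  static void draw_arc(int cx, int cy, int r, double a0, double a1)
  {
      double step = 1.0 / r;
      for (double a = a0; a <= a1; a += step)
          set_pixel(cx + (int)lround(r * cos(a)),
                    cy + (int)lround(r * sin(a)));
  }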


In the third example, wouldn't it be rather efficient to make a large filled-in black circle with radius r and then a concentric filled-in white circle with radius r - epsilon? Epsilon would be the thickness of the circle (not exactly one pixel, but it could be close).

I couldn't tell if he thought of this method, but it is a quick way to draw a circle.


What's discussed in the article depends on the capabilities of the hardware you are targeting. :) Filling is typically a bottleneck, as you are writing to a lot of memory. Square roots can also be rather slow, especially when there's no hardware implementation. Square roots, divisions, multiplications, additions and subtractions, and shifts perform in order of slowest to fastest. Some processors have no floating-point hardware. Some processors do shifts, additions and subtractions much faster than multiplications and divisions.

Think of the article as a bunch of ways to draw circles, and you can make up something based on what device you write your code for. :)

Drawing the circle as points using the last function in the article, rather than as two concentric fills, would be faster unless you have some unique hardware for it. :) Disk filling can be implemented in a fragment shader by testing each point against the implicit equation. Such testing is easily parallelized.
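
As a sketch of that last point (set_pixel() being a hypothetical stand-in again), a ring drawn via the implicit equation needs no square root at all:

  #include <stdio.h>

  /* Hypothetical stand-in for a real pixel-setting routine. */
  static void set_pixel(int x, int y) { printf("%d %d\n", x, y); }

  /* Draw a ring of outer radius r and thickness t around (cx, cy) by
     testing squared distances against the implicit equation --
     integer multiplies and compares only, no sqrt. */
  static void draw_ring(int cx, int cy, int r, int t)
  {
      int inner = r - t;
      for (int y = -r; y <= r; y++)
          for (int x = -r; x <= r; x++) {
              int d2 = x * x + y * y;
              if (d2 <= r * r && d2 >= inner * inner)
                  set_pixel(cx + x, cy + y);
          }
  }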

You are a clever thinker nevertheless. :)


All the talk about optimizing but not a single performance measurement or even estimate.


Fun article, but what about anti-aliased circles and thick circles?



In image_new, doesn't he want to allocate enough memory for an Image rather than for whatever the sizeof an image* is?


No, image is an Image, so image is an Image, so sizeof image is the amount of memory you need to allocate for an Image. Saying sizeof image rather than sizeof Image allows you to change the type of image in one fewer place, should you want to do that.


Above comment indented with two spaces to preserve asterisks:

  No, image is an Image*, so *image is an Image, so sizeof(*image)
  is the amount of memory you need to allocate for an Image. Saying
  sizeof(*image) rather than sizeof(Image) allows you to change
  the type of image in one fewer place, should you want to do that.
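
Or, as a compilable sketch (the Image fields here are hypothetical stand-ins for the article's actual struct):

  #include <stdlib.h>

  /* Hypothetical stand-in for the article's struct. */
  typedef struct { int width, height; unsigned char *pixels; } Image;

  Image *image_new(void)
  {
      /* sizeof *image, not sizeof(Image): the allocation stays
         correct if the type of image ever changes. */
      Image *image = malloc(sizeof *image);
      return image;
  }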


Thank you!


Fucking piece of shit Arc. I'm very sorry that this comment I wrote above is completely fucking wrong because of the deletion of asterisks.


I knew what you meant. I tend to use ∗ when I can't get * to work.


It is de-referencing the image pointer, *image, and taking its size.


Well, yet another CS (under)graduate used to reasoning about optimization in code based on misconceptions:

- A circle is drawn differently if you change the geometrical context (what if we were asked to draw a circle on a sphere, or in some other non-Euclidean/non-Cartesian geometry?). The Cartesian/Euclidean definition of a problem can itself be the problem.

- A circle is a singular special case: it can be seen either as a peculiar case of a rectangle with rounded corners, or as an ellipse. Will he write distinct algorithms for the rectangle, circle, rounded-corner rectangle, and ellipse?

- His algorithm lacks a divide-and-conquer aspect: for instance, he could construct the boundaries and then fill in the shape, sharing much more code than what is shown and thus respecting the DRY imperative.

- His explanations don't step outside the problem in order to solve it (the Gödel move); I was expecting model-oriented problem solving.

- I am astonished at the level of computer illiteracy: GGI, GIMP, SDL and a lot of other open-source code have circle-drawing algorithms implemented, worth man-years of effort; I'd guess they already have smart algorithms. CS is about standing on the shoulders of giants, and about learning to learn. He clearly fails on both of these points, which are important in our craft.

Well, I was clearly disappointed this lousy article got so many points. What is important for me in CS is not the algorithm and the code, but how the code elegantly expresses a general understanding of a real problem. How one can fluently translate a natural-language problem into code, and switch between reformulating the problem and coding.

These kinds of developers should not be hired at any cost, in my opinion.



