Apollo 11 implementation of Trigonometric functions (1969) (fermatslibrary.com)
212 points by BerislavLopac on July 7, 2021 | hide | past | favorite | 40 comments



I'm a fan of Margaret Hamilton, but this is fake. This code is not by Margaret Hamilton and it's not from 1969.

Margaret Hamilton has stated that her first assignment was the abort code FORGETIT in the Apollo 5 program Sunburst, and this exact single-precision sine/cosine routine already existed in Aurora [an earlier version of the software] earlier than March 1966 -- FORGETIT wasn't added to Sunburst until later that year (maybe October-November).

Source: I talked to people who have researched the Apollo code in detail. For more information on the Apollo software releases, see: https://www.ibiblio.org/apollo/Luminary.html


The Command Module system test code Sundial includes a slightly earlier version of these same routines: https://github.com/thewonderidiot/sundiale/blob/master/sundi...

This is a work-in-progress disassembly of the core rope modules we dumped at the MIT Museum, so apologies for the less friendly formatting!


I don’t read a claim that she wrote that code, only that she submitted it somewhere.

I read that as some indication of the correctness of the claim that this is what was used (a bit like the difference between an email from Linus claiming something about early Linux, and a Reddit thread where X claims his neighbor knows somebody whose nephew worked with Linus and had this on an old floppy, believing it applies to early Linux).


Everything I have read about Margaret Hamilton makes me happy; such a wonderful engineer/developer, so seeing her name on this as well (not unexpected though) just reinforces that.


Indeed. This was a completely new architecture, and as such, there was probably very little "prior art" to go on. Maybe there were similar functions written for other machines, but this one would pretty much guarantee dead astronauts if it weren't implemented properly.


Note at the beginning it handles overflow conditions.

Funny how these pioneers wrote bulletproof raw machine code and assembly, while today we have a slew of code analysis tools and "safe" programming languages, but everything still crashes six ways from Sunday.


Master carpenters vs. people assembling Ikea furniture (and using Agile to figure out which pieces fit together).


>The reason why the Apollo team chose these terms instead of the Taylor ones was because these were actually optimized for minimum error in the interval −π/2<y<π/2

How would these coefficients be derived in the first place?


They used the Remez exchange algorithm to find the polynomial with minimax relative error. https://en.wikipedia.org/wiki/Remez_algorithm

If you want to play with this sort of thing and you have access to a copy of Matlab, check out Chebfun, https://www.chebfun.org/examples/approx/
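A full Remez implementation is a bit long for a comment, but interpolation at Chebyshev nodes gets within a small factor of the true minimax error and fits in a few lines. This is a rough stand-in for Remez, not what the Apollo team did; the degree and the target function (1/2)·sin(πx/2) are taken from the article:

```python
import numpy as np

# Interpolate (1/2)*sin(pi*x/2) at Chebyshev nodes on [-1, 1]; the result
# is close to (but not exactly) the minimax polynomial Remez would find.
deg = 5
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))  # Chebyshev nodes
coeffs = np.polyfit(nodes, 0.5 * np.sin(np.pi * nodes / 2), deg)

# Measure the worst-case error on a dense grid.
xs = np.linspace(-1, 1, 10001)
max_err = np.max(np.abs(np.polyval(coeffs, xs) - 0.5 * np.sin(np.pi * xs / 2)))
```

The interpolation error bound for Chebyshev nodes is at most 2x the minimax error at this degree, which is why it makes a decent quick substitute.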


There is a blogpost linked in the annotations: http://jeanmariechauvet.com/papers/curttrig.html


Here are the graphs [0] for both functions from the article:

+ 1/2 * sin(x * pi / 2)

+ 0.785313 * x - 0.321615 * x^3 + 0.0363551 * x^5

[0] https://www.desmos.com/calculator/x7tm7i2qtj
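The second polynomial tracks the first expression closely; the worst-case gap over -1 ≤ x ≤ 1 is easy to check numerically (a quick sanity check, not the original derivation):

```python
import math

def apollo_sin(x):
    # Coefficients as given in the article, approximating (1/2)*sin(pi*x/2)
    return 0.785313 * x - 0.321615 * x**3 + 0.0363551 * x**5

xs = [i / 1000 for i in range(-1000, 1001)]
max_err = max(abs(apollo_sin(x) - 0.5 * math.sin(math.pi * x / 2)) for x in xs)
# max_err comes out on the order of 1e-4 or better over the whole interval
```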


Also here is the same graph with the classic Taylor expansion of sin with 3 terms (x - (x^3)/(3!) + (x^5)/(5!)):

https://www.desmos.com/calculator/emfz64hjhv
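The gap shows up numerically too. Comparing worst-case errors of the 3-term Taylor series against the article's coefficients over the same interval (my own quick check, not from the article):

```python
import math

def taylor3(x):
    # 3-term Taylor series of (1/2)*sin(pi*x/2), expanded around 0
    y = math.pi * x / 2
    return 0.5 * (y - y**3 / 6 + y**5 / 120)

def apollo(x):
    # Coefficients as given in the article
    return 0.785313 * x - 0.321615 * x**3 + 0.0363551 * x**5

f = lambda x: 0.5 * math.sin(math.pi * x / 2)
xs = [i / 1000 for i in range(-1000, 1001)]
taylor_err = max(abs(taylor3(x) - f(x)) for x in xs)
apollo_err = max(abs(apollo(x) - f(x)) for x in xs)
# Taylor is excellent near 0 but much worse at the ends of the interval,
# which is exactly what minimizing the maximum error avoids.
```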



We did this in my quantum mechanics class, 8.05x on edX. Consider the functions f_n(x) = x^n to be basis vectors in a vector space. Define the dot product of two vectors (aka functions) to be the integral of their product between -pi/2 and +pi/2. You'll see that your basis vectors aren't orthogonal. So, use Graham Schmidt to make them orthogonal, and normalize them while you're at it.

Now you just take the dot product of these new basis vectors with your target function.
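A numerical sketch of that recipe (grid-based integration standing in for the exact integrals, with an illustrative degree of 5):

```python
import numpy as np

# Treat 1, x, ..., x^5 as basis vectors, orthonormalize them under the
# inner product <f, g> = integral of f*g over [-pi/2, pi/2] (approximated
# on a dense grid), then project sin onto the resulting basis.
xs = np.linspace(-np.pi / 2, np.pi / 2, 20001)
dx = xs[1] - xs[0]

def inner(f, g):
    return float(np.sum(f * g) * dx)  # Riemann-sum inner product

ortho = []
for n in range(6):
    v = xs ** n
    for u in ortho:                 # subtract projections onto earlier vectors
        v = v - inner(v, u) * u
    ortho.append(v / np.sqrt(inner(v, v)))  # normalize

target = np.sin(xs)
approx = sum(inner(target, u) * u for u in ortho)
max_err = float(np.max(np.abs(approx - target)))
```

Note this minimizes the integrated squared error, not the maximum error, so the coefficients come out slightly different from the Apollo ones.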


Worth noting that this minimizes expected error. IMO, it's much more common to want to minimize maximum (worst-case) error, which requires an algorithm like Remez exchange. You can modify this algorithm to work with relative error (as suggested in the article) instead of absolute error.


Nitpick: Gram-Schmidt.


If one understands linear algebra (and has Haskell output of matrices with nice eigenvalues), then reviewing this spelling is the only preparation needed to teach a semester course.


Also, something doesn't add up here because they were calculating sin(pi*y/2). I suspect it's optimized for -1 < y < 1, so the argument to sin is in [-pi/2, pi/2].


In a similar vein, I can't go without linking to the Fast Inverse Square Root function. Similar kind of thing, a "machine-friendly" way to approximate a different mathematical function.

https://en.m.wikipedia.org/wiki/Fast_inverse_square_root


Fast inverse square root is an ultra-fancy hack for a very particular processor. Apollo's trigonometric functions are the most natural optimal approximation that could be done.


Fast inverse square root isn't that ultra fancy. It just happens that the IEEE representation lines up the exponent to the left, allowing you to do the obvious multiply-the-exponent thing, but in integer mode, followed by a standard Newton-Raphson round. The only surprising part was that this was faster than calling the CPU's native floating point instructions.
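The trick reads naturally even outside C. A Python rendition of the same bit manipulation (the magic constant assumes IEEE 754 single precision, as noted below):

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float's bits as a 32-bit integer.
    i = struct.unpack('<i', struct.pack('<f', x))[0]
    # Shifting the integer right halves the exponent; subtracting from the
    # magic constant negates it and corrects for the IEEE exponent bias.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<i', i))[0]
    # One Newton-Raphson step refines the initial estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

After the single Newton-Raphson step the relative error is a fraction of a percent, which was good enough for lighting calculations.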


I'm not sure about "very particular processor", but the magic number does rely on IEEE floats.


I mean that modern CPUs already have a fast instruction to compute that. The Apollo code does not seem to be ultra-optimized; rather, it is a natural implementation of the polynomial approximation.


Can someone actually translate this into a simple formula? What polynomial approximation were they using?



That looks very short. Is that really the full implementation? Is there any annotated version of it?


AGC assembly uses unusual mnemonics, so it can be difficult to understand for someone who doesn't know the architecture.

The CPU has a single accumulator that instructions act on. The argument is normally an address. The relevant instructions for this code snippet are:

AD - Add the value pointed to by the argument to the accumulator

TS - Transfer to storage. Write the accumulator to the argument

TCF - Transfer control to fixed memory. Jumps to an instruction in fixed storage

DOUBLE - Assembler macro compiling to AD A. I.e. double the accumulator.

XCH - This exchanges the value in the accumulator with the argument

INDEX - Adjust the address used in the next instruction by the argument

COM - Complements the accumulator. Note that the AGC uses one's complement arithmetic, so this negates the value.

EXTEND - Indicate that the next instruction is an extended instruction

MP - Multiply the accumulator with the argument

The final instruction is TC Q, which jumps to the location stored in the Q register. Q holds the return address, which is saved there automatically by the TC instruction that called the routine. In other words, TC can serve both as a regular jump and as a function call, depending on what the destination does with the Q register.

I hope this helps you analyse the code further. I haven't spent much time looking at the code itself, so I am not going to attempt an analysis right now.


Also note that the constants (HALF, LIMITS, C3/2, C5/2, C1/2, etc) are defined in another source file: https://raw.githubusercontent.com/chrislgarry/Apollo-11/a135...


Thanks for the details! What kind of format does SINGLE PRECISION refer to here?


Single precision is a 15-bit fixed-point number in the range -1 to 1. Apparently in most cases this was enough for moon flights. There were also double- and triple-precision formats that were used in some places.

Of course, in many places in the code you need values outside of this range and the actual number represented is scaled by some factor. However, the CPU has no internal representation of the scale factor (i.e. the exponent bits in a floating point number) so the programmer had to manually manage this.
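To make that concrete, here's a toy model of the format. It uses ordinary Python integers standing in for the AGC's one's-complement words, so the hardware's edge cases are glossed over; the helper names are mine, not from the AGC code:

```python
FRAC_BITS = 14  # sign + 14 fraction bits; representable values lie in (-1, 1)

def to_sp(x):
    """Encode a fraction in (-1, 1) as a fixed-point integer."""
    return int(round(x * (1 << FRAC_BITS)))

def from_sp(i):
    """Decode a fixed-point integer back to a fraction."""
    return i / (1 << FRAC_BITS)

def sp_mul(a, b):
    """Multiply two fractions; the product stays in (-1, 1), so no overflow."""
    return (a * b) >> FRAC_BITS

# A quantity outside (-1, 1), e.g. an angle in radians, carries an implicit
# scale factor (like "scaled at pi") that the programmer tracks by hand.
```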

I can strongly recommend this book, if you're interested in the AGC: http://www.apolloguidancecomputer.com/


Thanks, I was already searching for what the opcodes meant.


Fermat's Library is an annotation platform, so there are annotations on the linked version - click on the circles (or rectangles, depending on whether the side panel is open) on either side of the document to see the annotations.


Ah nice. Although a line-by-line walkthrough would be nicer.


This is how big the full printout was next to its author:

https://news.mit.edu/2016/scene-at-mit-margaret-hamilton-apo...


Each one of those bound 'books' is a complete full printout. That photo shows a bunch of different revisions / programs stacked up. Here's one complete listing in a book printout being discussed by one of the original authors Don Eyles. https://www.youtube.com/watch?v=H0ITFbDuJz0

Scan of Luminary - Lunar Module source code https://archive.org/details/Luminary99001J2k60/page/n979/mod...


If you follow the link to github, there is much more.


Apparently the lunar rover used CORDIC[1].

[1]: https://en.wikipedia.org/wiki/CORDIC#Hardware
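CORDIC is worth sketching, since it's such a different approach from the polynomial one: it rotates a vector through a fixed table of angles using only shifts and adds. A minimal rotation-mode sketch (floating point here for readability; real hardware uses fixed point and bit shifts):

```python
import math

# Precompute the rotation angles atan(2^-i) and the cumulative gain of the
# pseudo-rotations, which is applied up front to the starting vector.
ITERS = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
GAIN = 1.0
for i in range(ITERS):
    GAIN /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sincos(a):
    """Return (cos a, sin a) for |a| <= pi/2 via CORDIC rotation mode."""
    x, y, z = GAIN, 0.0, a
    for i, step in enumerate(ANGLES):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * step
    return x, y
```

Each iteration adds roughly one bit of precision, which is why it was attractive for simple hardware with no multiplier.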


If folks are interested in their own custom approximations, I can highly recommend Sollya [1].

Sollya has a great Remez implementation as well as proven supremum-norm (maximum error) bounds. I combined it with the more traditional "loop over all 32-bit float values" approach to also test my range reduction, but being able to choose your error and number of coefficients is nice!

[1] https://www.sollya.org/


There is a (non-free) book on the architecture of the guidance computer. It goes into great detail on the instruction set, memory layout, multitasking, and virtual machine.

"The Apollo Guidance Computer: Architecture and Operation"


Why do they present assembly code with modern syntax highlighting as an image (a PNG, yet with JPEG compression artifacts)?



