The 25¢ Apple II Real Time Clock (1981) (atarimagazines.com)
66 points by lisper on May 26, 2019 | 62 comments



Author here. This was the first article I ever published. I was 16 years old when I wrote it, and reading it today makes me cringe a little. AMA.


Why? I wouldn't have guessed your age from the article at all, in fact. I wrote several submissions for COMPUTE!'s Gazette when I was in high school. Yeah, I could have improved the prose and tone a bit, but the tech was sound or it wouldn't have gotten through the editorial process. It's a shame type-in mags and the old hobbyist journals have died, because it was a great way to encourage writing, it was fun to see your name in print, you had a coding standard that was public to aspire to and you made some okay money.


Why what? Why does it make me cringe? Mainly the section headings. "All about interrupts" followed by "More about interrupts." If the first section was all about interrupts then there shouldn't be any more to say about it.

Yeah, I know, I can be pretty hard on myself :-) (And I did say it made me cringe a little. I'm mostly pretty proud of what I did back then.)


I thought the headings were a pretty good humorous addition.


Thank you! :-)


I take that as meaning all of the first section was about interrupts, not that the first section was about all of interrupts.


Wow, congratulations. I was 11 years old when I read this article, and many others like it.

I learned to hack in a shop called Computerland, in Perth, WA. Immediately after school, I'd march through the heat down to the shop, which of course had great A/C, insert my floppy (the most valuable thing in the schoolbag), and continue typing in code, reading mags, and so on. As long as I was actually doing stuff (i.e. not just playing games), the sales guys were cool with me hanging out, every day for weeks, hacking code.

So BYTE and all the mags I could buy pretty much got me booted up - and for that I have to say: a) very well done on a great article, b) thank you for showing up here, and c) what have you been up to since?

:)


Thanks! I had a very similar experience, except mine was at a local science museum. They had a bunch of computers in the back that no one was using, so they let me hang out. I taught myself to code on a Sol-20. Eventually they hired me, so that was my first job. Then I went on to college, worked for NASA and Google and half a dozen startups, and am now semi-retired in Silicon Valley. It's been a fun ride. In a way, the project described in that article started it all.


Well-written article.

But speaking professionally: although most Real Time Clock modules do generate interrupts, it's misleading to call this simple circuit a "Real Time Clock" when there's no "real time" involved. It's better to use the term "counter" or "timer".


"The 25c clock" seemed like a snappier title than "A simple hack that lets an Apple II keep track of real time (while it's powered on) and do some basic multitasking" even though that would have been more accurate.


Given that it's a second off after every 16 seconds (interrupt rate: 59.939 Hz, assumed rate: 60 Hz), it's not even really keeping track of time while powered on.


You need to check your math. It loses about 1.5 minutes per day on a naive implementation. Very few people left their computer running all the time in those days. (Also, it's pretty easy to fix the problem in software.)


You're right that my math is off, it loses 0.06s every minute, so a second every 16 minutes, still not 'real time'.
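
To sanity-check the numbers (taking the 59.939 Hz figure from upthread as given), here's the back-of-the-envelope arithmetic in Python:

    assumed_hz, actual_hz = 60.0, 59.939
    lost_per_second = 1 - actual_hz / assumed_hz     # fraction of a second lost each real second
    print(lost_per_second * 60)                      # ~0.061 s lost per minute
    print(lost_per_second * 86400 / 60)              # ~1.46 minutes lost per day
    print(1 / lost_per_second / 60)                  # ~16.4 minutes until it's a full second off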


In those days, having a clocking mechanism which produced consistent time - i.e. a monotonic clock - was of quite high value. On boot up, it was typical to ask the user what time it was, and continue with the program.

But then how do you measure time since boot? With simple hacks like this, of course. Also, this hack allowed a form of multitasking, a feature well worth having on an 8-bit machine back then.

Consistency was key in such clocks, not necessarily accuracy. The drift itself is not important, since it was consistent and could be accommodated in software - as many peripherals of the day, indeed, required.
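
A minimal sketch of that scheme - ask for the time at boot, count ticks from the interrupt, and correct for the true tick rate in software - might look like this (Python pseudocode rather than anything period-accurate; the 59.939 Hz figure comes from the comments above):

    TRUE_TICK_HZ = 59.939   # the hardware really fires ~59.94 times a second, not 60

    class SoftClock:
        def __init__(self, boot_epoch_seconds):
            self.boot = boot_epoch_seconds   # whatever the user typed in at boot
            self.ticks = 0

        def on_tick(self):                   # called from the 60 Hz-ish interrupt
            self.ticks += 1

        def now(self):
            # dividing by the true rate instead of 60 removes the ~1.5 min/day drift
            return self.boot + self.ticks / TRUE_TICK_HZ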


It's not an RTC though. It's just a tick that runs at 59.939 Hz instead of 60.


Grabbing the mains to get a 60 Hz clock is a much older hack; PDP-11s did that before Apple IIs did this, and I'm sure others did it before DEC did.


This good technique - I hate to call it a hack because it's been either a legal or aspirational standard for decades - is on its last legs in the US, as NERC has retired (or will retire) the time-error corrections that ensured long-term accuracy. Our principal engineer, with a history in the power transformer industry, tells stories about better times pre-9/11, when utilities would give tours to engineers and describe how operators would run line-frequency corrections overnight to match the two clocks on the wall: one showing official time, the other showing the usually slightly slower time reflecting the errors accumulated over the day.

http://www.nerc.com/pa/Stand/Project%2020101422%20Phase%202%...

http://www.msnbc.msn.com/id/43532031/ns/technology_and_scien...


We used to get power from a local hydro project. At the power station they used to (i.e. prior to ubiquitous computers and all that) have two clocks on the wall, one on the local grid and one on the national grid, and periodically they would manually speed things up locally to keep the clocks as close together as possible.


I would not call that a hack: many timekeeping systems rely on this principle. There was a great write up on Hackaday: https://hackaday.com/2018/03/29/ask-hackaday-is-your-clock-t...


Fun things happen if you do this and run your hardware outside the US: most countries use 50 Hz mains AC.


Fun things also happen if that frequency isn’t what you think it is. EU clocks derived from the 50Hz signal, for example, ran up to 6 minutes slow in March 2018 (https://www.theverge.com/2018/3/8/17095440/europe-clocks-run...)

And of course, it's also "fun" when your electronics accidentally pick up not only the frequency of mains power, but also its voltage.
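
For a rough sense of scale (illustrative numbers, not the actual 2018 grid figures): if the grid averages slightly under its nominal frequency for a couple of months, a mains-locked clock falls behind by the accumulated cycle deficit:

    NOMINAL_HZ = 50.0
    avg_hz = 49.9975                 # hypothetical long-run average, chosen for illustration
    days = 60
    drift_s = days * 86400 * (avg_hz / NOMINAL_HZ - 1)
    print(drift_s / 60)              # about -4.3 minutes over two months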


That's why PDP computers came in 50 Hz and 60 Hz versions.


NTSC uses 30 fps because that's half of the 60 Hz mains frequency. The same reason is why PAL uses 25 fps: in Europe we have 50 Hz on the grid.


Well, technically they use this to reduce hum-based interference (the alternative is a slowly rolling hum bar through the screen).


> a lot more than just a clock, it's a cheap way of doing a lot of expensive things, right in line with Apple tradition.

Ah, for the Apple of 1981.


Was the Apple really cheap in 1981 compared to its contemporaries? By 1985 it was the most expensive 8-bit computer being sold, if I remember correctly. I had a //e.


Like many (but not all) of Apple's premier offerings since then and today, it was cheap taking all the factors into consideration, and they sometimes capitalized on this in their marketing. For all it offered out of the box - the market penetration it had, the number of users and popularity it enjoyed, the ease of use in its class, and the level of support delivered - it was "cheap" for certain values of that term, ones that placed a premium on getting that entire combination of factors.

There is no doubt it was expensive, but if you fit their target market, there was a fair probability you would get pretty enamored with the fit and find the budget. There are tradeoffs (some significant) to that approach, and Apple had, and continues to have, a cultural tendency to be pretty purist about it for some products some of the time, but it has served them well so far (though I see eerily uncomfortable parallels between the ][ era and today's iPhone era).


No, the Apple was expensive, and you still had to wire up this cheap junk to get a working clock.

He's literally taking the piss out of Apple, because it was expensive even then.


In Europe the //e was considered one of the most expensive 'home' computers, to the point that it was really only colleges that had any.


My first (and best) 8-bit machine cost me $400, but I had to literally drive across the Nullarbor to get it (Perth->Melbourne). I suppose the fuel costs could have been put towards a full-blown Apple system, but then I wouldn't have driven across the Nullarbor with my first computer (Oric-1), plugging it into every TV I could find along the way ..


WWVB (the NIST atomic clock signal) encodes at 1 bit per second; it would be easy to hook a receiver up to the cassette port and decode it -- I'd guess someone even tried this back in the day. Though you have to wait 60 seconds for a full timestamp, and doing it in the background might be tricky.
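
Going from memory of the WWVB format (check the NIST docs before trusting this): each second the carrier power drops, and how long it stays reduced encodes the symbol - roughly 0.2 s for a 0, 0.5 s for a 1, 0.8 s for a position marker. Once you can time those pulses off the cassette input, the decode is just classification:

    def classify_pulse(low_time_s):
        """Map a measured carrier-drop duration to a WWVB symbol (thresholds approximate)."""
        if low_time_s < 0.35:
            return 0
        if low_time_s < 0.65:
            return 1
        return "MARK"

    def decode_minute(pulse_widths):
        """pulse_widths: 60 measured durations, one per second of the frame."""
        return [classify_pulse(w) for w in pulse_widths]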


The Apple II cassette port had layers of awesomeness to it that still go unappreciated today, even by the platform's hardcore devotees.

For one thing, it was about as fast as a Commodore 64 disk drive, although that was due more to incompetent engineering on Commodore's part than brilliant engineering on Woz's part. (There was plenty of unpicked low-hanging fruit in Woz's own Disk II system.)

For another, you could connect it to an old telephone hybrid and get a free 1200-baud modem, albeit a nonstandard one. Perfectly adequate for allowing two Apples to talk to each other.

And yes, I don't see why you couldn't rig it up to decode WWV(B) transmissions with a minimal amount of external hardware, especially in conjunction with an interrupt-driven timer like the one described in the article.


> There was plenty of unpicked low-hanging fruit in Woz's own Disk II system.

Do you have more details about this? Usually when the Disk II is brought up it's praised for its simplicity and utility.


I'm guessing what could be considered low-hanging fruit is that the original 4&4 encoding was surpassed by the much more efficient 6&2 encoding a few years later. And even then, the 6&2 scheme chosen wasn't ideal, as it took too long to decode. Intrepid programmers were later able to decode an alternate 6&2 scheme fast enough to read an entire track within one revolution of the disk and get closer to ideal performance and data density.
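
From memory, the 4&4 scheme interleaves each data byte's bits with 1s across two disk bytes, which trivially satisfies the drive's "high bit set, no long runs of zeros" constraints but halves the density. Something like this (Python just to show the bit-twiddling, not the actual ROM code):

    def encode_4and4(value):
        """One data byte -> two disk bytes, each carrying half the bits interleaved with 1s."""
        return (value >> 1) | 0xAA, value | 0xAA

    def decode_4and4(first, second):
        return ((first << 1) | 1) & second & 0xFF

    hi, lo = encode_4and4(0xB2)
    assert decode_4and4(hi, lo) == 0xB2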


The floppy interface was indeed an incredible piece of work, but the big forehead-slapper is the ~5x speed increase that you get from reading sectors in descending order, e.g. from 15-0 instead of 0-15. This makes the difference between having to wait for an extra spindle rotation and being correctly positioned to take advantage of the next one.

How he managed to overlook that is one of life's great mysteries, at least to me.


There was a module for the Commodore 64, made in Germany, that used the atomic clock signal sent from Frankfurt, the Auerswald ACC-64:

https://forum.classic-computing.de/forum/index.php?thread/92...


Using the NMI (non-maskable interrupt) on an Apple II is so wrong! The disk software relied on precise timing of instruction sequences to read and write bits on the disk. If you have interrupts going, reads will fail and writes will corrupt the sector being written and possibly the next sector.


It's mentioned in the article.


I feel like the article is written from the perspective of an embedded software/hardware appliance developer, or the hobbyist equivalent.

It makes a lot of sense to talk about saving $100 on an RTC board if you only want a clock for some one-off script you're writing. Things can be as fiddly as you want in that case—e.g. load the program from disk to memory, then enable the clock using the little switch you built into it and let it tick while the disk is spun down.


Yeah, with just an XOR checksum this would be pretty dangerous even for reads. Of course, in those days you could probably sell someone a $50 FIFO buffer to work around the disk controller problems :)
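
To make the "dangerous even for reads" point concrete: a bare XOR checksum can't see two corrupted bytes that flip the same bit position, since the errors cancel. Quick illustration (generic Python, nothing Apple-specific):

    from functools import reduce

    def xor_checksum(data):
        return reduce(lambda a, b: a ^ b, data, 0)

    good = bytes([0x10, 0x20, 0x30, 0x40])
    bad = bytes([0x10 ^ 0x04, 0x20 ^ 0x04, 0x30, 0x40])   # same bit flipped in two bytes
    print(xor_checksum(good) == xor_checksum(bad))         # True -- the corruption is invisible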


I don't know much about the Apple II, but I'm surprised that external circuitry is required to effectively count the number of frames. Isn't there already a way to count frames on this computer without a hardware hack?

Beyond that, an RTC without battery backup to maintain the time across reboots might not be extremely useful. Given the low precision I'd consider this more like a hardware timer than an RTC, but I'm probably splitting hairs now.

It's still a cool hack nonetheless, it's always refreshing to see how simple and straightforward it is to hack these old machines, both in hardware and software. Can you imagine doing something like that on a modern system? I mean, it's doable but it'll take more than three lines of assembly to get it to work correctly.


Nope. You can't even write a busy loop that syncs with the frame rate, because the ][ lacks a VSYNC flag. Some modern demos manage to read the beam position, though -- they look at values on the bus left over by the video circuitry when the bus is floating (the "vapor lock" hack).


The technique is from 1982 by Bob Bishop. http://rich12345.tripod.com/aiivideo/softalk.html


Ah, thanks -- I must have missed that issue! :)


It's an 8-bit system; you need explicit hardware to do everything.

The nearest things I can think of in terms of modern hacks would be tapping into the SMbus on the motherboard, or the I2C bus exposed on the VGA connector, e.g. https://dave.cheney.net/tag/i2c


Good post. I was working on a video card driver for the Linux kernel and often wondered whether I could configure the I2C pins on the VGA connector as a generic I2C device and connect my sensors to it. So the conclusion is clear now: a lot of people have tried it with success. The possibilities are endless; you could even connect an I/O extender... This is impressive: even though serial and parallel ports have all retired, most computers still have a "User Port" like an 8-bit computer ;-)
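
For anyone curious what that looks like from user space today: if the DDC pins show up as an i2c adapter under Linux, a read is only a few lines with the smbus2 library. The bus number and sensor address below are placeholders I made up, not values from the thread:

    from smbus2 import SMBus

    DDC_BUS = 3          # find the right /dev/i2c-N with `i2cdetect -l` (assumption)
    SENSOR_ADDR = 0x48   # hypothetical temperature sensor hung off the DDC lines

    with SMBus(DDC_BUS) as bus:
        raw = bus.read_byte_data(SENSOR_ADDR, 0x00)   # read one register from the device
        print("sensor register 0x00:", raw)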


> traditionally require boards full of parts to implement, but are done with only one or two inexpensive chips.

I’m surprised there’s nothing about Wozniak in this. The guy used to look at circuit boards as a kid and redesign them to use fewer chips... for fun! As a kid!


> For instance, the analog to digital conversion for the game paddles would normally cost at least $25, but is done on the Apple with a single inexpensive timer chip.

I discovered this back in the day when I modified a breakout clone to use keyboard input instead of a paddle (joystick), which I didn't own. The game ran much, much, much faster, and because I had never run it with a paddle, I didn't realize for quite a while that it wasn't supposed to run that fast - I just thought I was bad at the game.
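
The reason the keyboard version was so much faster: the paddle "ADC" is just the CPU busy-waiting on a one-shot timer whose pulse width tracks the paddle's resistance, so a read near full deflection eats a few milliseconds every frame. A rough model (the ~11 µs per loop iteration is my assumption, not a measured figure):

    def read_paddle(position, loop_us=11.0, max_count=255):
        """position in 0..255; returns the busy-wait count the game would see."""
        pulse_us = position * loop_us      # timer pulse width grows with the pot's resistance
        count = 0
        elapsed = 0.0
        while elapsed < pulse_us and count < max_count:
            count += 1
            elapsed += loop_us
        return count                       # the CPU did nothing else during this time

    print(read_paddle(255) * 11.0 / 1000)  # ~2.8 ms burned per read at full deflection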


Not what I was expecting; normally I would expect a real time clock to keep running when the system was turned off, so you would be able to have an accurate current date/time.

I'm also very surprised to see that apparently the Apple II didn't have something like this built in already. Were all delays implemented as busy loops?


Yes. The CPU ran at 1 MHz and instruction timings were deterministic and well-documented, so it wasn't hard to write a delay loop for any number of microseconds.

There were sound synthesizers that could play chords, all based on toggling a digital output connected to a speaker with the right timing patterns.
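
Both of those boil down to the same cycle-counting arithmetic. As a rough sketch (assuming a classic DEX/BNE loop at about 5 cycles per iteration and a ~1 MHz clock - ballpark figures, not numbers from the article):

    CPU_HZ = 1_020_000       # approximate Apple II effective clock rate (assumption)
    CYCLES_PER_ITER = 5      # 2 cycles for DEX + 3 for a taken BNE, roughly

    def iterations_for_delay(microseconds):
        cycles = microseconds * CPU_HZ / 1_000_000
        return round(cycles / CYCLES_PER_ITER)

    print(iterations_for_delay(500))   # ~102 iterations for a 0.5 ms delay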


You could even make intelligible speech (text-to-speech) by toggling the one-bit speaker at the right frequency.

https://en.wikipedia.org/wiki/Software_Automatic_Mouth

And I found a web version that sounds just like it.

https://simulationcorner.net/index.php?page=sam



> the Apple II didn't have something like this built in already

I don't think any of the popular 8-bit machines did. It might make more sense if you consider it from the other end - what would you do with an accurate date and time on such a machine?


Ah, by "this" I meant the article's 60 Hz timer, not the proper date/time peripheral I imagined the article to be about from the title.

For a 60 Hz timer: just being able to have a delay without needing to count all the instructions in between when programming.


The Apple //e had a way to track vertical retrace, but that's a fairly late model. It was a sort of 'interrupt-free' system anyway, so given that and its overall slowness, cycle counting ended up being the practical default.


> I don't think any of the popular 8-bit machines did.

The Commodores had timers in their 6522/6526 chips, from which an RTC was set up. In BASIC it could be accessed via the TI or TI$ variables (integer and string respectively). Not having timers or VBlank interrupts was unusual. The Apple ][ was an early and hacked-together design by Woz, not at all made for a mass market. Just look at the graphics formats :-/


Not quite. TI$ on the C64 was maintained by the 60 Hz Timer A interrupt (in fact, remarkably similar to this method, but using IRQ instead of NMI and built into BASIC). You are correct that the CIAs have Time-of-Day clocks, but BASIC doesn't leverage those. However, some apps like GEOS do.
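
The jiffy-clock idea is simple enough to fit in a few lines - an interrupt bumps a counter 60 times a second and TI$ is just that counter formatted as HHMMSS. A toy model (mine, not the actual BASIC ROM logic):

    def ti_string(jiffies, hz=60):
        """Format a jiffy count the way a TI$-style clock would."""
        seconds = jiffies // hz
        h, rem = divmod(seconds % 86400, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}{m:02d}{s:02d}"

    print(ti_string(60 * 90))   # "000130": one minute thirty seconds after reset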


I mean -- timestamp your files in ProDOS? Control your sweet X10 home automation setup? Clock your employees in and out? Run your BBS?


If you were that fancy, you bought a clock card, discovered it didn't actually work with ProDOS, tried to hack it or sent it back, etc.


It didn’t? I never had a real clock card for my Apple //e, but I had a simulated one with my Apple //e Card in my LC II. I never knew the real ones didn’t work with ProDOS.


Oh I meant hypothetically - some of them did and some of them didn't since some of the cards pre-dated ProDOS. What I was trying to say was, if you wanted to do something where an RTC made sense, you would almost certainly have to buy extra hardware to do the something itself as well - it's not like the machine had as much as a serial port. I imagine the //e card gave a view of a nigh-impossibly tricked out //e, as would make sense for a //e that has an entire Mac as its 'peripherals'.


Yep

https://en.wikipedia.org/wiki/Apple_IIe_Card

Also, it would translate ImageWriter output to QuickDraw and you could print to any printer connected to your Mac.



