
That's something I think about a lot, and would love to find a good solution for. Pasting in a post I made to the Pyret list once, on the difference between the BASIC model and the current event-loop-based GUI canvas:

From an imperative programming point of view, it does "leak" the fact that the universe is running at top level and pulling updates from your program, rather than sitting there passively and being written to. I was thinking of things more in terms of where a canvas lies on the GUI/output-stream split: an imperative program can simply emit text, audio, network packets, etc., and trust that there is a universe out there to act on them, and of course if you have a GUI you know there are reactive components with two-way interactions that you have to monitor in some sort of event loop. But despite being conceptually one-way, a graphics canvas needs to run in an update loop, more akin to a GUI than to an output stream.

I miss the days of single-application computing, where the running application could say "print 'hello world'" and hello world would appear at the text cursor, or "line 0, 0, 100, 100" and a line would appear on the display in an entirely analogous manner. It made graphics something you just did, rather than something you had to structure your program around.
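To make the contrast concrete, here is a rough sketch (mine, not from the original post) of that "just emit output" style, using Python's standard-library turtle module: you issue a drawing command and a line simply appears, with no explicit loop written in your own program.

    # "Microcomputer" style: issue a command, a line appears.
    import turtle

    pen = turtle.Turtle()
    pen.penup()
    pen.goto(0, 0)         # move the graphics cursor to the origin
    pen.pendown()
    pen.goto(100, 100)     # roughly analogous to BASIC's  LINE 0, 0, 100, 100

    turtle.done()          # hands control to a hidden Tk event loop

Even here an event loop does exist, it is just hidden inside the library, which is exactly the "leak" described above once you need interaction or animation.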

On the positive side, the big-bang model works out a lot better for games, sprite-based stuff, and anything that wants live objects on a vector canvas. The microcomputer model was nice in its day, but I guess it doesn't really reflect the way computers work any more :)
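For comparison, a minimal sketch of that big-bang / world-program shape in Python with tkinter (the on_tick / to_draw names mimic the world-model idea and are not Pyret's actual API): the driver owns the loop and pulls a fresh frame from your functions on every tick.

    import tkinter as tk

    WIDTH, HEIGHT, TICK_MS = 400, 200, 30

    def on_tick(state):
        # state -> state: move the sprite one pixel right, wrapping at the edge
        return (state + 1) % WIDTH

    def to_draw(canvas, state):
        # state -> picture: redraw the whole frame from the current state
        canvas.delete("all")
        canvas.create_oval(state - 10, 90, state + 10, 110, fill="black")

    root = tk.Tk()
    canvas = tk.Canvas(root, width=WIDTH, height=HEIGHT)
    canvas.pack()

    def loop(state):
        to_draw(canvas, state)
        root.after(TICK_MS, loop, on_tick(state))  # the "universe" pulls the next update

    loop(0)
    root.mainloop()

The whole program is structured around the tick loop rather than around a sequence of drawing commands, which is exactly the trade-off being described.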



