https://github.com/binji/binjgb

There's something so satisfying about a virtual machine that fits in a ~106K WASM file and can play hundreds of classic games like Tetris and Super Mario Bros (via ROM collections on the Internet Archive). I don't usually play games, but this emulator is so cute and fun, I keep coming back to waste time on it.
Actually, PICO-8 was the last time I felt this kind of child-like joy about a computer.
I get a similar feeling from this Infinite Mac project: https://github.com/mihaip/infinite-mac
It's so pleasing to see a running Macintosh in the browser. That interface feels like an old friend. The underlying VM, BasiliskII, is a WASM file of a little less than 1MB. Amazing!
From the entertaining article, I learned about "retrocomputing". Now I know, that's what I'm into, haha. I especially like it when it's virtualized - like a microcosmos.
If you thought that was a time sink, just search for "retrocomputing" on YouTube. We'll see you back here once you resurface in a year or so :)
It's a huge hobby with an enormous number of cool projects. The one that most impresses me is https://misterfpga.org/ where the community has built FPGA implementations of almost all common game consoles and home computers from the 80s and early 90s.
Thanks for the link--this is amazing. I'm still clinging to the hope that someone releases a single-board computer with the classic Mac architecture. I love my old Macs, but I know eventually they will pass on, and having a 68xxx-based SBC that could run System 7 would be awesome.
Wait until you see the IOCCC entries on emulation. An 8086 emulator under 43 KB. A PDP-11 in a few lines, too, running the OLD IOCCC entries from the '80s inside that emulator.
Kitty terminal: https://sw.kovidgoyal.net/kitty/ ... It took a while to make Kitty play nice with GNU Screen because the author has a hate-on for terminal multiplexers, but it was totally worth it.
I'm curious about that too. kitty + tmux has been a solid replacement for my long use of urxvt + its tabs plugin, and it was no trouble at all to get set up.
I swear. What happened here? Like, why do things feel more sluggish than software from over 20 years ago? I just can't wrap my mind around it. We have SSDs and practically infinite CPU and RAM now compared to those days, yet there is always this feeling that the computer is doing something more than I asked for behind the scenes. Even on websites, you can feel when you click on a link that it is doing something more. It's a very uncomfortable feeling.
We transitioned from a "high-trust society" (where software, the Internet, etc. did not need to take so many security considerations into account or protect itself from enemies everywhere) into a "low-trust society" (encryption everywhere, hardware mitigations for Spectre-style attacks, JavaScript-based tracking, etc.).
The overhead is similar to the one people face in third-world countries: you pay once for the government (taxes), then again for basic services (private education, private security, private health, private transportation), and you still end up in a worse situation than in a (high-trust) first-world country.
I tried using a Core 2 Duo laptop recently, but the simple fact that HTTPS is mandatory nowadays, and Core 2 doesn't have AES-NI, makes the laptop spin its fan much more than it should, even if I try to use an old Linux/Windows.
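If you want to see roughly what the missing AES-NI costs, a quick sketch like the one below is enough (Node + TypeScript, just timing AES-128-GCM through the standard crypto module; `openssl speed -evp aes-128-gcm` measures the same thing more rigorously). The chunk size and totals are arbitrary choices, not anything TLS actually mandates:

    import { createCipheriv, randomBytes } from "node:crypto";
    import { performance } from "node:perf_hooks";

    // Hedged sketch: push ~256 MiB through AES-128-GCM and report MiB/s.
    // Only meant for comparing a machine with AES-NI against one without,
    // not as a proper TLS benchmark.
    const key = randomBytes(16);
    const iv = randomBytes(12);
    const chunk = randomBytes(64 * 1024); // 64 KiB chunks, roughly TLS-record sized
    const totalMiB = 256;

    const cipher = createCipheriv("aes-128-gcm", key, iv);
    const start = performance.now();
    for (let i = 0; i < (totalMiB * 1024 * 1024) / chunk.length; i++) {
      cipher.update(chunk);
    }
    cipher.final();
    const seconds = (performance.now() - start) / 1000;
    console.log(`AES-128-GCM: ${(totalMiB / seconds).toFixed(0)} MiB/s`);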
You just don't remember 20 years ago. The modern OSs at the time were sluggish too, as they were pushing the limits of their hardware. Your computer today would be pretty quick if it only did what your computer of 20 years ago did and in only 1024x768.
OSs today do a lot more than they did 20 years ago, at about the same speed.
Now, we can debate if all the extras are necessary, but that seems like a different debate than speed.
> You just don't remember 20 years ago. The modern OSs at the time were sluggish too, as they were pushing the limits of their hardware.
I would venture to guess that you will not find anyone whose experience with Windows 2000 included any slowness comparable to what we see today, if they got it on a new PC of the time or if they installed it themselves on a newish PC of the time. It’s tempting to blame input and display latency on the increased amount of work an OS does versus wherever we happen to sit on the curve of Moore’s law, but that discounts the fact that the values, priorities, and abilities of the engineers and designers involved in OS development might have changed.
This is the sort of thing that can be tested and measured. I’d wager if you benchmark a good-spec twenty year old PC and any Windows 11 PC on common user tasks in the UI, the twenty year old machine will be faster while using a higher percentage of its CPU time performing the simple tasks you’re testing against. It’s more or less impossible to use a significant fraction of the CPU’s time dragging the mouse pointer around and opening and closing menus, or even opening and closing normal user apps, on a modern desktop PC, as you might very well see on the old one, so one might want to contemplate why the new PC’s UI has so much latency built in compared to the old one…
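If anyone wants rough numbers for that, here's a minimal sketch (browser TypeScript, just performance.now and requestAnimationFrame) that logs input-event-to-next-frame latency for clicks and keys on whatever page it runs in. It ignores keyboard, mouse, and display latency, so treat it as a relative comparison between UIs rather than an absolute measurement:

    // Log input-to-next-frame latency for pointer and key events.
    // This misses hardware and display latency, so use it to compare
    // pages/apps, not as end-to-end latency.
    const samples: number[] = [];

    function probe(): void {
      const t0 = performance.now();
      requestAnimationFrame(() => {
        samples.push(performance.now() - t0);
        if (samples.length % 50 === 0) {
          const sorted = [...samples].sort((a, b) => a - b);
          const median = sorted[Math.floor(sorted.length / 2)];
          const p95 = sorted[Math.floor(sorted.length * 0.95)];
          console.log(`n=${samples.length} median=${median.toFixed(1)}ms p95=${p95.toFixed(1)}ms`);
        }
      });
    }

    window.addEventListener("pointerdown", probe, true);
    window.addEventListener("keydown", probe, true);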
That’s extremely good stuff, but hopefully it’s obvious from my previous comment that keypress latency isn’t really what I was talking about testing. With some good software and a modern USB 3 keyboard you can engineer a program that, on a Windows 11 PC, puts a character on the screen at some respectable time multiple (sigh) of an Apple II, but how much unavoidable UI and other OS slowness would a user have to navigate through just to start up such a program? (And do we really think hardware performance is the limiting factor there, or poor software engineering and design?)
I tested NT 3.something on a DEC workstation with an Alpha CPU, at a trade show in the early 90s.
Already then, it didn't seem like dragging things around would eat a significant amount of CPU. I started a bunch of AVI videos playing at the same time, while dragging windows around, and everything was smooth.
The interesting question is: if opening up and playing all those same videos in a similar fashion is a less smooth experience today with windows and its built in stuff, on a PC that’s a thousand times faster, why?
Watching videos isn’t a great test of what we’re talking about, but the “everything else” involved in getting to them, opening them and navigating them might be revelatory?
Right, but a 4K monitor only has about 10.5 times as many pixels as a 1024x768 one. In 2000 a high-end GPU might have had 32MB of VRAM; these days even Intel's worst stuff has access to over 1GB (over 30 times as much), and even low-end gaming GPUs have 4GB, with higher-end parts at 8 or 12GB. Plus the GPUs themselves have gotten much faster, and OSs have moved more GUI rendering from the CPU to the GPU.
This is very far off topic, but a user of an Alpha workstation might have had much higher resolutions than 1024x768 available to them with, for example, Intergraph-produced video hardware. The workstation in question (we've actually wandered pretty far past the "20 years ago" mark with this particular machine) might have been ancient enough that an MPEG-2 video decoder would not have been a thing, but just playing a bunch of AVIs on a desktop with 1920x1200, or 2048x1152, or something of the sort resolution would have been available, for a price.
Not only do I remember 20 years ago, but I collect vintage hardware and I run Windows 2000 on several Pentium III machines. It's faster on that old hardware than Windows 10 is on my i7s. File Explorer is especially a culprit here, office applications a close second; back then it just did what it needed to do, and did it efficiently. What the hell Windows 10 and 11 are doing in Explorer I don't know, but there's a lot of bloat.
As a Windows early-adopter — v2.01 — and a user for over 30 years at this point (which incidentally is precisely why I avoid it these days, run & work in Linux and am always investigating alternatives), Windows 2000 was my favourite version of all time.
But the NT 4 desktop, before the bloated sluggish abomination of Active Desktop™®, was the high point.
And NT 4 was significantly more sluggish than NT 3.51, despite the mistake of moving the GDI into the kernel in an attempt to improve performance.
2K with Active Desktop removed would be ideal, but I don't think it's possible.
I think that’s what GP was saying. It feels like everything is doing way more behind the scenes, and it feels sluggish. I almost can’t type anything long in Slack because the key input latency is so awful. What is it doing? Firing off a cascade of JS event hooks that pre-query reaction gifs, fetching away status for every user I might have mentioned, or what?
Slack is obviously a pathological example, but it illustrates what’s feeling more and more common in modern GUI applications.
>What is it doing? Firing off a cascade of JS event hooks that pre-query reaction gifs, fetching away status for every user I might have mentioned, or what?
Probably not sluggish for any of that; more likely some incompetent dev didn’t want to deal with the built-in input control or the inconsistency between OSes, so they rolled their own and coded it badly.
The number of times I’ve had to deal with devs deciding we must create a custom color picker, only for them to code it so that it glitches if you drag your mouse off the edge of the color wheel... Or coding their own text controls that break OS spellcheck or autocomplete and have input delay for some reason.
There just isn’t that much talent around that’s up to the task of building inputs, yet all the low-talent devs rush to roll their own for the smallest reasons.
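To make it concrete, here's a hedged sketch of the anti-pattern in TypeScript (nothing here is from Slack's or anyone else's actual code; lookUpUser is a made-up placeholder):

    // Rolled-your-own "text control": contentEditable plus synchronous work
    // on every keystroke. OS spellcheck, autocomplete, IME and undo behaviour
    // all become our problem, and the per-key work is what users feel as
    // input delay.
    const box = document.createElement("div");
    box.contentEditable = "true";
    document.body.appendChild(box);

    box.addEventListener("input", () => {
      const text = box.textContent ?? "";
      // Typical mistake: rescan the whole buffer for @-mentions on each key.
      for (const word of text.split(/\s+/)) {
        if (word.startsWith("@")) lookUpUser(word); // hypothetical helper
      }
    });

    // Placeholder for whatever per-mention lookup the app would do.
    function lookUpUser(mention: string): void { /* ... */ }

    // The boring fix: use a plain <textarea>, keep the native editing
    // behaviour, and run the expensive scanning debounced, off the hot path.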
That sounds more realistic, and it's something you can observe in sites like Facebook Marketplace. It's a SPA with all custom-built controls. About ten years ago I sneered at "jQuery-sprinkle" frontends, but I never imagined how bad thick SPA apps would be.
Win2k and XP definitely felt snappy at the time on new hardware. It's easily verifiable by installing the OSes of the time on a machine of the appropriate era. Functionality-wise, I have no idea what it is exactly that modern OSes do that's 'a lot more'. There are a lot more abstractions going on, that's for sure.
Nope, I have video recordings from my Windows 2000 days. Things load up as soon as I double click on the icon. Like fully ready for me to start typing. It's not a false memory.
The speed to complete operations is different than the feeling of snappiness, i.e. there is less latency in classic Finder even if the file copy takes longer. Sure there are valid reasons like a lot more animations nowadays and sometimes network latency (any web app), but that doesn't change the fact it's more sluggish.
1) people saying (and believing) that CPU, RAM, and disk are fast enough and cheap enough to be effectively infinite, and no longer worthy of concern. put another way: developers feeling that their target platforms are no longer constrained.
2) many software developers simply stopped caring about performance and began implementing all kinds of things extremely naively. new developers do this continually for many years early in their careers.
3) JavaScript. the places this language is used today, where it should never, ever be used. excruciatingly slow language. and JavaScript is EVERYWHERE, even on extremely constrained platforms, doing work, when almost any other language would have been a better choice.
> many software developers simply stopped caring about performance...
Little "conspiracy" theory here if you can call it that, and not one that a lot of people will agree with I imagine, but I sometimes think this was caused by the mass adoption of "premature optimisation is the root of all evil".
The problem is, for a lot of devs, quite possibly myself included, it's either premature or an afterthought. With mass adoption of that phrase, it becomes an afterthought. For me, for better or worse, I can't get out of it. That doesn't mean I never write inefficient software, especially in my day job where I come into conflict with that, but it's always the first thing on my mind.
Maybe I'm way off base there and just trying to justify bad habits, but I can't say it feels bad to me.
Premature optimization is time spent not optimizing things that actually matter.
In my own circles, it's the accusation legitimately levied against the clowns who simultaneously write macros instead of functions out of fear that the compiler could theoretically ignore a __forceinline keyword - or, if you're lucky, merely optimize some microbenchmark while pessimizing overall program performance - and in the next breath write O(scary) code that'll be invoked with large enough N to demolish any savings from the micro-optimizations.
That is, these are people who, despite an obsession with performance, are still treating the performance of their overall programs as an afterthought. Because they don't actually care about performance as much as they say: they care about showing off and flexing their coding skills by making a mess.
Well, the phrase itself doesn't help guide these people towards real performance engineering, but they're typically stubborn enough to ignore the phrase anyway. They must instead be goaded into using a profiler or writing benchmarks, which can then be picked apart when they optimize away the actual execution of everything at compile time, teaching them by way of concrete example at least a little about writing better microbenchmarks, which they can then use to show off and flex more effectively.
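A hedged caricature of the pattern, in TypeScript only because it's short (the function names and the bit trick are made up for illustration):

    // The "clever" version: micro-optimized loop body, quadratic algorithm.
    function dedupeIds(ids: number[]): number[] {
      const out: number[] = [];
      for (let i = 0; i < ids.length; i++) {
        const id = ids[i] | 0;                    // pointless bit trick
        if (out.indexOf(id) === -1) out.push(id); // O(n^2) overall
      }
      return out;
    }

    // The boring version: no tricks, O(n), and faster on any large input.
    function dedupeIdsFast(ids: number[]): number[] {
      return [...new Set(ids)];
    }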
That doesn't sound like premature optimisation to me. A problem, certainly, but premature optimisation should be exactly what it sounds like: optimising something before it needs optimisation.
Or perhaps it frequently manifests in that way, but I wouldn't say that's what it is. I know that in my case it's much more general, usually because my approach to optimisation (when I'm working on personal projects, of course) is to change the approach rather than change the code. As you say, overly optimised code is difficult to read and maintain, which in my view is a problem of optimisation itself, so I just try to find a compromise and do something that's more efficient.
But I'd still call that premature optimisation, because it's rarely triggered by a problem.
Honestly you don't even have to think about optimization to make your software not horrifically slow on modern hardware, instead you have to think about non-pessimization. Simply put, avoid needless complexity and gratuitous waste in the first place.
See Refterm Lecture Part 1 - Philosophies of Optimization by Casey Muratori:
Oh absolutely, you can write perfectly fine software just by being considerate about what you're writing and never have to even consider optimisation. This is one area where premature optimisation becomes a problem for me; I know this, and I prematurely optimise anyway (though I am working on turning that down a bit).
For me, another widely held belief (although far less so than "root of all evil") is also somewhat the cause of this problem, particularly the issues you're talking about: the idea that if a library exists for something, you shouldn't waste time rewriting it, which ends up with projects stacked to the brim with dependencies. Of course sometimes it is more efficient to use a library, I don't intend to rewrite Perlin noise for example, but I don't think it should be your first choice.
I don't believe software developers stopped caring about performance per se. There's a tradeoff between performance and productivity being made. If you can build a cross-platform app in JS / React Native in half the time it would take to make two native apps (plus maintenance over the years), a lot of teams will quickly opt for productivity, because a working and launched app always beats a fast app. Usually.
Personally I'm of the opinion that companies should just spend the money and build native apps, accepting that feature parity may be lagging between one or the other app, but that's me.
Anyway the other issue is that developers will be working on faster machines (on average) than end users, so they won't feel or notice the perf issues so much. And they get used to it really fast.
Re: powerful dev machines "hiding" poor performance
For this reason I hold the belief that software should be developed for (if not on) 10 year old machines. Thanks to this thread I might update that mark to 20 or 30 ;)
That might sound absurd, but whatever you're making, there was probably something similar already a few decades ago: it ran fine on the hardware of the day, the binary was 100x smaller, it used hundreds of times less RAM... so that it can be done is evidenced by the fact that thousands of people have done it already!
Now of course, modern CPU features, advanced use of the GPU and using more RAM can make things much faster. Such optimizations are fine, to be sure, but non-pessimization needs to come first!
(Of course there's also many strong arguments to be made for actually supporting older and weaker hardware, mainly it's needlessly wasteful not to, and people will actually be able to use your software if you do!)
JavaScript is one of the fastest dynamically typed languages. People use it instead of other languages because the productivity of dynamically typed languages is generally higher.
I’d be interested in hearing what are those other languages that would be faster to run and not slower to program in.
"Not slower to program in" is a cost savings to the business, but let's be honest about that being what it is. It's a cost savings for the business that results in a crummier product for customers. Just like trimming a single tomato from the salad on in-flight meals is a cost savings to Delta. Doesn't mean it's a "better" salad.
As far as I can tell, it's a question of having or not having whizbang bloated apps like Slack, but we had perfectly acceptable word processors, spreadsheets, 3D modeling tools, etc a long time ago. The current gen, JS-powered versions of those tools (Google Drive, Quip, Notion) all feel slower. Sure, there are fewer BSODs now so it probably evens out, but the extremely low latency UX of those tools back in the Windows 2000/XP days is lost.
Yet Figma blows Illustrator and Sketch out of the water.
A lot of it is really just down to developer talent and how much they care about it. Examples exist in the world proving it’s possible. Illustrator is slow for the same reasons: no one working on it cares about what they’re building. But it would still need 10x speed improvements in multiple areas to beat Figma now.
And Office 365 is about as bad as Google Docs. It's honestly probably not a JavaScript problem. Some mixture of businesses not giving a damn about quality because it's more profitable to play the acquire-integrate-exterminate game than to make quality software, a lower barrier to entry meaning we get more developers who are productive because of better tooling but probably in over their heads making performance-critical things like input controls in JavaScript, and piles of new abstractions without the old abstractions ever going away.
> the productivity of dynamically typed languages is generally higher.
This is the assumption, yes, and I am not sure that it's true at all; people simply believe it to be true rather than actually check to see if their assumptions are correct.
I believe this to be a very false assumption, in reality. The number of lines of JS that I see which do a particular thing is very high compared to other languages.
This is not a JavaScript example, but it goes to my point that our assumptions are wrong. For example, can someone explain to me why the Dropbox desktop client 3 years ago was 4 million lines of Python? [0] If Python is such a high-level language, why are so many lines of code needed to do such a simple thing?
The "4 million lines of Python" in that article is referring to the number of lines that are type checked (I don't see any mention of how much code remained unchecked, although the graphs might imply some 20%), but it also includes Dropbox' server side code base; the shown graphs imply that the server side is 4x larger than the client side.
Assuming the above implications are true, the client side app would be 1 million lines of code. I guess someone with the client could go and measure?
I don't mean to say that 1M loc isn't still large, but you seem to be drawing conclusions by using the numbers incorrectly.
how high-level are "high-level" languages? I would say that it would take far fewer lines of C or even Assembly to accomplish the same goals. 50k lines of C, probably. XTerm is ~70k lines of C, if I recall (I am certainly within 0.5 orders of magnitude), and it is extremely complex and does much more.
if you want a more direct comparison, look at rsync or syncthing. I haven't looked at the size of those, ever, but I'd be willing to bet that both are FAR smaller than 1M lines of code.
So, how is it acceptable that a high-level language requires more lines of code than a "lower-level" language like C or Go? we're supposed to GAIN SOMETHING in the tradeoff by using high-level languages, and we're not. certainly not as much as we assume we are.
we need to pay attention to these assumptions that we're making to ensure that they're true, periodically. we're not. it's just accepted as fact that "high-level" languages require fewer lines of code and less work to produce high-quality software, and I don't think we're seeing the benefits of the tradeoff anymore, if we ever really did, outside of tiny example programs.
Who's to say? The interesting bit in this discussion is why the code size is large, and looking in detail what the programs do would be a start.
Of course, one could suppose that using a high-level language makes it easier to write code, hence, more code is going to be written. Or you or I could really conclude anything we want, if we're willing to pretend to know. It would be interesting to know, but that takes work to find out. And I'm a lazy bastard.
PS. not to argue any more :), but I have checked the size of rsync 3.2.3 now, it's 51 Kloc of C code (including the 6300 loc of the included copy of zlib). You'd need to add a cryptographic library, a GUI library, and GUI code to make it a viable replacement for Dropbox, without comparing features. Git annex 8.20210223 is 70 Kloc of Haskell and 9100 loc of JavaScript, but it depends on Git to do its work, which (version 2.30.2) is 327 Kloc of C code and 225 Kloc of sh and Perl code. Thunar 4.16.8 is 73 Kloc of C code, sshfs is 4700 loc and fuse is 22 Kloc of C, openssh is 113 Kloc of C, so maybe you could build a Dropbox replacement in 212 Kloc of C without looking through the feature set. Maybe that's still 4-5 times smaller than the Dropbox client, but your number of 50 Kloc is again off by a factor of 4. :) (I guess one could say that the truth is "somewhere in the middle", looking at those numbers on a logarithmic scale...)
I'm sure you could also come up with a much smaller implementation, just like you could write a music demo in 256 Bytes[1]. But why would you want to?
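If anyone wants to reproduce rough counts like the ones above on a code base they have locally, a quick sketch like this is enough (Node + TypeScript). Real tools like cloc or tokei do it properly (comment and blank stripping, language detection); this just counts raw lines per file extension:

    import { readdirSync, readFileSync } from "node:fs";
    import { extname, join } from "node:path";

    // Walk a source tree and count raw lines per file extension.
    function countLines(dir: string, totals = new Map<string, number>()): Map<string, number> {
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const path = join(dir, entry.name);
        if (entry.isDirectory()) {
          countLines(path, totals);
        } else if (entry.isFile()) {
          const ext = extname(entry.name) || "(none)";
          const lines = readFileSync(path, "utf8").split("\n").length;
          totals.set(ext, (totals.get(ext) ?? 0) + lines);
        }
      }
      return totals;
    }

    // Usage: ts-node count.ts /path/to/source
    console.log(countLines(process.argv[2] ?? "."));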
so, 1 billion lines of code is acceptable for a file sync tool, then, so long as other pieces of software also have that many lines? is that what you're saying? how many is too many, then? 10 billion? 1 trillion?
have you ever written software?
1 million lines of code for the Dropbox client is absolutely insane, and this is obvious to anyone who has written anything of substance. graphical libraries, security layers, it's all very simple to write stuff if you just take the time to do it. there is just nothing there worthy of 1 million lines of code, and the only way you would reach that line count is to liberally adopt hundreds or thousands of libraries so you can use a function or two from each of them (meaning: be a terrible developer who writes nothing, and only strings together the work of others)
Your comment is currently downvoted; I didn't do that (I don't even have the necessary karma yet). I'm feeling somewhat supported by that, given that your comment seems to be questioning my judgment or my ability to write software, although maybe that's not how you meant it.
Yes, I've written software, a few hundred Kloc over two decades. That won't look like much. I think most of my code is pretty dense, though, yes, I'm taking my time usually, and I've largely worked alone, which supports that sort of activity. I've always wondered myself how you arrive at those huge code bases. Some people (like Torvalds in earlier projects) are just much more prolific, I don't have any illusions about that. I've written both in low-level languages (C, C++, some assembly) and high-level ones (Scheme, Perl, Python, Bash, some Haskell and JavaScript), more in the latter area.
I've been able to experience that the same code written in C++ is several times (IIRC 3-4x) larger than when written in Scheme (both code bases were written by me, first in Scheme to flesh out the algorithm, then rewritten in C++ identically except for the memory handling). And it takes even rather longer than that to write. I can also see how having a language that's easy to work with will probably lead a developer to spend their time just producing more "meat" in the same time frame, leading to the same or possibly even larger numbers of loc. I can also see how needing to care a lot about how memory is managed might lead to caring a lot about other people's code, hence to more re-use. Code being costly to create might also give management a sense of urgency to re-use it, or to reduce the amount of it being written. But all of those points are just conjectures, I don't have any data on them. They are interesting conjectures, though, and I'm with you on them.
What I was, or meant to be, saying is that we need more details and need to be careful about conclusions. Ranting about bloat is common, it seems fashionable, and there surely is truth to it having real downsides, and it may be right to try to get the world to care, but if one wants to influence anything, one needs to know the details. Already the label "bloat" itself is up to interpretation: is e.g. embedding a framework to serve ads bloat (a user would likely say yes) or essential (some businesses will say yes)? Is the feeling of drowning in bloat coming from the recent tendency of Windows to do too much undesired activity (again, probably warranted)?
I didn't mean to ridicule you by posting the link to the demo. I was making fun of our often-present (shared?) desire to produce beautiful, small code--and its applicability to real-world money-making work. I might be proud if I can produce small code, but the business may very well not care at all (rightly so, if it's at such extremes). As you say, it takes time to write shorter code, and that has to pay off. Doing it any other way would not be smart. So there must be a value in reducing code size, and what that is exactly is one of those details we're not talking about. So I posted the link to say: hey, you could write such code, if you wanted--the question is just, do you want to? Does the business want to? Why, or why not? That's the interesting question, I think. And I was just correcting you on the numbers that you were posting.
> Even on websites, you can feel when you click on a link, it is doing something more
It is: analytics hooks, which require network round trips. Otherwise I disagree; this is false memories and/or a very superficial observation of "snappiness".
It really is bad.
Somehow I brought an 8-core desktop to its knees because a combination of heavy disk+network IO and using PyCharm caused Windows to just stop servicing PyCharm at all, until I stopped the download. I had free memory, disk, and CPU time; I think interrupt processing was stalled or filled up? But the hardware was idle.
Most apps connect to the internet for 'updates' or 'content synchronization'. Every time you open a new tab in your app there's some spinning wheel you have to wait for.
I had (still have, actually) a Quadra 840AV, which was the fastest 68k Mac they ever made, and it never felt this fast. If they emulated the actual performance of a 68k (and the associated hardware) you wouldn't long for the performance it offered. But sure, a much simpler (and, out of necessity, more efficient) OS screams on today's hardware, even emulated. On the other hand, it didn't pack anywhere near the amount of functionality, so the majority of the ROM and OS will fit in the L3 cache of today's processors; of course it's fast.
I agree with you, and I think that part of the issue is the animations that run when doing practically anything on a newer OS, like maximizing or minimizing a window.
Edit: in some cases animations can be disabled, which does a bit to bring back the snappy feel, but there are probably additional reasons why the modern stuff might not be/feel as snappy.
Try an Amiga with plenty of disk space and RAM. It took the PC world 10 or more years to catch up with the Amiga in responsiveness. Part of the reason was the Amiga OS was designed for user interaction, and the keyboard, mouse, and display got plum choice of IRQs.
The next time I would see something as snappy as the Amiga was an M1 Mac.
The PC never really felt as solid as an Amiga. The Amiga just feels like a single coherent system, while the PC feels like hacks upon hacks upon hacks, and the seams show when it moves.
Well, first of all it is running inside a modern OS and being snappy while that OS does all kinds of other things.
What is often slow are desktop applications built with total disregard for performance. The common "everything must be done using the web stack" attitude does not help much either.
It's striking how quickly all the apps load and respond to user input. Nonetheless this in-browser version suffers from layers of abstraction; the mouse cursor feels noticeably sluggish (high latency), and audio is delayed by over half a second, desynchronizing entirely from video.
Wait.. Dark Castle actually runs?!
I can't get the controls to work, can anyone help? I desperately want to show this to my son so he can see what I used to play. :)
Not sure for PC or Linux, but the keyboard works in the newest Chrome on my older Intel Mac Mini. I just had to go to Apple -> Control Panels -> Monitors -> scroll up -> Black & White before launching. For new players, the keys are under the Options button at lower-right: AWDS (move), Q (action), E (duck), Space (jump), using the mouse to aim the arm up and down and clicking to shoot. The Info button explains: keys 1-4 to choose the door in the beginning, Tab (pause) and Command-Q (quit, but quits the browser..), some keys can be held down.. good luck!