C64 OS: make a Commodore 64 feel fast and useful (c64os.com)
204 points by cpeterso on Dec 10, 2021 | 109 comments



I love these projects that show what can be done with a 1 MHz CPU and mere kilobytes of RAM. It really makes you wonder what we're doing with the other 99% of hardware performance we've achieved since then


It's closer to something like the other 99.9999999%.

What we've done with it is things like: when you type text into what looks like a simple edit field, each keystroke launches a cascade of javascript.

Layers and layers of APIs cause latency. To get this to happen, you have to talk to this broker, which calls this proxy, which delegates to this manager, which queries this store, ...

You will probably almost never see call stacks 27 frames deep when debugging anything on a C64.


"the other 99.9999999%" would imply that computers now are a billion times faster than a C64. For some tasks, that's true; a GeForce RTX 3090 is claimed to be 36 teraflops, which is a billion times 36000 flops, and the C64 is more like 10000 flops if we figure we need 100 clock cycles per floating-point operation.

But, for other tasks, it's not a billion times faster; single-threaded 16-bit integer code might get 0.125 operations per clock cycle on the C64 and 3 operations per clock cycle on a modern CPU with a 4000 times faster clock, making the modern computer only about 96000 times faster. The way I look at things, 96000 is closer to being 100 than it is to being a billion.

In cases like that, it's only making you wonder what we're doing with the other 99.999%.
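
For anyone who wants to check the arithmetic, here's the back-of-envelope in Python (the 36 Tflops, 100-cycles-per-flop, 0.125 vs 3 ops/cycle, and 4000x clock numbers are the rough guesses above, not measurements):

    # Rough speedup estimates using the figures from this comment (not measurements).
    C64_CLOCK_HZ = 1_000_000                                  # ~1 MHz 6510
    rtx3090_flops = 36e12                                     # claimed GPU throughput
    c64_flops = C64_CLOCK_HZ / 100                            # ~1 flop per 100 cycles -> ~10,000 flops
    print(f"flops ratio:   {rtx3090_flops / c64_flops:.1e}")  # ~3.6e+09, "a billion times"
    int_speedup = (3 / 0.125) * 4000                          # ops/cycle ratio * clock ratio
    print(f"integer ratio: {int_speedup:,.0f}")               # 96,000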

Once your program is using more data than will fit on a 1541, though, everything totally changes.


> single-threaded 16-bit integer code might get 0.125 operations per clock cycle on the C64

Merely adding two 16-bit integers on the 6510 takes 14 cycles on fixed zero page locations for operands and destination. You'll easily spend 150 cycles on a general 16x16 multiplication using look-up tables. Not even measuring the juggling of values into and out of these fixed zero page locations via three 8-bit registers or the stack, we're talking about something like a tenth of your estimated op/s for an even mix of additions and multiplications. So I'd say much closer to a million times faster for a use case like this than 96000.

There may be special cases where the 6510 achieves 0.125 16-bit operations per cycle, for example multiplying by constant two and adding constant one (10 and 6 cycles, respectively).
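
Rough Python tally using the cycle counts above (the 14-cycle add, ~150-cycle multiply and 0.125 ops/cycle figures are the estimates from this thread, not measurements):

    # Effective 16-bit ops/s on a ~1 MHz 6510, using the cycle figures above.
    CLOCK_HZ = 1_000_000
    ADD_CYCLES = 14                             # 16-bit add in fixed zero page (figure from this comment)
    MUL_CYCLES = 150                            # 16x16 multiply via look-up tables (figure from this comment)
    avg_cycles = (ADD_CYCLES + MUL_CYCLES) / 2  # even mix of additions and multiplications
    ops_per_sec = CLOCK_HZ / avg_cycles         # ~12,200 ops/s
    estimate_at_0125 = 0.125 * CLOCK_HZ         # 125,000 ops/s at 0.125 ops/cycle
    print(f"{ops_per_sec:,.0f} ops/s, ~{estimate_at_0125 / ops_per_sec:.0f}x below the 0.125 ops/cycle estimate")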

16-bit single-threaded integer code seems like a rather contrived example as well. After all, we're typically not running 16-bit applications on MS-DOS on our monster machines. Just booting into my OS will result in all cores being used to execute a bunch of 32/64-bit operations.

It would be interesting to see something like a modern cryptographic hashing algorithm implemented on a 6502 and compare performance both on long messages and on many smaller messages. This should give us an idea of how much slower a 6502 is at integer operations.


I very much appreciate the corrections, particularly from someone who knows the architecture so much better than I do. Most days I regret commenting on HN, but today is not one of those days.

To clarify, by "16-bit integer code" I meant code that doesn't use floating-point, in which most of the arithmetic is done on 16-bit values, not code for a 16-bit integer machine like the 8086 or code consisting entirely of 16-bit arithmetic operations. My reason for picking 16-bit was that most of my integers fit into 16 bits, but often not 8 bits. Usually arithmetic that needs more than 16 bits of precision is address arithmetic, and on the 6502 (or 6510) that's often handled by the 8-bit X and Y registers. Even multiplies are much less common than addition, which in turn is less common than MOV. And of course jumps, calls, returns, and 8-bit-index loops (inx; bne :-) suffer comparatively less slowdown than the actual 16-bit operations in the 16-bit integer code, and they usually constitute the majority of it.

I agree that cryptographic algorithms routinely do very wide arithmetic. They want as many ALU bits as they can get their little anarchist hands on. But I think they are atypical in this.

When I look at the applications running on the computers sitting around me, most of the things they're doing seem like they would fit well into the 16-bit integer arithmetic bucket, so I don't think it's contrived. The way they're doing those things (in JS, with JIT compilers, using floating point, dynamic typing, and garbage collection) is tailored to the bigger machines we usually use, but the applications (text editing, spreadsheets, networking, arguing with strangers who know more than I do, text layout, font rendering, previously futures trading) are not. The big exceptions here are 3-D graphics and video codecs, which want ALUs as wide as possible, just like crypto algorithms.


Also, modern Xeons can do 32 double-precision floating point ops per cycle, per core. Since they have dozens of cores, that's another factor of 1000; with your 10x overhead estimate for the C64, that brings the speedup to ~ a billion. (96,000 * (10 to 150x) * 1000 ~= 1-10 billion)
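
Or, spelled out with the factors quoted in this subthread (all rough estimates):

    # Combined estimate from the factors quoted in this subthread.
    scalar_ratio = 96_000        # ops/cycle * clock estimate from the grandparent comment
    c64_overhead = 10            # add/multiply cycle overhead factor from the sibling comment
    simd_and_cores = 1000        # ~32 DP flops/cycle across dozens of cores vs 3 scalar ops/cycle
    print(f"~{scalar_ratio * c64_overhead * simd_and_cores:.1e}")   # ~9.6e+08, about a billion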


Yes, SIMD operations on CPUs are in many ways similar to GPU computation.


My phone is better than an SGI Indigo, a Lisp Machine or a Connection Machine, yet it gets used as a container for web apps disguised as native.


Burn it all down.


Look ma, no pipelines.


> really makes you wonder what we’re doing with the other 99% of hardware performance

That 99% is dedicated to running Chrome


Encryption for communications, including HTTPS and TLS 1.3. If it weren't for all that, you could still take an early '90s computer on the internet.


An expansion card is recommended for this OS, so there are hundreds of kilobytes or even several megabytes, not "mere kilobytes".


Even then, I think the point would still hold.


Uh... watching 4k video, for example?


You certainly don't need modern PC power even for 4k video, though you may need it for doing realtime decompression of some very high compression codecs (but that isn't "watching 4k video", it is "doing realtime h265 decompression for 4k video" or something along these lines).

Though codecs are one of the few cases where you see hardware being taken advantage of because that is basically their entire purpose.


Playing video is a very expensive task. Just updating a 4K screen at 60 frames per second requires transferring 3840×2160×60 ~ 497 million pixels per second, or about 1.49 GB per second (if you skip every fourth byte). It will take at least several instructions (in the best case) to calculate the value of every byte, so the CPU should be able to perform billions of instructions per second.

I doubt you can do it without a "modern PC" with a hardware graphics accelerator.

Also modern codecs require a lot of computation and a lot of memory. For example, codecs like VP9 require buffering up to 8 frames for reference. That would take 8×3840×2160×4 ~ 265 Megabytes of RAM. The program will need to extract bits, decode arithmetic coding and calculate lots of IDCTs.
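
The raw arithmetic behind those figures, as a quick Python sanity check:

    # Raw 4K60 output bandwidth and VP9-style reference-frame memory.
    width, height, fps = 3840, 2160, 60
    pixels_per_sec = width * height * fps               # ~498 million pixels/s
    bytes_per_sec = pixels_per_sec * 3                  # 3 bytes/pixel ("skip every fourth byte")
    ref_bytes = 8 * width * height * 4                  # 8 reference frames at 4 bytes/pixel
    print(f"{pixels_per_sec / 1e6:.0f} Mpixels/s, {bytes_per_sec / 1e9:.2f} GB/s")  # 498, 1.49
    print(f"{ref_bytes / 1e6:.0f} MB of reference frames")                          # 265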

I did some research to see if it is possible to play Youtube video on 8-bit CPUs. What I found is that there is little hope for this. It's better to develop your own video compression format.


> Playing video is a very expensive task. Just updating a 4K screen at 60 frames per second requires transferring 3840×2160×60 ~ 497 million pixels per second, or about 1.49 GB per second

Or use an analog signal and do it the way TVs used to. There were analog HDTV standards before we all moved to digital.


Analog is a whole lot more lean.

Modern digital transfer protocols generally run at around 10x the pixel clock in analog terms. For a given clock speed, analog video can deliver several multiples of the effective resolution on time.

Analog standards seem to have stopped at 1080p. Nothing prevents signalling faster, but there are no capable displays to read it. I suppose that might be a neat FPGA project: take 4K analog signals and stream them to a digital display. It would really only need a scanline or two of buffer for the simpler "just push all the pixels" use case, or a full frame buffer otherwise.
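
For a sense of scale, a rough estimate of the buffer sizes such a converter would need, assuming 3 bytes per pixel (my numbers, just for illustration):

    # Buffer sizes for a hypothetical analog-4K-in, digital-display-out converter.
    width, height, bytes_per_pixel = 3840, 2160, 3
    scanline = width * bytes_per_pixel                        # one line of pixels
    frame = width * height * bytes_per_pixel                  # a full frame
    print(f"scanline buffer: ~{scanline / 1024:.1f} KiB")     # ~11.2 KiB
    print(f"frame buffer:    ~{frame / 2**20:.1f} MiB")       # ~23.7 MiB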


> There were analog HDTV standards before we all moved to digital

8-bit computers likely wouldn't be able to keep up with the bandwidth requirements of those systems either.


Not the original 6510, but there are some very fast 8-bit processors out there. If you compress the color space, they may be able to flip every pixel 30 times per second.

Also, nothing stops you from implementing an 8-bit CPU with a modern process and push it into the multi gigahertz range except perhaps the fact it’ll be too small for current machinery to manipulate.


> Playing video is a very expensive task. Just updating a 4K screen at 60 frames per second

Nobody mentioned anything about 60 fps. A ton of video is at 24 or 30 fps.

> requires transferring 3840×2160×60 ~ 497 million pixels per second, or about 1.49 GB per second (if you skip every fourth byte).

You do not need to update the entire screen all the time, you can do partial updates (since very frequently not 100% of the screen changes - it is the most basic fact that video codecs rely on) and you can "heal" the output over time after partial updates (also important for video playback where you can't guarantee a fast stream).

> I doubt you can do it without a "modern PC" with a hardware graphics accelerator.

Why the limitation to hardware graphics accelerators? Hardware accelerators have existed for decades now and even hardware video decoders have existed for more than a decade. If anything, not using those is the "not taking advantage of the hardware" that I mentioned.

> Also modern codecs require a lot of computation and a lot of memory. For example, codecs like VP9 require buffering up to 8 frames for reference. That would take 8×3840×2160×4 ~ 265 Megabytes of RAM.

Sure, but that is an amount of RAM computers have had for more than a decade now - the PC I used in the early/mid-2000s had 2GB of RAM - hell, even the cheapo EeePC netbook from the late 2000s had 1GB of RAM.

> I did some research to see if it is possible to play Youtube video on 8-bit CPUs. What I found is that there is little hope for this. It's better to develop your own video compression format.

Sure, but I never mentioned anything about 8-bit CPUs, YouTube or even existing video compression (or a specific framerate, for that matter). What I wrote is that you do not need "modern PC power" to do 4K video. You do need more processing power than what would be found in 8-bit CPUs (or at least the common 8-bit CPUs you'd find in 80s home computers), but I'm certain you can have a 10 year old PC play 4K video without trying that much. Perhaps even a 15 year old PC with a bit of effort.

FWIW, what I had in mind when I wrote that comment was the "8088 Domination" demo, which used a custom codec and player to do fullscreen video and audio playback on an original IBM PC. Honestly, I do not for one second buy the notion that if you can do that on a 40 year old PC you won't be able to have 4K video on a 15 year old PC - if anything I might be too generous here.


> You do not need to update the entire screen all the time, you can do partial updates

You forget that you are expected to keep up to 8 reference frames. And you need to update them too.

> You do not need to update the entire screen all the time, you can do partial updates

This would work only until first keyframe.


> You forget that you are expected to keep up to 8 reference frames. And you need to update them too.

You refer to VP9; I explicitly mentioned that I do not refer to any specific codec, just to playing back "4K video". The rest was never something I referred to, implied or even mentioned in the message I replied to.

> This would work only until first keyframe.

This depends heavily on how the video is encoded and that is only assuming you can't find a way to occasionally update the full screen.


that’s why laserdisc players have serial control :)


jazzyjackson may be referring to effects such as, oh, that your multi-gigabyte-and-gigahertz computer is less responsive to your keystrokes than a C64 while merely entering some text into a field.


Programmer productivity. As long as this is the bottleneck in determining market success of a product, this tradeoff will continue.


Comparing Delphi and C++ Builder with modern SPAs, or dealing with the Kubernetes boilerplate doesn't really sound that productive to me.


Programmer productivity has gotten worse. None of the development tools in widespread use today is as productive as Visual Basic or Delphi were back in 1996.


Have you ever wondered why?

It's because today isn't about programmer productivity.

It's about enabling teams to work together without tripping over their own feet and killing the business.

So much complexity to solve essentially what is a human problem.

It takes 5 people to do the job of 1 back then, and that's what the industry wants. No one would trust an individual.

Too much money flowed into this industry, leading to waste, as it became clearly a pillar of the future.


It took many years to realize that the role of software is to be flexible to maintain and modify at the code level.

That's the entire point of "soft"ware. Otherwise there would be no point in doing on a CPU what could be done in an integrated circuit 1000 times faster.

A single guy writing whatever code they decided to do may create a good program. But as requirements change, and the ownership gets passed on, it quickly becomes a nightmare to modify or refactor.

Good code made in SlowScript® by a team based on consensus on architecture and coding style has more potential for success as a product than the fast and optimized spaghetti code made by the best hacker in the world.


This needs a lot of qualifiers. I can see that for web development maybe (I don't know, haven't done that for almost 10 years, but I can imagine given the constraints), but for system level programming for example I don't want to go back to the 90s. Not that Visual Basic or Delphi worked for that anyway.


Also, as the internet (and technology) grew in influence and popularity, the number of use cases for them grew exponentially.

In the "golden era" computers were more or less advanced calculators. Now we (as a population) expect to give entertainment and give advice about our lifestyle, like we do with smartwatches or fitness devices.

That's a lot more complicated than writing text or inputting numbers into a grid.


> Now we (as a population) expect them to provide entertainment and give advice about our lifestyle, like we do with smartwatches or fitness devices.

I always think about Lindy effect [1] in this context: the life expectancy of a technology, idea or (in our case) a societal habit is proportional to its current age. It follows, then, that the idea of a multi-purpose computing device may also wear off in certain areas of life, or for certain types of people. Not always, not everywhere is there a need for complex or complicated devices or systems. There will always be people who seek or prefer a simpler, quieter life that involves as little technology as possible.

What's really interesting is that many of these people seem to have a remarkably deep understanding of technology, and great skills [2]. Possibly because of this knowledge, they're really clear (or, pedantic :) about what they don't need.

In the end, though, I've come to think that this may be more of a simple psychological preference (for less stimulation) than a rationalized choice. Who knows.

1: https://en.wikipedia.org/wiki/Lindy_effect

2: Some of the people I really look up to: http://collapseos.org/, http://joeyh.name/, https://sassenrath.com/


Technically Delphi could have worked, but you'd need to use your own RTL, and aside from a few additional language features, you'd be losing 99% of what Delphi offered anyway.


Lazarus and FreePascal would beg to differ:

https://www.lazarus-ide.org

Everything you ever loved about Delphi 7, but with a ton of modernization. LGPL, too!


Serious question: Can you make "Netflix money", er, "FAANG money", as a Lazarus developer?


No, but all those people making "Netflix money" aren't really changing technology, are they? I'd argue FAANG is stagnating it, and telling themselves differently.

I guess if you want to make gobs of cash, fine. You could also just go into finance and make more than FAANG money if you're talented enough.


Facebook is a major contributor (sometimes the primary one) to many famous open-source projects: Linux, mercurial, MySQL, React, jemalloc, PyTorch, GraphQL, and probably several others I’m not thinking of.

They’re also one of the top AI/ML research institutions in the world with a huge chunk of the papers published at top conferences.

Similar statements are also true of Google.


I'd argue that Microsoft, Google and Apple have enabled massive growth in technology. New uses for OSs, new places for them to run, and ways to make starting far more accessible/collaborative.

Even if it's two steps forward, one step back, the overall benefit of the options available - thanks to the vast number of people contributing their creativity - is only possible due to these companies.


They enabled new users for computers, and killed desktop computing in the process. I don't think the trade-off is worth it.


Skype was originally written in Delphi.

"Microsoft will acquire Skype, the leading Internet communications company, for $8.5 billion"


No, but that isn't because of Lazarus itself; it's because "Netflix/FAANG" don't use it, as what it does isn't really in their business interest. Lazarus is mainly about making desktop applications. It can do web, etc., but that isn't really a focus and you lose most of the visual functionality anyway (yeah ok, you can set up URL routing via the object inspector and connect DB components together, but 99% of the work is done via code anyway, so it doesn't provide something better than what you'd find in more popular tools).


There's a JS transpiler in the works for FreePascal, iirc.


AFAIK pas2js doesn't support the full runtime library yet and I think the language isn't completely implemented. IMO the wasm target will be more interesting; it was merged into the main branch earlier this year, though I don't know how usable it is.

Either way, those are things that allow you to use Free Pascal code on the web on the client side, but they aren't really "Lazarus" things. Once the full RTL can be used via pas2js or wasm, it could be possible to make a web-based backend for the LCL, but that would feel kind of alien, as you'd essentially be embedding a desktop application inside a web page.

It is also possible to make new form (object) designers for a new type of form that acts more like a Flash page or something like that, but that would require a lot of work. On the other hand, it might make building mobile apps better too.


Pas2JS already exists in the Free Pascal world. You can make applications with it. https://wiki.freepascal.org/pas2js

Also there are other Pascal to JavaScript transpilers, which use other dialects of Object Pascal, such as DWScript and Smart Pascal.


Maybe everyone trying to make FAANG money is what drove tech into this mess.


Google developed golang instead of improving Pascal, so you probably can't. The two languages have surprisingly similar goals.


> the two languages have surprisingly similar goals.

Not so surprising, given the backgrounds of the key designers of Go, I feel. Griesemer even got his PhD under Mössenböck and Wirth.


Griesemer's PhD is related to Oberon.

"A Programming Language for Vector Computers"

https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42....


I've read it. I "binged" on pretty much everything published by Wirth and Mössenböck's groups at one point. His work on Oberon really underlines the point that it's unsurprising that Go has similarities in terms of design goals.


Indeed, although the pursuit of minimalism is where my appreciation for Wirth's work kind of winds down.

As a user of a hypothetical Oberon workstation, I would rather have Active Oberon than Oberon-07 as the official systems language.


I do think he sometimes went too far, but I think the attitude was admirable inasmuch as it was driven not so much by opposition to these features as by recognition of his own lack of understanding of how to make various features efficient. That made the languages less viable in their "pure" states for day-to-day use, certainly, but also great starting points.

I wish more language designers would at least think about simplicity and implementation consequences, even if I too would prefer to work in a language where, after thinking about that, they still implement hard-to-optimise features.

My favourite language is Ruby, pretty much the antithesis of a Wirthian language, and certainly not Go, so I'm not saying all of the parts carried over to Go are necessarily things I agree with either.

At the same time, Ruby is full of things I wish had been better specified or left out, and often I get the feeling that if Matz had been forced to at least consider simplicity more when Ruby started out, it'd be a cleaner, better language for it. E.g. for the most part the Ruby grammar makes for a language that is nice to write, but it has so many dark and horrible corners that do nothing to make Ruby pleasant and are simply there because nobody cared enough about simplicity. Favourite example: "% x " may look like a syntax error, but it's the quoted string "x" - space is a valid quote character; if that isn't messed up enough, "%\nx\n" is also valid and produces the quoted string "x"... LF is also a valid quote character... Contrived Ruby hello world:

    x=%
    hello world
    
    puts x
(mind the whitespace - put a space after that "%" and it's suddenly a syntax error because the string suddenly ends at the space after "hello"; fun times)


Your evidence for this? It took a whole damn team to build a single website with "HTML programming" back then. Software was still shrinkwrapped. Modern platforms on web frameworks with CI/CD I think make today's dev a hell of a lot more productive.


Well in between the ads and google spying I often do not feel productive.


The Luddite sentiment common among programmers today really surprises me. The software we’re building today is vastly more powerful and capable than anything from the C64 era.


A Luddite is someone who is opposed to new technology, which doesn't apply here, as the sentiment isn't about being opposed to the new tech but about the new tech not being taken advantage of.

The software we build today is "vastly" more powerful in the sense that, strictly speaking, you can do more stuff, but it is way less powerful - and often broken - than what it could be, considering what you can see being possible on older hardware and what seems to be done in modern hardware.

Also, it isn't really something that only happens today; here is a nice (and funny) talk from Joe Armstrong on the topic from almost 8 years ago[0]. Though this sort of sentiment goes way back, e.g. Wirth's classic "Plea for lean software" article from 1995[1].

(Joe's talk is more about the broken part and IMO he misidentifies the issue being about trying to make things fast when the real issue -that he also mentions- is that pretty much nobody knows what is really going on with the system itself)

[0] https://www.youtube.com/watch?v=lKXe3HUG2l4

[1] https://cr.yp.to/bib/1995/wirth.pdf


This is because people don’t want to pay the cost of software carefully developed to take maximum advantage of modern hardware. In specific niches like audio production where this is desired then prices run into hundreds of dollars for a single plugin to cover the development costs.


It's always this way. Just like most people are happy with Ikea furniture, so most people are happy with the equivalent of "Ikea software". It's good enough. For folks who _are_ willing to pay, you can buy everything from low latency audio gear/software to dedicated Internet bandwidth to high reliability SBCs.


And even then, quite a lot of that audio production software is very, very inefficient, primarily due to poor GUI development. There are a few popular choices of software that are very well-known for being CPU hogs, even relative to more complex software.


In particular, the business often doesn't want to take on the goal of improved performance when good enough will suffice. That can often make sense when you factor in increased development costs, reduced flexibility/maintainability, and reduced ability to recruit people with the skillset to work on such things.

Then again, performance is often a feature in itself. In some cases it can open whole new areas of potential business. Often times it isn't even particularly hard to achieve, it just requires decent engineering practices.

Unfortunately good engineering practices can be hard to find/hire for, especially among a development community/culture that hasn't had to bother caring about performance for a long time.


As I am writing this comment, my 2017 MacBook is spinning its fans as if it is doing protein folding... and it has like 3 tabs open in Chrome.

And for what? To edit a text box?

There are new capabilities around encryption and encoding, things like H.265 and so on, which of course help with medical diagnostics and such.

I would take C64 software any time; look at this https://www.youtube.com/watch?v=ROr8JhilPhI - it's from 1983.

Imagine what kind of software the people from the 1980s would write if they just had a Raspberry Pi 4 to work with..


On the C64 text was PETSCII [1]. On your Mac in Chrome it's Unicode. Running a Basic Multilingual Plane text editor on the C64 is probably impossible due to RAM constraints. Even with a RAM expansion, and imagining it had a framebuffer, I'm doubtful that the C64 could edit Unicode text at interactive speeds.
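
As a crude sense of scale, assuming even a tiny 8×8, 1-bit-per-pixel glyph for every Basic Multilingual Plane code point (a deliberately generous lower bound, my own back-of-envelope):

    # Bitmap glyph storage for the Basic Multilingual Plane vs total C64 RAM.
    bmp_code_points = 0x10000          # 65,536 code points
    bytes_per_glyph = 8                # 8x8 pixels at 1 bit per pixel
    font_bytes = bmp_code_points * bytes_per_glyph
    print(f"font data: {font_bytes // 1024} KiB vs 64 KiB of total C64 RAM")   # 512 KiB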

In the 1980s only a small portion of the developed world's population used home computers. Today the majority of people use computing devices. Most of them use languages that cannot be represented with PETSCII or ASCII. It's amazing what motivated people can do with low power machines, but let's not forget how going back to the 1980s would discard valuable capabilities as well as bloat.

[1] https://en.wikipedia.org/wiki/PETSCII


Maybe C64 is out, but do you really think we need all the bloat of today to support Unicode text?


People from the 1980s are still alive and they’re not making anything more impressive than anyone else.

These people weren’t magicians only held back by a lack of computer power, they were (again, “are”) regular people trying to maximize their output given their resources.

Let's say you go back and hand a powerful computer to them. What would Mac Paint look like? I'm sure it would have colors (assuming you also gave them a monitor), probably a lot more of those goofy and useless textured fills nobody wants, layers, probably some filters, higher resolutions, and probably simple brush strokes. But do you really think they would manage to implement any of the seriously impressive features that modern Photoshop has? Content aware fill? AI upscaling? Of course not. It would probably take them a long time to get smart selection working in a halfway decent manner.


> And for what?

Honest question here: have you actually tried to find the answer to that question?

As I'm writing this comment, my 2019 Pixelbook Go is quiet. It's a fanless design, so it being quiet is normal, but it is also not running hot. I've got around 30 tabs open, a few terminals, Emacs, and some other stuff running.

So back to my question - have you looked into what is actually going on in your machine? Dust in the fans?


"my 2017 macbook is spinning fans, as ... like 3 tabs open on chrome" Chrome might be your issue. Doing the same on on a 2015 MacbookPro and it's silent... Firefox.


I don't think it's "Luddite sentiment" to ask why one computer feels as productive with the same perceived performance as one that's literally a billion times faster.

I believe the sibling saying that it's programmer productivity is correct. That same programmer can and does now spend a week whipping together a program in Python that took a year under the older constraints. And users expect more out of them, with whizz-bang animations and app-store integration.

But if that same programmer _did_ spend the year instead of the week, what could they do? The quality and performance would probably be a lot better. Unless they depend on externalities that don't also scale up their quality game, like basically anything involving the web.


It’s Luddite in that it rejects modern tools and methods in the belief that there’s an older, more artisanal method of doing work.

If you believe this is the case then you can put your belief to the test. Build software the way you think it should be built and see if users are willing to pay for it.


> If you believe this is the case then you can put your belief to the test. Build software the way you think it should be built and see if users are willing to pay for it.

"users willing to pay for it" is a very bad and simplistic metric because there is way more to user willingness to pay for something than how that something was made (not to mention that it excludes everything the user doesn't pay for) - it may not even have to do with the software itself.

Also the software in question may not even be "free" to use the better approach: imagine, for example, a client for a chat service that allows embedding images, videos and audio in messages but the way the protocol works is for the server to provide those as iframe and/or html content. At that point the client will have to use a browser component in one way or another with all the baggage that entails even if the features themselves (images, videos, audio and text) could be implemented directly by the client.

There is only so much you can do when you have to interface with a world built on tech that doesn't care about efficiency.


I'd argue that most python programs couldn't do the same in a year under older constraints. It's objectively more difficult to write 6502 assembly than it is to write python.

Modern computers have opened programming up to more people by making it simpler but with the tradeoff of more runtime execution cost.


Ugh, this thread made me go down a rabbit hole and I ended up bidding on a C64C on eBay. (I had a C64 as a kid but sold it to be able to afford an Amiga 500. No regrets.) Amazed at how much of a market there still is! Now I need to figure out how to connect it to a modern display, but this same web site has a pretty cool buyer's guide for that kind of stuff.


I'm a bit bummed out that it's closed source - I would've liked to look through the code.


Cool project. But much of that functionality was available with GEOS in 1986.

https://en.wikipedia.org/wiki/GEOS_%288-bit_operating_system...


The author mentions GEOS among those efforts that, and I quote:

> [...] had good intentions but pushed the machine in ways it wasn't designed for, compromising on speed and usability in the pursuit of features available on more powerful computers.

I must be honest and say the statement feels a bit puzzling, since C64OS certainly wasn't something the C64 was designed for...


Maybe they are alluding to the fact that GEOS was using graphics mode for the GUI (which makes drawing the UI slower and requires more memory, thus arguably would require a more powerful machine), whereas C64 OS is using text mode (plus the graphics split mode), which arguably is more suited to the C64.


I remember switching floppy discs rather a lot when trying to use GEOS back in the '80s.

My C64 now has a USB interface 8)


Same. GEOS was really cool, I was blown away by the 80 column mode. But it felt more like a gimmick and I never ended up really using it for anything real. But then again, I was 10 years old and more interested in playing Raid on Bungeling Bay.


Past related threads:

Shared Libraries for C64 OS - https://news.ycombinator.com/item?id=26590376 - March 2021 (1 comment)

Rethinking the Commodore 64 Memory Map (2018) - https://news.ycombinator.com/item?id=20317767 - June 2019 (17 comments)

C64 OS: A Commodore 64 OS with Modern Concepts - https://news.ycombinator.com/item?id=17997911 - Sept 2018 (42 comments)


dang, I see you do this on a regular basis. Is it automated or are you doing this manually?


Kind of in-between. Here's a pointer to past explanations: https://news.ycombinator.com/item?id=29370676


I see lots of instructions on how to set up a system on the website, but nothing which tells me why this software would be useful.


"Useful" in what sense? This is retrocomputing, a hobby. Nothing about it is "useful", it's about playing with old computers and emulators for its own sake.


The first thing on the page is:

> C64 OS has one goal. Make a Commodore 64 feel fast and useful in today’s modern world.

So I guess whatever "useful in today's modern world" means. What does this new OS allow me to accomplish on a C64?


Oh, somehow I missed that sentence. I think it's tongue in cheek to be honest.


Looks to me like a couple things:

File / Disk management. The file manager program shown looks spiffy, and is definitely a nod toward today.

Write utilities and apps that have a modern workflow.

While not much, the included tool kit has the features needed to make applications and utilities people might recognize today.

The author has put quite a bit into it. Nice work.


That would be a good question to ask the author, although, to be fair, he only claimed that it feels useful.


It's not meant to be useful, it's meant to be nostalgic.


It's a GUI OS, not command-line as I expected:

http://c64os.com/c64os/usersguide/ (screen shot)


You can see a bit more of the user interface here: http://c64os.com/c64os/usersguide/userinterface


It's still a text mode GUI, though. C-64 has support for custom character sets, it uses those to have some uncommon symbols.


You might want to have a look at http://lng.sourceforge.net/


I saw part of a John Wick movie on TV last week and recognized the C64s that the crime syndicate used. Seeing them made me smile.


Interesting. I know some have tried to make faster versions of old 8 bit computers from the hardware side instead.

Here's an FPGA-based drop-in replacement for 65C02 that runs at 100MHz: http://www.e-basteln.de/computing/65f02/65f02/

Not tried in a C64 yet, but he did get it working in a Commodore PET.


You can’t just "drop in" a faster 6502 (or 6510) into something like a C64. For one, all the serial i/o routines are software bit-banged so parts of the "ROM Kernal" need to be modified. More importantly, the VIC-II chip is the DRAM memory controller and it is tightly coupled with NTSC or PAL signal timings.

Most accelerator boards for the C64 need quite a bit of glue logic to overcome these difficulties.


It has some of that accounted for, like:

"Upon power-on, the 65F02 grabs the complete RAM and ROM content from the host and copies it into the on-chip RAM, except for the I/O area. Then the CPU gets going, using the internal memory at 100 MHz for all bus accesses except for any I/O addresses"

There's also some soft cores for a Z80 that can work at the stock clock speed, but do more work per cycle, which is interesting.

Edit: Some, not all, things are covered.


The reason why many 8 bit home computers put the memory controller in the graphics chip was to give the graphics subsystem "first dibs" over bitmap data in RAM. If the microprocessor has its own copy of RAM then the graphics subsystem may never update the display unless the cpu "writes through" all memory write operations (resulting in a significant performance slowdown).



If you are interested about the OS: http://c64os.com/c64os/


Man, I really wish people would stop writing JavaScript to load in images only as I scroll to them. It makes web browsing so much slower.


Now if only MSFT would release a new OS that made a PC feel fast and useful! [/JOKE]


I remember the first time I spun up a Linux distro on a PC. The console felt so bloody quick in comparison with the Win98 that was shuffled out of the way to make room for something called ext2. I compiled my first kernel - 1.98something I think - with something called "eggs" (egcs - ooh controversial!) and the text flew up the screen smoothly and fast, really, really fast.

This machine had DOS and W4WG 3.11 before that, and prior to that I had an 80486, and before that an 80286 plus a '287 co-pro with just DOS 3-5ish, so I had a preconception of what streaming text consoles should look like. I'd also used some green or amber screens with something large and complicated behind them. That's what IT looks like to most people (something complicated) - don't forget that! My memory grows a little dim but I think it was a Pentium II box that I first slapped Linux on.

25 years later.

I update my various Linux boxes in a minute or two at most. I update Windows boxes in a few hours at most - normally around 15 mins but several hours is not unknown.


I liked the post, but why not emulate on faster, more portable hardware?


There was an ad for an extreme engineering show on cable decades back that had a dude say something like, "Why would we put a jet engine on a motorbike? Because we can!"


Love the design of the website, despite the difficult readability of the fonts. Simple easy layout, happy vibes, reminds me of the old days.


Lovely.

I do have 2 C64s but, too bad, this isn't open source, so I won't be going anywhere near it.


Commodore DOS was never formally made open source either, was it?


AIUI it was not.

But if you've got a C64, you've got a license for it.

"C64 OS", you would have to buy separately. Whether it would make sense to invest time and effort into writing applications for is debatable.

Personally, as a developer who enjoys working on vintage hardware, I'd rather put that effort into either making my software not require "C64 OS" or writing a suitable, open source replacement for that system.


This is a beautiful landing page for what looks like a beautiful project. Nicely done!



