New Linux port for the Nintendo 64 (kernel.org)
526 points by MegaDeKay on Dec 25, 2020 | 195 comments



> "But why", I hear from the back. Having Linux available makes it easier to port emulators and fb or console games.

> Most importantly, because I can.

Good. All I can say is at least it is something different to what I keep seeing on HN. (React, Rust, Kubernetes, JAMstack, JavaScript, etc.)

This should probably be presented at a conference (CCC, FOSDEM, etc.). Who knows who might be looking at it.


Yes, I love seeing this stuff over the standard JavaScript lib of the month.


It feels like most of the Founder-type people are taking a break from what they usually do at work -- which probably includes checking hacker news. So it's left room for the more interesting things to surface over the "What happened to <javascript framework>" or "How to raise your yearly gains by five dollars" or whatever.

edit: Interesting is a personal taste, and while I have noticed articles on hacker news becoming more repetitive lately, it's probably not fair to put it entirely down to "Founder-type people".


Former founder checking in. Guess I'm still interested in tech, even on holidays.


Do former founders count? ;-)


Well, HN is basically run by a venture capitalist, with founders as the target audience. A certain amount of bias should be expected.


Same, but lisp.

We had a new hire a while back who was huge into HN. It was very hard to rein him in from suggesting Haskell, random frameworks, etc. I've dealt with the "shiny things" mentality many a time, but this was something else. It made the "academic do it right" approach look liberal in terms of risk.

Haskell is cool, but yeah, sorry, keep that out of my prod stack.


Given the number of users using JavaScript, it shouldn't come as a surprise that many posts are about it, because it's relevant to so many.


Don't forget the many ways in which Machine Learning will revolutionize X.


Could it finally revolutionize porting Linux to new architectures?


I never thought about it but one day AI will port itself to another device. That’s a crazy thought.


We'll probably see that happen in professional malware first.


TBH, if I recall correctly, something similar already happened in the past by pure coincidence. I'm pretty sure somebody somewhere is already experimenting with self-porting AI malware.


The only instance somewhat like this that I am aware of is that two malware samples sort of 'recombined' accidentally.

One was a worm that packaged itself up and sent itself around, the other did some sort of injection or binary patching and happened to piggyback onto the worm.


So basically one virus infected another virus in such a way that they became symbiotic? Cool.


“AI” malware? Is this like calling any internet enabled thing “smart”? Any cool features are AI or is this something else?


It refers to things like software that reads documentation, reverse engineers code running on the target system, works backwards from the target CPU manual and compiler toolchain to produce object code that runs on that CPU, and generally does what a human does when porting code to another system (possibly one without a compiler), while taking advantage of flaws on the target.

I think we're halfway there already, given the number of tools these days for decompiling and scanning code for security vectors, the increasing use of fuzzers to discover unintended or undocumented features (including hardware features like undocumented instructions), and increasingly clever and subtle side-channel attacks for finding device state that is not supposed to be visible to software.


Also, if it can happen by accident then I don't really think it should be called "porting". You don't find that Linux-centric malware has "accidentally" "ported" itself to Windows, just as x86 machine code doesn't suddenly execute on ARM chips.

The difference between "porting" and "modifying" has (at least in everything I've read in "hacker" culture) always been that porting is rarely a trivial act.


I suppose in theory malware that targets dev tools and source code to infect compiled code could end up getting ported along with its target. More hitching a ride than porting itself, though.


This is how you get Skynet.


We'll surely need to pivot to a blockchain solution first.


Now you’ve got two problems.


I realize you’re joking, but it’s possible. Learning compilation targets is an active area of compiler research with some promising results in the past year or two.


machine-learning powered linux ports?


And possible (not practical) uses for crypto.


> and fb or console games.

Btw., what are "fb games"?



Possibly a typo of "hb" for "homebrew"?


I find this to be more likely than “frame buffer”.


In a Linux context? "fb and console games" = games that run on either the framebuffer or the text console


It is indeed frame-buffer as suggested by many and not Facebook.

See https://www.phoronix.com/scan.php?page=news_item&px=Nintendo... for more information.


Farmville, coming to an N64 near you


Frame buffer


Facebook.

Wow, four people guessed (or trolled) framebuffer. Interesting.


It's absolutely framebuffer here. Games that draw directly to the framebuffer. I assume they meant sprite-based games that don't require 3D drivers, for example.
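For anyone unfamiliar, "drawing directly to the framebuffer" on Linux usually means the classic fbdev route: open /dev/fb0, mmap it, and write pixels straight into video memory. A minimal sketch (hypothetical example code, assuming fbdev is enabled in the kernel; no double buffering or pixel-format handling):

    /* Minimal fbdev sketch: open /dev/fb0, map it, write pixels directly.
       Hypothetical example, not from the N64 port itself. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_fix_screeninfo fi;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &fi) < 0) { perror("ioctl"); return 1; }

        uint8_t *fb = mmap(NULL, fi.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        for (size_t i = 0; i < fi.smem_len; i++)
            fb[i] = 0x80;   /* "draw": fill video memory with a mid grey */

        munmap(fb, fi.smem_len);
        close(fd);
        return 0;
    }

That's essentially the whole "graphics stack" such games need, which is why they're a plausible target for a machine like this.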


Facebook makes absolutely zero sense in the context of the sentence.


One could argue Facebook makes zero sense in any context.


Why did you mention Facebook? That makes zero sense


What about you? :p


Faecebook? What's that?


I goofed, indeed.


+1 HN point for a complaining top comment in a thread where there's literally nothing to complain about.


Have you noticed any change in those patterns since the Christmas vacations began around Dec 19th?


> Most importantly, because I can.

Are the Linux tests deterministic, such that the tests for this change won't run unless something directly affects it? If not, then I'm not a huge fan of these commits, which seem more about vanity.


Did you even read the post? The author said:

RFC because I'm not sure if it's useful to have this merged.


To be clear, this is a tech demo; we're long past the point where running anything useful on a system with graphics and 4MB (8MB with expansion) of RAM under Linux is reasonable. You can barely get the kernel itself that small. There's a reason why even cheapo consumer routers have at least 64MB of RAM.

So it's for fun; you won't be able to run any existing "normal" Linux emulators and you'd be better off porting stuff to run on bare metal than trying to squeeze it in under Linux on an N64. Cool project, fun, great that it's getting upstreamed... just keep in mind it's not really useful for end users for anything (unlike, say, Wii Linux, where with ~80MB of RAM you can actually get somewhere, though still with many limitations).

OTOH, as other people have mentioned, it makes a good test case for N64 emulators. If they can run this, that's good validation.


A microkernel would be a good fit for such constrained environments.


You assume microkernel means small, which is a bad assumption. Microkernels can be fat too; there are dozens of monolithic kernels smaller than the average microkernel.


There are always exceptions to the rule. As Wikipedia puts it: "In terms of the source code size, microkernels are often smaller than monolithic kernels. The MINIX 3 microkernel, for example, has only approximately 12,000 lines of code".

But to satisfy your definition, I am referring to small microkernels.


The kernel is small, because all the extra stuff is outside the kernel. It's still stuff that exists and consumes RAM.


No, because e.g. the monolithic linux kernel is filled with many things that you do not need (but others do) in very constrained environments.

If you have a very small microkernel, then you can very explicitly only install the software in user space that you need, without having them pre-installed via kernel.


Really, the term "micro" is about size, not functionality.

The fact is, the Linux kernel can be compiled with features to suit constrained environments. It is just that these features are determined at build time, not run time.


Yes, that is true but there are limits as I've explained here: https://news.ycombinator.com/item?id=25544271

Also quite interesting this question already came up on Quora https://www.quora.com/What-is-the-smallest-in-size-Linux-ker... and also on many other websites: https://superuser.com/a/370588


Your argument doesn't make any sense. You are saying microkernels are more modular in feature selection than monolithic kernels. That is not an inherent property of microkernels, nor is it true in practice. Nothing says a monolithic kernel can't have much finer-grained build-time feature control than a microkernel where only entire services can be disabled (by not running them). Any reasonable monolithic kernel targeting embedded systems, including Linux but also anything smaller, will have a ton of build-time options to slim down the system as needed.

A microkernel with feature control at the process/service level would actually have much worse feature control than Linux, because Linux build-time configuration options are often quite a bit finer-grained than that. For example, you can build Linux for uniprocessor systems, which makes global build-time changes that disable certain kinds of locks, making the kernel smaller globally. A microkernel may or may not have the same build-time feature; it is not guaranteed to.

As others have pointed out, microkernels have performance overhead and that reason alone makes them unsuitable for an N64. On a game console you need all the performance you can get.


The problem is that you can only slim Linux down so much. If the tiniest possible Linux does not fit on your chip, then it simply does not fit; end of story for Linux in this case. However, there are tiny microkernels, e.g. with less than 50,000 lines of C code, that can fit on such a device and still leave room for a small application (e.g. also under 50,000 lines of C) that fits alongside it. That is all I want to say. And yes, I agree with all the other points/drawbacks people mention here, including performance.


OS design methodology doesn't really have anything to do with its suitability for constrained environments. There are plenty of small monolithic and microkernel designs around.


I would argue that a microkernel would be a good fit because: "Traditional operating system functions, such as device drivers, protocol stacks and file systems, are typically removed from the microkernel itself and are instead run in user space." - https://en.wikipedia.org/wiki/Microkernel

You want to throw out as much functionality as possible to get your kernel down in size so that it fits into e.g. 1 MiB of memory.


Those operating system functions still need to exist and still consume RAM. It doesn't matter whether they run in userspace or kernel space.

If you don't need those functions, you can remove them from kernel space in a monolithic kernel. It's possible to build Linux without TCP/IP support and with no on-disk filesystems.
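For a concrete illustration, here's a hedged sketch of a kconfig fragment along those lines (exact symbols vary between kernel versions, and "make tinyconfig" is the usual smallest starting point):

    # Drop the networking stack and the block layer / on-disk filesystems:
    # CONFIG_NET is not set
    # CONFIG_BLOCK is not set
    # Expose the more aggressive trimming options and optimise for size:
    CONFIG_EMBEDDED=y
    CONFIG_CC_OPTIMIZE_FOR_SIZE=y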


Yes, it is possible to strip subsystems out of the Linux kernel, but there are limits. For example: https://weeraman.com/building-a-tiny-linux-kernel-8c07579ae7...

They achieved a compressed Linux kernel size of just 749 kB, which additionally requires at least 12 MB of RAM to boot. This is very impressive, but there are constrained systems with 1 MB or less of memory.


That’s a different argument to the one you opened with. The OP had already made the point that Linux isn’t really practical. But that doesn’t mean that a micro kernel OS would be any better than GNU/Linux. Ultimately you still have the same data stored in RAM (as others have said).

What nobody has (yet) mentioned is that micro kernels typically run slower than monolithic kernels, because crossing between user space and kernel space is expensive compared to running everything in kernel space. This overhead would kill any performance you might get from a system with the hardware specs of an N64. So a monolithic design is absolutely the way to go (in fact, that’s how N64 games are actually written -- one monolithic code base with shared memory).

Pragmatically the only way to write software for the N64 is to go bare metal. As the OP said, this Linux port is a fun technical challenge but no OS would be practical (that is unless you’re just using it as a boot loader for other software).


> That’s a different argument to the one you opened with.

"A microkernel would be a good fit for such constrainted environments."

No, that's exactly my argument. My argument is that the Linux kernel (even if you strip everything out and create the tiniest possible Linux kernel) is still too big for many constrained environments, e.g. an old router with 512 KiB of memory (there are many devices that cannot run Linux). However, it is possible to run a small microkernel on such a router. That's my point, nothing more, nothing less, and that's why I consider a microkernel a good fit.

Then people argued that microkernels can be as big as a Linux kernel and that you can strip the Linux kernel down in functionality, and I agreed with them, but this does not contradict the point I made.


> "A microkernel would be a good fit for such constrainted environments."

As the GP said, micro kernels have a performance overhead swapping data between rings. That overhead would bite hard on something running a NEC VR4300 clocked at 93.75 MHz.

A monolithic kernel is the way to go. Just not Linux specifically.

> My argument is that the linux kernel (even if you strip everything out and create the most tiny linux kernel) is still too big for many constrained environments

That was the OP's point. Yours was that a micro kernel would be a better fit. It would not.

> a old router with 512 KiB memory (there are many devices that cannot run linux)

Those devices wouldn't be running code written to programmable chipsets. They wouldn't be running an operating system in the conventional sense. Much like laumars point about how games are written for the N64.

Also nobody is suggesting Linux runs everywhere. We are just pointing out that you massively misunderstand how micro kernels work (and embedded programming too by the sounds of your last post).

By the way, you wouldn't find any routers running a meagre 512KB of RAM. That wouldn't be enough for multiple devices connected via IPv4, never mind IPv6 and a wireless AP too. Then you have the firewall UI (typically served via HTTP), a DHCP & DNS resolver (both of which are usually served by dnsmasq on consumer devices) and likely other stuff I've forgotten offhand -- I have some experience building and hacking routers :)

> However, it is possible to run a small microkernel on such a router, that's my point, nothing more nothing less, that's why I consider using microkernel a good fit.

Most consumer routers actually run either Linux or some flavour of BSD. All of which are also monolithic kernel designs. Some enterprise gear will have their own firmware and from what I've seen from some vendors like old Cisco hardware, those have been monoliths too.

I know micro kernel has the word "micro" in it and the design requires loading the bare minimum into the kernel address space of the OS, but you're missing the bigger picture of what a micro kernel actually is and why it is used:

The point of a micro kernel isn't that it consumes less memory. It's that it separates out as much functionality from the kernel as it can and pushes that to user space. The advantages that brings are greater security with things like drivers (not an issue with the N64) and greater crash protection (again, not really an issue with the N64). However, that comes with a performance cost and code complexity, and any corners you do cut to try to bring those costs down ultimately end up eroding any real-world benefits you get from a micro kernel design.


> By the way, you wouldn't find any routers running a meger 512KB of RAM.

This is not true in general, although it is true for modern devices. There are older router models that cannot run Linux. A few years back I unsuccessfully tried to flash a very minimal <1 MiB Linux onto an old TP-Link router. I was able to flash the ROM but couldn't boot, because there was not enough memory available; it wasn't 512KB but only a few MiB IIRC, still not enough.

> We are just pointing out that you massively misunderstand how micro kernels work

If someone points out that the Linux kernel can be reduced in size and that there are some big microkernels, then I do agree and there is no misunderstanding, as far as I can see. The same holds true for the performance argument.

> As the GP said, micro kernels have a performance overhead swapping data between rings.

I agree that performance will be problematic, but this does not render microkernels useless in general for constrained devices. See for example: https://technik.community/2014/07/micro-kernel-the-best-choi... https://www.scirp.org/html/3-9301550_27477.htm


> This is not true in general. There are older models that which cannot run linux. A few years back I uncessfully tried to flash a very minimal <1 MiB linux on an old TP link router. I was able to flash the rom but I couldn't boot, because there was not enough memory available.

How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series, but if you're talking about a ZyNOS-based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though. Even the ISP routers here in the UK are generally a step up from those, particularly these days, when households are expected to have kids playing online games, streaming Netflix and such like (even before COVID-19 hit, ISPs had been banging on for ages about how their routers let you do more concurrently). And with TP-Link, the Archer series are all Linux based or Linux compatible and they start from ~£50. So you'd be really scraping the barrel to find something that wasn't these days.

> I agree that perfromance will be problematic, but this does not render microkernels useless in general for constrained devices.

Any OS designed around kernels, memory safety etc would be useless in general for constrained devices. This isn't an exclusively Linux problem. On such systems the whole design of how software is written and executes is fundamentally different. You don't have an OS that manages processes nor hardware, you write your code for the hardware and the whole thing runs bare metal as only one monolithic blob (or calls out to other discrete devices running their own discrete firmware like a circuit). That's how the N64 works, it's how embedded devices work. It's not how modern routers work.

In 2020 it's hard to think of a time before operating systems, but really that is the way the N64 works. Anything you run on there will eat up a massive chunk of resources if it's expected to stay in memory. So you might as well go with a tiny monolithic kernel and thus shave a few instructions from memory protection and symbol loading, not to mention the marginally smaller binary sizes from skipping file system metadata, binary file format overhead and other pre-logic initialisation overhead (such as you get when compiling software rather than writing it in assembly). If you're going to those lengths, though, laumars' point kicks in: you're better off just writing a "bootloader" menu screen rather than a resident OS.


> How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series but if you're talking a ZyNOS based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though.

This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices and dated models here. I cannot recall the model number, but I think we both agree that routers that cannot run Linux exist, although they are not very common (anymore).

> Any OS designed around kernels, memory safety etc would be useless in general for constrained devices.

How about QNX then?

"QNX is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. QNX was one of the first commercially successful microkernel operating systems. As of 2020, it is used in a variety of devices including cars and mobile phones." - https://en.wikipedia.org/wiki/QNX

They are "aimed primarily at the embedded systems market", their latest release is from "7.1 / July 2020; 5 months ago" and they are operating their business model since 1982.


> This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices here.

So not just low-end, but a decade-old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.

> How about QNX then?

QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would* . The published minimum requirements for Neutrino 6.5 (which is already 10 years old) was 512MB. Double that if you want the recommended hardware specification.

Sure, if you want to strip out graphics libraries and all the other stuff and just run it as a hypervisor for your own code you could get those hardware requirements right down. But then you're not left with something POSIX compliant, not even useful for the N64. And frankly you could still get a smaller footprint by rolling your own.

The selling point of QNX is a RT kernel, security by design and a common base for a variety of industry hardware. But if you're writing something for the N64 then none of those concerns are relevant (and my earlier point about a resident OS for the N64 being redundant is still equally valid for QNX).

Also smart phones are neither embedded nor "constrained" devices. I have no idea what the computing hardware is like in your average car but I'd wager it varies massively by manufacturer and model. I'd also wager QNX isn't installed on every make and model of car either.

* I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices. There might even be a "UNIX" for the C64. But it's a technical exercise, like this N64 port of Linux. It's not a practical usable OS. Which is the real crux of what we're getting at.


> So not just low-end, but a decade old device that was already low-end upon it's release. That's hardly a fair argument to bring to the discussion.

Fair enough; I agreed that I should have come up with a better example. But before going down another rabbit hole, just replace "router" with any modern embedded chip you like that cannot run Linux, as an example.

Regarding QNX, I don't know their current requirements but what impresses me:

"To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk."

> I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices.

Yeah, I think here's an interesting overview of some: http://www.microkernel.info

I wonder how many of them are POSIX compliant (or partially) and what their requirements are. GNU/Hurd certainly is.


>QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would* . The published minimum requirements for Neutrino 6.5 (which is already 10 years old) was 512MB. Double that if you want the recommended hardware specification.

An older QNX ran from a floppy with only a few MB. With a GUI and a browser with limited JS support.

>QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would* .

You could run Linux under a TTY for i386 with 2MB with some swap about 24 years ago.


> An older QNX ran from a floppy with very few MB. With GUI and a browser with limited JS support.

It did, and it was a very impressive tech demo... but it's not representative of a usable general-purpose OS. Chrome or Firefox alone comes in at >200MB, so there is no way you'd get a browser that works with the modern web to fit on a 1.44MB floppy. And that's without factoring in fonts, drivers, a kernel and other miscellaneous userland.

The QNX demo was a bit like this N64 demo. Great for showing off what can be done but not a recommendation for what is practical.

> You could run Linux under a TTY for i386 with 2MB with some swap about 24 years ago.

That's still double the memory specification, and yet Linux back then lacked so much. For example, Linux 24 years ago didn't have a package manager (aside from Debian 1, which had just launched, and even then dpkg was very new and not entirely reliable). Most people back then still compiled stuff from source. Drivers were another pain point: installing new drivers meant recompiling the kernel. Linux 1.x had so many rough edges and lacked a great deal of code around some of the basic stuff one expects from a modern OS. There's a reason Linux has bloated over time and it's not down to lazy developers ;)

Let's also not forget that the Linux Standard Base (LSB), the standard distros follow if they want Linux and, to a larger extent, POSIX compatibility, wasn't formed until 2001.

Linux now is a completely different animal to 90's Linux. I ran Linux back in the 90s and honestly, BeOS was a much better POSIX-compatible general purpose OS. Even Windows 2000 was a better general purpose OS. I don't think it was until 2002 that I finally made Linux my primary OS (but that's a whole other tangent).

I mean, we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.


> I mean we could have this argument about how dozens of ancient / partially POSIX-complient / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.

There are modern microkernels that are POSIX compliant and have a much lower footprint than Linux. That's not the problem. I think the most prominent issue people point out here is performance. However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance, and it's always a trade-off; both solutions can be found and both solutions are valid.


> There are modern microkernels that are POSIX compliant and have a much lower footprint than linux.

There are... but they're not < 1MB. Which was the point being made.

> I think the most prominent issue, people points out here is performance.

That's literally what I said at the start of the conversation!

> However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on an constrained device costs performance, and it's always a trade-off, both solutions can be found and both solutions are valid.

Show me a device with the same specs as the N64 which runs an OS and I'll agree with you that both solutions are valid. The issue isn't just memory, it's your CPU clock speed. It's the instructions supported by the CPU. It's also the domain of the device.

Running an OS on the N64 would never have made sense. I guess, in some small way, you could argue the firmware is an OS in the same way that a PC BIOS could be. But anything more than that is superfluous, both in terms of resources used and any benefits it might bring. But again, if it's a case of "both solutions are valid" then do please list some advantages a resident OS would have brought. I've explained my argument against it.

Let's take a look at what was happening on PCs around the time of the N64's release. Most new games were still targeting MS-DOS and largely interfaced with hardware directly. In a way, DOS was little more than a bootstrap: it didn't offer any process management, the only memory management it did was providing an address space for the running DOS application, and it didn't offer any user-space APIs for hardware interfaces -- that was all done directly. And most of the code was either assembly or C (and the C was really just higher-level assembly).

Fast forward 4 years and developers are using OpenGL, DirectX and Glide (3dfx's graphics library which, if I recall correctly, was somewhat based on OpenGL) in languages like C and C++, but instead of writing prettier ASM they're writing code based around game logic (ie abstracting the problem around human-relatable objects rather than hardware schematics). It was a real paradigm shift in game development. Not to mention that consoles shifting from ROM cartridges to CD posed a few new challenges: 1) you no longer have your software exist as part of the machine's hardware; 2) you have now made piracy a software problem (since CD-ROM drives are a standard bit of kit in most computers) rather than a hardware one (copying game carts required dedicated hardware that wasn't always cheap). Thankfully by that time computer hardware had doubled a few times (Moore's Law), so it was becoming practical to introduce new abstractions into the stack.

The N64 exists in the former era and the operating system methodologies you're discussing exist in the latter era. Even the constrained devices you're alluding to are largely latter-era tech, because their CPUs are clocked orders of magnitude higher than the N64's and thus you don't need to justify every instruction (it's not just about memory usage); in many cases an OS for an embedded device might just be written as one binary blob and then flashed to ROM, effectively running like firmware.

It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with. But I'd suggest maybe looking at programming some games for the Atari 2600 or Nintendo Game Boy. That will give you a feel for what I'm describing here.


>It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with.

I lived through that; the first PC I used ran DOS with 5.25" floppies.

On 3dfx, it was a mini-GL in firmware, low level. Glide somehow looked better than the later games with DirectX, up until DirectX 7, when games started to look a bit less "blocky".

>For example Linux 24 years ago didn't have a package manager

Late-90s Linux is very different from mid-90s Linux. Slackware in 1999 was good enough, and later, with the 2.4 kernel, it was on par with Win2k; even Nvidia drivers worked.

And I could run even some games with early Wine versions.


In fairness, you did say “24 year old Linux” which would put it in the mid 90s camp rather than late 90s.

I wouldn’t agree that Slackware in 2000 was on a par with Windows 2000 though. “Good enough”, sure. But Linux had some annoying quirks and Windows 2000 was a surprisingly good desktop OS (“surprising“ because Microsoft usually fuck up every attempt at systems software). That said, I’d still run FreeBSD in the back end given the choice between Windows 2000 and something UNIX like.


Nintendo's actual devices use microkernel-based designs.

We are way past the usual FUD against microkernels.


It’s believed Nintendo’s consoles use a micro kernel, but that’s a result of hacks and reverse engineering. However, Nintendo themselves have given limited information. While I think your point is more likely true than not, the caveat I’m making is still worth noting; ie things aren’t as certain as you’re boldly claiming.

Now on to your point about the complaint the GP and I made being FUD; it’s really not. The closest any micro kernel has gotten to a monolithic kernel’s performance was L4, and those benchmarks were running Linux on top of L4 vs bare-metal Linux. While the work on L4 is massively impressive, there is still a big caveat: the actual workload was still effectively run on a monolithic kernel, with L4 acting like a hypervisor. So most of the advantages that a micro kernel offers were rendered moot, and there was still a small performance hit for it.

Why doesn’t that matter for the Nintendo Switch? Probably because any DRM countermeasures in user space would have a bigger performance penalty and a micro kernel offers some protections there as part of the design. That’s just a guess but as I opened with, Nintendo are quite secretive about their system software so it’s hard to make the kind of conclusive arguments you like to claim.


You just need to get the information the same way I did,

https://developer.nintendo.com/

Other than that, I can only point to the CCC-related talks.

Also, given the amount of hypervisor and container baggage that gets placed on top of Linux to make up for the lack of microkernel-like safety, it doesn't really matter if it happens to win a couple of micro-benchmarks.


Nintendo don’t publish detailed schematics of their systems to the level that you’re claiming. Not even on their developer portal. (I’ve had a developer account with Nintendo since the Wii days.)

And with regards to your point about Linux vs micro kernels, it does make a massive difference when you’re talking about hardware like the N64, which wouldn’t want any of the features that micro kernels excel at, and where every instruction wasted is going to cost the user experience heavily. This point was made abundantly clear at the start of the conversation as well.

Look, I have nothing against micro kernels. There’s an architectural beauty to them which I really like. It’s the functional programming equivalent of kernel design. But pragmatically it wouldn’t be your silver bullet in the very specific context we were discussing (re N64). And to be honest I’m sick of you pulling these pathetic straw man arguments in every thread you post on.


laumars, I think what I'm missing from you is why microkernels are such a bad and horrible idea and, more importantly, why Nintendo itself is mistaken if they have used them on their devices. Otherwise, I still think they are a good fit.


They’re not a bad and horrible idea. I never once said that. Micro kernels are, in my opinion, the future of OS development because they offer a bunch of guarantees which are much harder to achieve with a monolithic kernel: memory safety, stability (eg a segfault in a driver doesn’t bring down the entire kernel), and so on and so forth.

The problem with micro kernels is that abstraction isn’t free. That’s less of an issue with modern hardware running modern workloads, because you’d need to put that memory safety in regardless of the kernel architecture, and chips these days are fast enough that the benefits of security and safety far outweigh the diminishing cost in performance. However, on the N64 you don’t need any of the benefits that a micro kernel offers while you do need to preserve as many clock cycles as you can. So a micro kernel isn’t well suited to that specific domain. The case would be different again for any modern low-footprint hardware, because that would still be running CPUs clocked an order of magnitude higher, and a modern embedded system might need to take security or stability concerns more seriously than an air-gapped 90s game console.

In short, micro kernels are the future but the N64, being a retro system, needs an approach from the past.

This is why it doesn’t help how modern and 90s hardware have been conflated as equivalent throughout this discussion.


> However on the N64 you don’t need any of the benefits that a micro kernel offers while you do need to preserve as many clock cycles as you can. So a micro kernel isn’t well suited for that specific domain

How come Nintendo decided to use them (according to reverse engineering finds)? If they are not suited, then Nintendo should know that, right?


I’ve answered this question probably half a dozen times already now....

The N64 doesn’t run any OS. It’s just firmware that invokes a ROM which runs bare metal.

The Switch, however, does have an operating system.

There is around 20 years’ difference between the two game consoles. That’s 20 years of Moore’s law. 20 years of consumer expectations of faster processors and fancier graphics. And 20 years of evolution in developer tooling and thus developers’ expectations.

You cannot compare the two consoles in the way you’re trying to. It’s like comparing a 1920s racing car to a 2020s F1 car and asking why they are so different. Simply put: because technology has advanced so much in that time it’s now possible to do stuff that wasn’t dreamt of before.


Ah okay, I somehow thought that Nintendo had used microkernels on other devices too, not just the Switch. The Nintendo Switch is certainly not a constrained device.


It’s theoretically possible they may have done so on other devices too. It’s believed the Switch system software is derived from the DS system software. I’ve not seen any breakdowns of what kernels run in the DS or on the Wii family of devices either. But they’re still orders of magnitude more powerful than the N64 too.

I don’t think there’s much to be gained in speculation about proprietary operating systems running on newer hardware though.


No one forces you to read or reply to them, so whatever.


Even if unconfirmed, the fact that Nintendo might use microkernels (according to reverse engineering finds) shows their massive potential for constrained devices. Although certainly not for performance reasons.


> Even if unconfirmed, I think that Nintendo might use Microkernels according to reverse engeneering finds

That was literally what I said :)

> shows their massive potential for constrained devices. Although certainly not for performance reasons

Games consoles are about as far removed from a constrained device as you could possibly get.


> Games consoles are about as far removed from a constrained device as you could possibly get.

It seems to me that you always ignore low-end devices and very old devices.

If we are talking about a PS5, then yes, this and similar devices are not very constrained; even a full-blown Kubuntu might run on some.

But, again, there are low-end gaming devices with a tiny black-and-white screen for 10 dollars, and old gaming hardware with very tight constraints. The N64 is certainly one of those constrained old gaming devices.


The N64 wouldn’t have been considered “constrained” when it was new though. To be honest it’s not really constrained even now, not compared to the sort of hardware you were discussing earlier. And it’s rather disingenuous how you keep rocking back and forth between current-generation consoles and 20+ year old tech as if it’s all current hardware. It makes it rather hard to reply to your points when the goal posts constantly get shifted.

PS5 would easily run Linux considering the PS3 had a few Linux distros ported to it (back when Sony endorsed running Linux on their hardware via the “Other OS” option, which they later removed). Linux is pretty lightweight by modern hardware standards anyways. It’s just not suitable for every domain (but what OS is?)

On the topic of consoles running Linux, I’m pretty sure I have a CD-R somewhere with Linux for the Dreamcast. That was the era when consoles really started to converge on a modern-looking software development approach.


> And it’s rather disingenuous how you keep rocking back and forth between current generation consoles

I never did. I never mentioned current-generation consoles, not even implicitly. I always talked either about the N64 or about (gaming) devices that are constrained and cannot run Linux.


Except you did:

“The fact that Nintendo might use microkernels [in the Switch] (according to reverse engineering finds) shows their massive potential for constrained devices.”

Maybe you hadn’t grokked that pjmlp was talking about the Switch (Nintendo’s current generation console) rather than the N64?

Either way, my other comment[0] also applies:

[0] https://news.ycombinator.com/item?id=25559670


Except that you inserted "[in the Switch]" from your own imagination; it's simply not there in my original comment.

> Nintendo actual devices use microkernel based designs. We are way past the usual FUD against microkernels.

Regarding pjmlp's post, yes, that's true; it wasn't clear (and still isn't) to me from his post that he was specifically speaking about the Switch when referring to "devices".


> Execpt that you entered "[in the Switch]" from your own imagination and it's simply not there in my original comment.

I know it’s not there in your original comment; that’s why it was inside square brackets. That’s a standard way of including context in a quote that would otherwise lack said context. You see it in newspapers and other publications. This isn’t some weird markup I’ve just invented, and it’s definitely not a figment of my imagination, because the post you were replying to was about the Switch.

> Regarding pjmlp post, yes that's true, it wasn't clear (and still isn't) to me from his post, that he specifically speaks about the Switch when referring to "devices".

You’re right, it wasn’t explicit. My apologies there.


> I know it’s not there in your original comment [and I'm sorry that I have abused the square brackets in such a way, that it changes the meaning] ... My apologies there.

[No problem.]


I wasn’t changing the meaning though. You were replying to a post about Switch. It’s not my fault you can’t grasp enough of this stuff to hold an intellectual discussion.


> Nintendo actual devices use microkernel based designs

That's very interesting, can you provide a source?



This is, of course, cool. No doubt. I have an EverDrive 64 and I’ll probably test this out later today or tomorrow.

I am skeptical that this would be the preferred way to port emulators or graphical games. You’re not getting a budget SGI workstation out of this, because the OS kernel itself is only a small part.

The N64 is built around a chip called the Reality Coprocessor, or RCP. This contains the RSP, a stripped-down MIPS CPU core with a fixed-point SIMD vector unit, and the RDP, a rasterization engine that does simple trilinear texture interpolation and color blending. This is the hard part of programming the N64, and it’s not something that’s addressed by swapping out the OS kernel.

There are… a number of major challenges you will still have to face if you want to make homebrew software for the N64, unless you are okay with just having something on the CPU and writing to the framebuffer.


I don't think anyone is claiming that the N64 is finally unleashed with this, or anything.

It will knock down an entry barrier for some people, though, and there's no harm in that at all.


> I don't think anyone is claiming that the N64 is finally unleashed with this, or anything.

Just trying to temper people’s expectations.

> It will knock down an entry barrier for some people, though, and there's no harm in that at all.

To be honest—I don’t think this is lowering the barrier of entry to N64 development much. Those are the expectations I’m trying to temper here. If you want to develop for N64, you’re going to go through a lot of fuss getting an EverDrive 64 or a similar alternative, setting up an accurate emulator like CEN64 (the popular emulators are not suitable for development), getting toolchains running on your development system, etc.

I think some people have equated “Linux has been ported to system X” as “development for system X is now solved”, when Linux is only a small part of the solution, and for smaller systems (like the N64, which has only 4 MB RAM base), Linux is probably not your kernel of choice anyway.

The Nintendo 64 development scene would definitely benefit from more people pitching in and doing tools development. This is a good time to do it, there are a lot of gaps ready to be filled, and the number of people doing N64 development has increased quite a bit over the past couple years.


> This is the hard part of programming the N64, and it’s not something that’s addressed by swapping out the OS kernel.

On the other hand, this might massively improve the turnaround time on RSP programming (which is something I've actively been trying to learn), since it'll be that much easier to edit and run microcode without having to bake entirely new ROMs in the process: I'd just need a serial console (or even a framebuffer console on top of whatever the RSP is rendering) and an assembler (or, at the very least, something to turn hexadecimal input into binary data to DMA over - we're only talking 4K each of code and data here, after all).
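To make that concrete, here's a rough userspace sketch of the "hex in, poke it over" idea (my own assumption of how it could be done, not the author's tooling): map the RSP instruction memory through /dev/mem at its usual physical address and fill it with 32-bit words read as hex from stdin. It assumes the classic N64 map (SP_IMEM at 0x04001000, 4 KiB) is actually exposed that way, and it skips halting the RSP and kicking it off afterwards:

    /* Hypothetical sketch: push RSP microcode into IMEM from Linux userspace.
       Assumes /dev/mem access and the usual N64 physical memory map; real code
       would halt the RSP first and use the SP DMA registers properly. */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SP_IMEM_PHYS 0x04001000u  /* classic N64 map; an assumption here */
    #define SP_IMEM_SIZE 0x1000u      /* 4 KiB of RSP code */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *imem = mmap(NULL, SP_IMEM_SIZE,
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, SP_IMEM_PHYS);
        if (imem == MAP_FAILED) { perror("mmap"); return 1; }

        uint32_t word;
        size_t i = 0;
        while (i < SP_IMEM_SIZE / 4 && scanf("%" SCNx32, &word) == 1)
            imem[i++] = word;          /* one RSP instruction per 32-bit word */

        munmap((void *)imem, SP_IMEM_SIZE);
        close(fd);
        return 0;
    }

Even if the port ends up wanting a proper driver for this, the point stands: with a shell running on the console, microcode iteration no longer needs a full ROM rebuild.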


This completely blows my mind, but also seems a natural fit considering the SGI lineage of the N64. It's like a super budget IRIX workstation ;P


But the N64 only has 4MB of RAM; I think a GUI would be beyond its capabilities. Still a cool project though.


Fascinatingly, quite the opposite! For example, the original Macintosh had only a small fraction of that to work with, at 128 Kilobytes![1]

Certainly a modern GUI a la KDE, Gnome and friends would be well outside of its abilities, but a functional GUI is possible on a shockingly small amount of memory!

[1]: https://en.wikipedia.org/wiki/Macintosh_128K


It was IRIX, not Linux, but UNIX GUIs were memory hungry (more so than their Windows counterparts).

The SGI Indy, which could be had with an R4000 CPU, came with 16MB base.


And? People ran FVWM and rxvt on 4MB of RAM for i386. Slow, but with 8MB it was usable, and with 16MB FVWM ran much faster than CDE itself, not to mention rxvt vs dtterm or even xterm.


IIRC from a talk a few years ago, Xorg can be stripped down to around 600KB, so you could probably have a minimal X environment.


TinyCore (on the frontpage a while ago) fits an OS with a FLTK/FLWM desktop in 16 MB of storage⁰, but it requires a bare minimum of 46 MB of RAM to boot (regardless of swap space, which it recommends along with 128 MB of RAM).¹

OTOH KnightOS² (not Linux) has a rudimentary (obviously not Xorg) GUI that IIUC runs on TI-73 series graphing calculators with 25 KB of RAM.

⁰http://tinycorelinux.net/welcome.html ¹http://tinycorelinux.net/faq.html#req ²https://knightos.org/


Tiny X existed and Basic Linux ran in those specs:

https://distro.ibiblio.org/baslinux/


Yes, and also GEOS running in a Commodore 64


Or MirageOS on a TI-83


I don't see why a modern gui should be impossible. It's not like flat design with rounded corners is amazingly complicated. One issue could be high-res assets, but with some codegolfed vector graphics I think you could get something nice looking.


> codegolfed vector graphics

IIRC the Haiku folks had a format for that[1].

[1]: https://en.wikipedia.org/wiki/Haiku_Vector_Icon_Format


OK, I should have clarified that: I meant available Linux GUIs, in reference to the OP and IRIX. You'd have to create something from scratch. I don't think you could cram a GUI on par with IRIX into 4MB.


FVWM ran on that. Most people in the mid-90s ran X with 8-16MB.


The Expansion Pak is another 4MB of RAM, so that'd bring you to 8 total.

And Windows 95 only required 4MB, so it should be within the realm of possibility.


I can run X in 4 MB on a VAXstation 2000 in Ultrix or Athena-4.3BSD. But a usable Linux 5.10 system in 4 or even 8 MB? That's what I find hard to believe.


Look around for GEOS from Berkeley Systems. It was a graphical OS for the Commodore 64/128. I was using this thing in 1986.


I remember GEOS, but when I got my Commodore 128, it was only a couple of months before I discovered the "bigger computers." I didn't have my 128 for very long at all, but I had a Vic 20 and C64 before that. I thought the 128 was way better than my C64, but way too late. I wasn't much into video games, so that didn't last long. Chronology thing, I guess.

Berkeley Systems, of course, is known for BSD.


The machine can render 3d worlds but a simple GUI is too much?


A large operating system kernel + GUI is too much. You're not loading NintenDOS alongside Mario 64.


Win3.x would be fine with 4MB of RAM, and it's (just barely) enough for Win95 too. But those OSs were definitely optimised for low resource consumption far more heavily than Linux.


Linux wasn’t always this bloated. My first Linux system had 3 megs of RAM. This was back in 1993. A 386SX laptop with 1 meg on the motherboard plus a 2 meg expansion. I could barely run X.


I would argue that it isn't Linux that is bloated. Linux is the kernel. It is the desktop environments, and the things added to them to make each distro distinct, that are bloated (and fat).

There are significantly less bloated desktop environments, but people like their bling, and bling by default includes an accompanying increase in resource consumption.

Look at Raspbian. It looks pretty good, but uses a lot fewer resources than GNOME and KDE. You can add to it if you want it to be as bloated as the typical desktop environments. But it's a choice.

Personally I'm in love with tiling desktops, which use even fewer resources still. And they're blazingly fast.


FVWM is floating and "bling bling", and it's lighter than tiling WMs.


...I wonder how difficult it would be for whatever desktop environments GP was using in 1993 to be used today, on a super-resource-constrained system like an N64. The Linux adage is "don't break userspace", right?


You can already do that by choosing desktop environments that aren't trying to be all things for all people. More bling is literally more resources.

A lot of it is merely feature creep. Remember when OS X used to have animations (like applications minimizing like a genie getting sucked back into its lamp) as a feature you could turn on if you wanted it? Now it's pretty much the default, and most people don't know you can turn that stuff off. And that's how things get sluggish. Sexy new "advanced" features become default, and you always need faster computers and more memory to keep up.

If you insist on using GNOME or KDE, obviously bloated because everything has extra features enabled by default, then shut those resource hogs off. You'll start to get back to the fast desktop days again. Some people will miss the bling, but you can't have one without the other. You can't expect all that sweet, sweet bling without the resources being tied up to make it happen.


I got voted down by someone who obviously never compiled their own kernel (which used to be the norm). After lots of questions about how you want to configure it, you get your kernel... THAT is Linux -- the kernel.

The rest, the part you are complaining about bloating, is the GNU part. I'm not complaining about that GNU part, btw; I've lived in it since the early 90s. But the Linux kernel can be as slim and responsive as you want it to be if you are willing to compile it yourself. You can even compile it as an RTOS, and you will never convince me that would be too sluggish.

What I got voted down for was pointing out that Linux proper wasn't the slow part of that blend. It was the desktop. And that part is GNU, not Linux. Period.


It was very barebones. FVWM was the window manager I used, running on SLS Linux with a 0.99.xx kernel.


I used fvwm as my main environment back then. It was really good. You can still see its influence in Raspbian and other desktops like that.

I actually had a gig with SLS (Soft Landing Systems) back in the day. It wasn't unrelated to the stuff I was doing with UUCP at the time. That was important then, but totally not at all today, lol.

I am assuming you are making a bit of a jest, because things have moved along in the past many decades.


I used to do a ton of stuff with UUCP back in the day. Fun times. I had about 5 other systems calling into mine for UUCP mail.

You are right. Linux has evolved tremendously over the past ~30 years. Those early times were fun though...


Windows 3.x ran OK with 1MB of RAM if you only used a couple of applications at the same time. It wasn't the fastest thing and swapped a lot but it was usable for Word, Excel and the like, as long as you stuck to contemporary versions, as later ones tended to be more memory-hungry.


That's more than plenty.

(It won't be running Gtk4 or Qt6 of course.)


Check out this article on the project:

https://www.phoronix.com/scan.php?page=news_item&px=Nintendo...

>It's also noted that Linux on the Nintendo 64 is still a big buggy and "constantly flirting with [out of memory]."

If you're running out of RAM with just a shell, that definitely doesn't leave much left. And there's no storage device, so you can't swap.


The DSLinux project runs in 4MB with no MMU as well. They do recommend using the +4MB RAM expansion cards to use the graphical environment, but it's not a requirement.

IIRC, there used to be a GBALinux as well. That's what, 1/8th of the memory?

That this port is OOM-ing is just it being buggy.


> And there's no storage device, so you can't swap.

There's the cartridge. And yes, while technically that's supposed to be ROM, flashcarts like the EverDrive are able to get creative with that, and I can see that being a viable pathway to achieving something approximating swap.


Upgradeable to 8MB with the Expansion Pak, for what that is worth.


IIRC, X11 ran fine on my 486DX2-66 with 4MB RAM with Linux 1.2.13.


You could run X with 4MB, but it would crawl. With 8MB it was OK-ish with swap, and it was snappy with 16. No KDE, no GNOME, no XFCE, but FVWM flew.


8MB with the Expansion Pak I think.


This, plus the recently announced PlayStation 5 gamepad driver support for Linux. Dreaming is free.



If you're building hardware and have to ship it with an OS, Linux is the obvious choice for most... but the GPL licensing makes it very difficult for a lot of others.

There's a reason a lot of open source projects in recent years have avoided the GPL like the plague and opted for BSD, MIT or Apache licenses.


> Linux is the obvious choice for most... but the GPL licensing makes it very difficult for a lot others

Linux stayed on GPLv2 instead of upgrading to GPLv3 precisely to allow others to freely use Linux in their commercial devices. In other words: they made the intentional choice of making it legally easy to embed Linux in proprietary hardware products (aka Tivoization, which GPLv2 allows but GPLv3 forbids).


That's true.

GPLv2 is better for commercialization than GPLv3. However, BSD, MIT and Apache are much, much better than any GPL version, including the AGPL.


Yeah PS4 has done wonders to upstream FreeBSD.


I'm sure they've made some contributions, but I'm not sure if they've done "wonders". But that's the beauty of the BSD license: not only do you have the freedom to use the software, you also have the freedom not to give back your modifications if you choose not to.


Which is why FreeBSD has taken the world of UNIX by storm and is now the main UNIX clone in existence. /s


Let's see. Modified versions of BSD OSes run on iOS devices, MacBooks and PS4s.

Chromium uses a BSD license, and browsers based on Chromium, including Chrome, Microsoft Edge and Opera, run everywhere.

BSD is much more prevalent than GPL, and software written under BSD-style licenses will carry on into the future as even fewer people will be willing to touch anything GPL.


The author also wrote a book on Tiny Core Linux (http://tinycorelinux.net/book.html), which seems fitting.


I think something like this is so cool: the relative ease with which Linux can be ported to something, combined with a new application for a retro console. His post sums it up.

"But why... Most importantly, because I can."


Replying to my own post, but I want to give credit where it is due. I first saw this on Phoronix.

https://www.phoronix.com/scan.php?page=news_item&px=Nintendo...


+1 to software/hardware necromancy.

How does a Linux port make it easier to port emulators or console games?

Is this a step towards getting an N64 on a modern (?) stack like qemu+linux?


> How does a Linux port make it easier to port emulators or console games?

It helps emulator development by giving you something to work with inside of your emulator that actually has some tooling to let you look around and test out the system from the inside, unlike trying to get a game working. If Linux-N64 can boot, you can expect that a lot of your code is working properly.


Ah, that makes sense. I can see why that would be useful.


Linux has debuggers and such, and people (like me) are way more familiar with its stack than with whatever they had back in the day. Software development would be way easier with all of that support infrastructure.


Waiting for the Vulkan backend and the dolphin port.


Screw Doom 64. Time to port Chocolate Doom.


Or NetHack. I even played it on a PSP; playable with key bindings and a lot of patience.


Is it possible to use the texture memory/cache (including the Expansion Pak) as additional (albeit limited) system RAM? If so, is that the case here?


The Expansion Pak would double the effective memory from 4MB to 8MB. But there is no separate memory for textures to take advantage of, just a 4KB cache that textures were loaded into from main RAM one at a time while rendering.

There were some games that manually implemented their own virtual memory paging system to map the cartridge ROM to RAM address space.

But in general, a feature that distinguished the N64 from the PS1 was that it had a single, unified block of RAM for all of the hardware to share however you like.


In theory there is 1MB of hidden RAM, used exclusively by the GPU (the 9th bit). The wiki describes it:

> Differing memory countings are due to the 9th bit only being available to the RCP for tasks such as anti-aliasing or Z-buffering.

AFAIK there is no way of manually using this RAM; it's hardwired to the fixed-function pipeline stages of the RCP.


I think you can very slowly peek and poke those hidden bits through the RDRAM test registers. Half the MMIO seems to be intended for board test, with the CPU's A/D bus being twiddled via its boundary scan.


TMEM isn't a cache, unfortunately, and has to be manually loaded and unloaded.

But because of that you might be able to use it as extra storage, sort of like how the Factor 5 GameCube games would spill out to ARAM.

A lot of work for 4KB of RAM, though.


> This is a port of Linux to the N64. Only native drivers for now, that is, no Everdrive or 64drive specials. Expansion pak required. [0]

It seems that this actually requires using the Expansion Pak as system RAM to even run.

[0] https://github.com/clbr/n64bootloader/tree/master/n64linux


The Expansion Pak is regular RAM, both to this port and to normal games.

The texture memory would be difficult to use as RAM, though, since it's not directly addressable.


Texture memory would be difficult to use because the N64 doesn't have any; it's a unified memory architecture.


It has 4KB of TMEM on the GPU, and that is in fact the only place the GPU can read textures from.

That's one of the major reasons the N64 has a reputation for being "blurry". 4KB gives you a max of (with mipmaps) one 32x32 16-bit texture at a time.
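
Rough arithmetic (assuming 16-bit texels): 32 x 32 x 2 bytes = 2048 bytes for the base level, and the rest of the mip chain (16x16, 8x8, ...) adds roughly another 680 bytes, so a single mipmapped 32x32 texture is already around 2.7KB. The next size up, 64x32 at 16 bpp, is 4096 bytes before any mip levels, which is where the ceiling comes from.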


Not sure what you mean by "any"; it certainly has texture RAM, but far too little, which is why N64 textures are notoriously crummy.


The cartridge port on the N64 exposes a 32-bit address bus. Theoretically there's nothing stopping someone from cramming a gig of SRAM inside a cartridge to increase the memory beyond 8MB. It's directly addressable by the CPU, even if it's not on the RDRAM bus.

Everyone here saying 8MB is too small: well, just add more RAM.


How do you even interact with the N64? With the controller??


The ROM comes loaded with a terminal emulator that is operated via the N64 controller:

https://github.com/clbr/joyterm/blob/master/main.c


It did have a keyboard released in Japan

https://nintendo.fandom.com/wiki/Nintendo_64_Keyboard

No idea how realistic it'd be to get it working with Linux.


How do these sort of ports for niche systems get made?


Manuals for the system and its processor exist. The Linux kernel is pretty agnostic towards what it's running on, as long as the target supports C. The difficult part is filling in the system-specific blanks in the kernel's source code, which is a lot of work.

See here: https://www.linux-mips.org/wiki/Linux/MIPS_Porting_Guide


Simple ports to small systems are actually very easy. To run Linux all that you strictly need is a timer driver and an interrupt controller driver, assuming the CPU architecture is already supported and there is a compatible MMU. On small systems, those drivers can be very simple.

I was bored one evening and ported Linux to run on the Wii's IO/security processor (not the main CPU, that's Wii Linux). It only took a few hours to get the kernel booting to userspace.
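
For a sense of scale, the "timer driver" on a simple machine can be close to trivial. Here's a generic sketch of a memory-mapped free-running counter registered as a clocksource (not from the N64 patches; MIPS already has the CPU's CP0 count/compare timer supported in-tree, and the "example,toy-timer" device and 48 MHz rate below are made up):

    /* Hypothetical memory-mapped free-running counter exposed as a clocksource. */
    #include <linux/clocksource.h>
    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/of.h>
    #include <linux/of_address.h>

    static int __init toy_timer_init(struct device_node *np)
    {
        void __iomem *base = of_iomap(np, 0);   /* the counter register */

        if (!base)
            return -ENXIO;

        /* Register a 32-bit up-counter read straight from MMIO. */
        return clocksource_mmio_init(base, "toy-timer", 48000000, 300, 32,
                                     clocksource_mmio_readl_up);
    }
    TIMER_OF_DECLARE(toy_timer, "example,toy-timer", toy_timer_init);

That's only the timekeeping half; you also need a clockevent device for the scheduler tick and an irqchip driver, but on simple hardware those are not much bigger.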


Did the N64 have an MMU? I guess so, COP0. I wouldn't have guessed that.


Some games used it as well. GoldenEye, for instance, mainly runs out of a region that dynamically mirrors the cartridge into RAM, just like any other virtual memory scheme.
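
The rough shape of that trick, as a sketch (not GoldenEye's actual code; pi_dma_from_cart(), tlb_map_page()/tlb_unmap_page() and the addresses are hypothetical stand-ins for the PI DMA engine and CP0 TLB writes): on a TLB miss, copy the faulting page in from cartridge ROM and map it.

    #include <stdint.h>

    #define PAGE_SIZE          4096
    #define NUM_SLOTS          64           /* RAM pages reserved for paged-in code */
    #define CODE_SEGMENT_BASE  0x7f000000u  /* hypothetical virtual base of paged code */
    #define CODE_ROM_OFFSET    0x00100000u  /* hypothetical location of that code in ROM */

    /* Hypothetical stand-ins for the PI DMA engine and CP0 TLB writes. */
    extern void pi_dma_from_cart(uint32_t rom_offset, void *dst, uint32_t len);
    extern void tlb_map_page(uint32_t vaddr, void *page);
    extern void tlb_unmap_page(uint32_t vaddr);

    static uint8_t  slot_mem[NUM_SLOTS][PAGE_SIZE];
    static uint32_t slot_vpage[NUM_SLOTS];
    static unsigned next_victim;

    /* Called from the TLB-miss exception handler with the faulting address. */
    void page_in(uint32_t bad_vaddr)
    {
        uint32_t vpage = bad_vaddr & ~(uint32_t)(PAGE_SIZE - 1);
        unsigned slot  = next_victim++ % NUM_SLOTS;  /* dumb round-robin eviction */

        if (slot_vpage[slot])
            tlb_unmap_page(slot_vpage[slot]);        /* evict whatever was there */

        /* Pull the page over the PI bus from cartridge ROM into RAM... */
        pi_dma_from_cart(vpage - CODE_SEGMENT_BASE + CODE_ROM_OFFSET,
                         slot_mem[slot], PAGE_SIZE);

        /* ...then point the MMU at the fresh copy and retry the access. */
        tlb_map_page(vpage, slot_mem[slot]);
        slot_vpage[slot] = vpage;
    }

The real thing also has to invalidate the CPU caches for that page after the DMA and be careful about when it's safe to take the fault, but that's the basic idea.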


Here's the patch series presented more nicely. Lacks the intro email, though: https://patchwork.kernel.org/project/linux-mips/list/?series...


Nintendo has used microkernels for the Nintendo Switch; I think it would be interesting to see whether a microkernel would also fit the N64 or similarly constrained devices.


I would try this but my Everdrive 64 broke. Sad Christmas for me, I suppose.


So... has anyone tried running a Nintendo 64 emulator on it? :)


Tried with cen64, gets to "Booting kernel" then hangs. Maybe I'll get a flash cart and try it out.


The author has a fork of cen64. Would be interesting to see if that works.


Sadly, the device is not powerful enough for a Debian port.


God that's sweet. I hope it gets merged.


Finally! Been waiting for this for 24 years.


What does this mean exactly? Linux capable of running on N64? A Linux distro that can do what N64 does? A combination? Something else?


I'm confused. What is this and why is it significant? It's not an emulator, it's a port (what does that mean?) - but it doesn't necessarily run N64 games?


I never had a relationship with the N64. After the Super Nintendo I was deep into computers and programming. The place I used to go to rent video games also started to rent CD-ROMs. I went down there one time to rent a CD-ROM and saw the N64. Back then video game stores also doubled as arcades, where you could play a system for 30 minutes for a small fee. So I decided to try this new system, the N64. The game I chose was Mario 64, of course. I did not know where to go, how to play, or what my goals were... I remember feeling back then that I had outgrown video games. This was around the year 2000. It wasn't until 2009 that I decided to play video games again; I bought a used Xbox 360 from a friend. I had played Paper Mario 64, but on an emulator, and long after the game was released. Cool project though, thanks for sharing.


This response sounds like it was written by an AI.


Sorry. When reading the news, if I remember something funny related to the subject, I usually share it.


Sounds like what an AI would say.


Well, I'm not an AI. Sometimes I don't even feel I'm "I" at all


Had similar experiences with games and gaming.

Until now, however:

L.A. Noire

Outer Wilds

Command and Conquer Remastered

Broken Sword

And so on

What a time to be alive


Psst, if you like the investigative aspect of Outer Wilds, do check out Return of the Obra Dinn.

And Disco Elysium, though that is a bit more RPG-y (no combat, but it has stats and "character progression").



