
It boils down to abstractions papering over ABI details.

How do you write "put 0x12345678 to register 0x04000001" in assembler? mov eax, 0x04000001 / mov [eax], 0x12345678

How do you write it in C-as-portable-assembler? You write *(volatile u32 *)0x04000001 = 0x12345678; (spelled out as a compilable sketch below)

How do you write it in Java? You can't, the language has no such ability and if you try it's a syntax error. You have to call into a routine written in a lower-level language.

How do you write it in C-as-abstract-machine? You can't, the language has no such ability and if you try it's undefined behaviour. You have to call into a routine written in a lower-level language.
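
For concreteness, here is the C-as-portable-assembler version as a compilable sketch, assuming u32 is a typedef for uint32_t and keeping the register address from the example above:

    #include <stdint.h>

    typedef uint32_t u32;

    /* Memory-mapped I/O write: volatile tells the compiler the store
       has a side effect, so it cannot be optimized away or reordered
       against other volatile accesses. */
    void put_reg(void) {
        *(volatile u32 *)0x04000001 = 0x12345678;
    }

Even this relies on the implementation giving the integer-to-pointer cast a useful meaning; the C standard only makes it implementation-defined, which is exactly the portable-assembler vs. abstract-machine split.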

By the way, you can't write an operating system in C-as-portable-assembler either. No access to I/O port space, no way to define the headers for the bootloader, no way to execute instructions like LGDT and LIDT, no way to get the right function prologues and epilogues for interrupt handlers and system calls, no way to invoke system calls. All those things are usually written in assembler. Writing operating systems in C has always been a lie. Conversely, you can extend the compiler to add support for those things and then you can write an operating system in extended-C!
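
To illustrate that last point, a minimal sketch of such an extended-C, using GCC/Clang extensions (inline assembly plus the x86 interrupt attribute). The helper names here are purely illustrative:

    #include <stdint.h>

    /* Port I/O: wrap the x86 'out' instruction in inline assembly. */
    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* Load the IDT with 'lidt' (LGDT works the same way). */
    struct idt_descriptor {
        uint16_t limit;
        uint64_t base;
    } __attribute__((packed));

    static inline void lidt(const struct idt_descriptor *d) {
        __asm__ volatile ("lidt %0" : : "m"(*d));
    }

    /* Interrupt handler: the attribute makes the compiler emit the
       special prologue/epilogue and return with iret instead of ret
       (GCC 7+/Clang on x86; build with -mgeneral-regs-only). */
    struct interrupt_frame;
    __attribute__((interrupt))
    void timer_handler(struct interrupt_frame *frame) {
        (void)frame;
        outb(0x20, 0x20);  /* acknowledge: EOI to the primary PIC */
    }

None of this is ISO C; it is exactly the compiler-extension escape hatch described above.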




This addresses a part of my question which I didn't make clear, thanks! I mean that after all this assembler stuff, one could just use BASIC or similar. Yes, Java has no concept of PEEK/POKE or IN/OUT, but it simply wasn't designed for that. Meanwhile, 1980s small systems were all assembly plus BASIC/FORTRAN. Of course they had no kernel in the modern sense, but all the devices were there: a speaker (SOUND), a serial line to a streamer/recorder (BLOAD), a graphics controller. No DMA, admittedly, but a DMA engine is just another controller with the same kind of "ports", which can read/write memory and generate interrupts.

I don't get why we don't just skip C for something high-level, after wrapping all this pio/dma/irq/gdt/cr3 stuff into __cdecl/__stdcall form and then using the FFI of whatever language decides to support it.

I also don't understand the GC arguments down the thread, because GC-over-malloc seems to be an incidental detail. You can definitely implement a GC over a linear address space: just bump-allocate until you hit the limit (sketched below), or page-table it however you want for DMA. Malloc is not hardware; it isn't even a kernel thing. Apps run on mmap and brk, which are similar to what kernels have hardware-wise. Mmap is basically a thin layer over paging and/or DMA.

It was all so easy, and then it blasted into something unbelievably complex in just a few years. Maybe the 80386 wasn't a good place to run a typescript-over-asm kernel, but do we still have that limitation today?
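
A minimal sketch of the bump-allocation idea from the previous paragraph: a GC heap carved out of one linear region obtained via mmap. All names are illustrative:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    static uint8_t *heap_base, *heap_ptr, *heap_end;

    /* Reserve one linear region up front; a collector would later
       trace and compact inside it instead of calling malloc/free. */
    int heap_init(size_t size) {
        heap_base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap_base == MAP_FAILED)
            return -1;
        heap_ptr = heap_base;
        heap_end = heap_base + size;
        return 0;
    }

    /* Bump allocation: advance a pointer until the limit. A real
       runtime would trigger a collection instead of returning NULL. */
    void *gc_alloc(size_t n) {
        n = (n + 15) & ~(size_t)15;   /* keep 16-byte alignment */
        if (n > (size_t)(heap_end - heap_ptr))
            return NULL;              /* out of space: GC would run here */
        void *p = heap_ptr;
        heap_ptr += n;
        return p;
    }

Nothing here touches malloc; in a kernel the same scheme would sit directly on the page tables instead of mmap.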


We don't do that mostly because many communities cargo-cult C and C++; nothing changes unless you have companies like Apple, Microsoft and Google stepping in and asserting "this is how we do it now if you want to play on our platform".

Arguably, with their push for Swift and Java/Kotlin, Apple and Google are much further ahead than Microsoft on this matter, given that .NET tends to suffer from WinDev worshiping C++ and COM.

You can get that BASIC experience nowadays when playing with uLisp, MicroPython and similar environments for embedded platforms, most of them more powerful than 16-bit home computers.



