Hacker News

Supercomputers need compilers, et al. And now I need to go back and revisit the Pascal compiler for those machines...



The toolchain does not have to run on the supercomputer itself. Most supercomputer architectures have self-hosting toolchains, but there are also supercomputers that do not. Compiling or even debugging programs directly on the machine is in most cases a plain waste of (expensive) computing resources, and it is not as if one would ever have only the supercomputer and no other computers. In fact, many traditional supercomputers cannot boot on their own and have to be booted by some kind of frontend computer.


> many traditional supercomputers cannot boot on their own and have to be booted by some kind of frontend computer

CDC went all in on this. Their large computers had 'peripheral processors' (for the CDC 6600, based on the CDC 160) that essentially ran the OS, leaving the main processor free for what it was good at.


You'd be surprised how true that is today too.

The Wii and Wii U run most of the "OS" on an external ARM core ("Starlet"/"Starbuck"). All I/O, network access, SSL encryption, booting the main cores, the USB stack, etc. run on that ARM core rather than on the main PowerPC cores, so those can be dedicated to running "game code".

The Cell in the PS3 is an SPI slave that gets booted by an external processor.

The PS4 is the same way, and that external core holds most of the filesystem (which is how game updates happen with the console "off").

And then most SoCs (including most AMD and Intel chips) boot system management cores (ME/PSP/etc.) that are then responsible for initializing the rest of the core complexes on the chip. Pretty much every ARM SoC sold these days will advertise a Cortex-M3 in addition to its Cortex-A cores; that's what it's for. On the RISC-V side of things, SiFive's Linux-capable chip has one of their E-series cores in addition to their U-series cores for the same purpose.


> Pretty much every ARM SoC sold these days will advertise a Cortex-M3 in addition to its Cortex-A cores; that's what it's for.

Usually the advertised-on-the-datasheet M cores are available for user code, and you'll get a virtual serial port or some shared memory to communicate between them and the big core. I don't doubt that there are additional hidden cores taking care of internal power management, early boot, etc.

At least, this is how it is on the STM32MP1 and the TI Sitara AM5 SoCs.


That overstates the case a little. The peripheral processors didn't run any user code, in particular the compilers still used the main processor.


You are confusing theory with practice. Back then, computers were expensive and rare. The general student population at my university had two choices: the CDC 6400, or an HP time-sharing system that ran BASIC. A friend and I actually wrote a complete toolset in BASIC that allowed students to learn HP 2100 assembly language (I did the editor and assembler; he did the emulator and debugger). But writing a Pascal cross-compiler in BASIC that output a paper tape of COMPASS, or binary? No way. Or FORTRAN, SNOBOL, Algol, ...


I learned FORTRAN on an HP 2000C timesharing system, using a FORTRAN emulator written in BASIC. It was dog slow, but it worked. I have no idea where the emulator came from.


Did they also have that at launch time(s), back in the 60s/70s?


I believe so; the comp. arch. textbooks were pretty emphatic in describing the CDC 6600 as "full of peripheral processors", e.g. for I/O and printing. Deliberately so, not something tacked on later as an afterthought.


I cannot find any information about whether one of the peripheral processors in the CDC 6600 (which were full-blown CPUs, not glorified DMA engines as in the Cray-1 or System/360) had some kind of system management role. On the other hand, the Cray-1 needed not one but two frontend computers to work: one, a DG Nova/Eclipse supplied by Cray, actually booted the system, and the second had to be provided by the customer and was essentially a user interface.


The peripheral processors were integral to the CDC 6600 and its successors (6400, 6200, 6700, 7600, and the Cyber 70 series), built inside the same mainframe cabinet. In the 6000 and Cyber 70 series there were ten of them that shared the same ALU with a barrel shifter that would shift 12 bits after each instruction; that shift would load the registers for the next PP in round-robin fashion. They were pretty primitive: there were no index registers, so self-modifying code was a regular thing, and polling was the only method of I/O supported, at least at first. I think the later models did support some sort of DMA. The PPs did have access to the 60-bit main memory, and there was an instruction, exchange jump or XJ, which would load the register block and switch between user and supervisor modes.
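The barrel scheme described above can be sketched as a toy model. This is a hypothetical Python illustration of round-robin instruction interleaving, not the actual CDC microarchitecture; the `PP` class, its single-accumulator "instruction set", and the step counts are all simplifying assumptions.

```python
# Toy model of a barrel processor: ten PPs share one execution unit,
# which advances to the next PP's register set after every instruction.

class PP:
    """Hypothetical peripheral processor with a single accumulator."""
    def __init__(self, pid):
        self.pid = pid
        self.acc = 0        # stand-in for the PP's register state
        self.program = []   # list of ("add", value) instructions
        self.pc = 0

def run_barrel(pps, slots):
    """Execute `slots` single-instruction time slots, rotating the barrel
    one PP per slot in strict round-robin order. Returns the slot order."""
    order = []
    for slot in range(slots):
        pp = pps[slot % len(pps)]   # barrel rotation selects the next PP
        order.append(pp.pid)
        if pp.pc < len(pp.program):
            op, val = pp.program[pp.pc]
            if op == "add":
                pp.acc += val
            pp.pc += 1
    return order

pps = [PP(i) for i in range(10)]
for pp in pps:
    pp.program = [("add", 1), ("add", 2)]
order = run_barrel(pps, 20)
# After 20 slots, each PP has run exactly twice, in strict rotation.
```

The point of the design is that one expensive execution unit stays busy while each PP individually waits out slow memory or I/O, which is why such simple cores were adequate for running the OS.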


What do you mean? The CDC OSes actually ran on the PPs and for all intents and purposes managed the system. The two-headed video console was hardwired to a PP as well, and was used to manage the system.


That makes a lot of sense. I bet a lot of these "scientific" machines ended up primarily being used for software development...

http://www.lysator.liu.se/hackdict/split2/pascal.html

> Pascal n. An Algol-descended language designed by Niklaus Wirth on the CDC 6600



