First new VAX in 30 years? (netbsd.org)
218 points by JoachimS on July 7, 2021 | 89 comments



Really cool, but just to nitpick a bit: it's not entirely correct in some ways.

Logical Systems still sells their NuVAX machines, see https://logical-co.com/product/nuvax-4400-system/

They also do PDP-11 systems and hardware.

I think they might be built using an emulator on PC hardware with some interfacing hardware, but still I would call it a "new" VAX for most purposes.


NuVAX is based around a software reimplementation of the VAX CPU with a special-purpose I/O system to reuse existing interfaces (important when you need to deal with custom interface cards, for example).

Funnily enough, it appears that real VAXen ended up being produced for a shorter time than the PDP-10, despite efforts going as far as encasing the last high-end PDP-10 prototype in concrete and dumping it in a mill pond (or so the story goes): known single-chip PDP-10s were produced at least as late as 2004, after the last VAX rolled off the production line.


"If you're not playing with 36 bits, you're not playing with a full DEC!" -DIGEX (Doug Humphrey)


I didn't realize anybody still remembered Digex.


DIGEX was Doug Humphrey's nickname long before he started an ISP using that name.

Doug had a KA10 in the living room of his apartment in a high rise tower in Maryland, near where I used to live.

He had to take it up the elevator in two pieces, and the elevator had a hard time aligning with the floor because of the weight, so he had to lift it up a foot to get it out the door.

The apartment didn't have the right two-phase 220-volt power outlet, but it was possible for two people to simultaneously plug it into two different 110-volt outlets at opposite ends of the apartment that were on different phases to get it to work.

It didn't have any memory except for the registers, but you could load a loop of code into the registers and run it really fast (since they were mapped to low memory), to keep the apartment nice and cozy warm.

Good thing electricity was included in the rent!


I barely remember them. I worked for an ISP that had a dual T1 to Digex. We also had a couple to UUNet and another to MCI. Digex was cheap compared to the other two.


"Radix-50 Rulz!!!"


So you are a DEC fan but don't remember Radix-50 encoding?


> as known single-chip PDP-10 were produced at least as late as 2004

Who was producing a PDP-10 in 2004? I'd love to know more about this.


XKL's TOAD-2 is/was based on a single chip PDP-10.

The hardware is used as a router, but supposedly if you ask it very nicely you can get it to run TOPS-20.



Living Computer Museum had two XKL TOAD-2s running TOPS-20.


Yup, the XKL TOAD ("Ten On A Desk") and TOAD-2.

https://en.wikipedia.org/wiki/XKL#TOAD-1


At the time of the TOAD-1, there were also still machines being built based on the SC design, for effectively a single customer: CompuServe.


Fascinating... in the phrase "replaces the VAX chassis, CPU, memory, KWV11-C clock and mass storage", the clock being mentioned separately intrigued me, so I googled it. Turns out the clock alone was a whole board full of good old TI 74x TTL chips - and a "used but working" board goes for $345 on eBay (https://www.ebay.de/itm/M4002-KWV11-C-MODULE-USED-AND-WORKIN...)...


The first-gen PDP-11's CPU was just a couple of boards full of good old TTL chips.


Yes, the pdp-11/20, released in 1969. Core memory only. In my opinion, it took the pdp-11/45 to firmly establish an impressive new line of machines, separate and more powerful than the pdp-8 line. But, in DEC's way of thinking, it's instructive to note that a pdp-11/40 was used as the console of the DECsystem-20. Separately, it's a shame we no longer have 36-bit machines; 36-bit ints would hold time_t just fine, and 72-bit doubles will work great for science. Oh well. What we're stuck with now bytes.


I remember reading an old paper on some funky experimental user interface that used a pair of PDP-11s (possibly; it might have been even older), essentially wired together at the busses, with extra instructions added via new TTL logic - I think it was for doing the maths for the vector graphics. It was amazing how much hardware hackery was necessary to build these experimental systems.



Compared to x nm VLSI, TTL and ECL are ridiculously low density. And soooo sloooow.

DEC were very pleased with themselves when they got to ~40VUPs in the later ECL models, but a full modern VLSI - not FPGA - implementation wouldn't break a sweat at 1000VUPs.


What seems particularly interesting is that the board really seems to be just an RTC, yet it has no obvious place for a backup battery and has a 10 MHz oscillator on it.


GE Canada has VAXen running their atomic plants and they are under contract to keep 'em running for a long time, 2035 or 2045. (Might be called BWXT today.)


Melbourne (Australia)'s train signals used to be controlled by PDP-11s running Ericsson JZA715 train control software. They were replaced by Ospreys. An Osprey is actually a hardware PDP-11 CPU on an expansion card which plugs into an x86 PC bus. It uses an actual CPU, not emulation, because realtime applications like train control need to be 100% cycle-accurate. (Originally they used actual PDP-11 CPU chips manufactured by DEC; later they switched to FPGAs.) It also has Unibus cards to do Unibus-to-ISA/EISA/PCI translation so it can integrate with the original peripherals.

http://web.archive.org/web/20210126085900/https://www.equico...

http://www.strobedata.com/home/ospreyguide.html


Why does train control need to be 100% cycle accurate?

PDP-11s ran at what, 1.25 MHz? I'd think that a modern CPU software-emulating a PDP-11 could get below those cycle times.
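
To put rough numbers on that budget (the ~3 GHz host clock is an illustrative assumption, not a measured figure):

```python
# Rough per-cycle time budget for emulating a ~1.25 MHz PDP-11
# on a modern ~3 GHz host CPU. Both clock figures are illustrative.
pdp11_hz = 1.25e6
host_hz = 3.0e9

ns_per_pdp11_cycle = 1e9 / pdp11_hz         # wall-clock time of one emulated cycle
host_cycles_available = host_hz / pdp11_hz  # host cycles to spend per emulated cycle

print(f"{ns_per_pdp11_cycle:.0f} ns per PDP-11 cycle")      # 800 ns
print(f"{host_cycles_available:.0f} host cycles available") # 2400
```

So the emulator has on the order of thousands of host cycles per emulated cycle - plenty in the common case, which is why the cache-miss/mispredict worst case discussed below is the interesting part.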


> Why does train control need to be 100% cycle accurate?

It's not uncommon for I/O interfaces to be cycle-sensitive. I can only speculate, but perhaps they do not use a timer and instead rely on cycle-exact code for certain timing operations. They could be bit-banging a serial protocol, as was done then as now. Or they could be controlling very sensitive things where even a few microseconds of jitter is unacceptable. Tying it so closely to the CPU like that was a dirty but common practice in the '70s and '80s. And sometimes simply obligatory: there is no high-precision timer on the low-end PDP-11s by default, I believe.


A few cache misses in a row and some mispredicted branches, and your modern CPU is slower than 1.25 MHz. This almost never happens (CPUs are very good at this), but when it does, things can get bad.


You could keep all of the PDP-11's RAM and probably most of its "external storage" in the L2 cache of most modern CPUs.


https://en.wikichip.org/wiki/amd/microarchitectures/zen_3

> 512 KiB per core, 8-way set associative

So on the 64-core 7763 / 7713 you are looking at 32 MiB of L2.

The 11/70 in 1975 already had a 22-bit address space and something like 10 MB disks. So - yes, it's not unlikely that with a bit of hand-crafting you could keep the whole shebang in L2, but you'd need a top-end CPU.
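
The arithmetic behind those figures, as a quick sketch (512 KiB/core and the 22-bit address space are the numbers quoted above):

```python
# Total L2 on a 64-core part with 512 KiB of L2 per core (Zen 3 figures
# from the linked wikichip page), vs. a PDP-11/70's full physical
# address space (22 bits).
cores = 64
l2_per_core_kib = 512

total_l2_mib = cores * l2_per_core_kib / 1024
pdp11_70_mem_mib = 2**22 / 2**20  # 22-bit physical addressing = 4 MiB

print(total_l2_mib)      # 32.0 MiB of L2 in total
print(pdp11_70_mem_mib)  # 4.0 MiB -- the RAM fits many times over
```

The ~10 MB disk is the part that needs the "bit of hand crafting": it fits in 32 MiB of aggregate L2, but not in any single core's 512 KiB slice.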


Let's say the modern CPU gets itself really tied up in knots and is out of action for a staggering 10 ms. During that time a speeding train doing 350 km/h travels not quite a meter. Do trains run on such tight scheduling that a delay like that in actuating a switching element could cause an accident?
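
A quick check of that back-of-the-envelope figure:

```python
# Distance a 350 km/h train covers while a hypothetical control CPU
# stalls for 10 ms (the figures used in the comment above).
speed_kph = 350
stall_s = 0.010

speed_ms = speed_kph * 1000 / 3600  # ~97.2 m/s
distance_m = speed_ms * stall_s

print(f"{distance_m:.3f} m")  # 0.972 m -- "not quite a meter"
```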


You have a legacy safety-critical system, which incorporates legacy hardware peripherals. How sensitive is it to changes in timing? You may not actually know. Do you want to do the engineering analysis necessary to prove that replacing one part of that system with potentially different timing is not going to cause problems? Or do you just seek out a replacement whose timing is as close as possible to the original?

The big issue may not be with the trains themselves but the communications protocols used to talk to signalling equipment and other peripherals. Changing the timing in the communication with them may lead to problems.

And what if the original software has race condition bugs which have never been surfaced, and the occasional inaccuracy in timing starts to surface them? Good luck fixing bugs in some obscure piece of PDP-11 software that was written in the 1970s.


You could always set up a train-in-a-box system and iterate through all the control logic sequences to verify they're within the margin of error. Once you know that, equipment substitution is straightforward.


I have no idea. If this is real time control that could mean you keep running the motors in the switch long enough to damage something. Or maybe you go past the end of travel switch signal without reading it, the switch turns off and you never stop... There are a lot of ways real time systems can fail.

You are correct that 10ms is well within the margin of error for safety stopping a train, but it may be out of the margin for some subsystem in the control.


With bipolar memory option, the pdp-11/45 had a 300 ns memory cycle time.


I would love to have someone write an article about how they even managed to do that.

Are they real VAX servers, or have they been virtualized/emulated somewhere down the line?


More than you would imagine.



Definitely at least some software emulation on x86. The manual for that system shows a board named "PEAK876VL2" in a picture - that's an x86 board - but LGA1156, so old!


I suppose that means the author of the blog post potentially has a market for his FPGA implementation.


I think getting VAX software to run on an FPGA board is the lesser problem - getting it to be 100% compatible so it can serve as a "slot-in" replacement for an existing VAX is probably a lot more difficult.


Thanks, didn't know about NuVAX. How old is that product? It seems like you could do much better than 70 VUPs today - or 5 years ago, for that matter. I guess it's because everything is done in software emulation?


Are there really PDP-11s out there still doing stuff? That is wild


There were apparently 2 prior working VAX FPGA implementations, but both were lost to time[1]. I tried the Wayback machine for both, but it doesn't seem to have the right time period cached.

[1] http://www.avanthar.com/healyzh/decemulation/pdp_fpga.html (scroll to bottom)


RT Logic: https://web.archive.org/web/20061019161717/http://www.rtlogi...

Not much info -- no more than is here: https://comp.os.vms.narkive.com/UKFNZj9v/fpga-vax

Noboyuki Kondoh's PhD thesis on the university effort is here, but only the abstract is in English: http://labo.nshimizu.com/thesis/b2004/1adt2415.pdf


Oh that's sad. I remember when one of those was announced, I was going to grab the code, but now it's gone? Is internet decay accelerating?


Wild. Apparently there are many different efforts to use FPGAs to simulate old hardware:

"Using FPGAs to Simulate old Game Consoles": https://jakob.engbloms.se/archives/3026

"AMSTRAD ON AN FPGA": https://hackaday.com/2017/01/06/amstrad-on-an-fpga/

"MISTER FPGA: THE FUTURE OF RETRO GAME EMULATION AND PRESERVATION?": https://www.racketboy.com/retro/mister-fpga-the-future-of-re...


I find it interesting when someone makes foundational improvements when using an FPGA to mimic an old CPU.

The NextZ80 is a good example. It's designed to execute 4x as many instructions at the same clock rate as a real Z80. And you can clock it at 40 MHz, so it's effectively a 160 MHz Z80... compare that to a typical 4/8 MHz real Z80.

https://opencores.org/projects/nextz80
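
The arithmetic behind that claim, as a quick sketch (the 4 MHz baseline for a stock Z80 is an assumption from the typical figures quoted above):

```python
# Effective speed of the NextZ80 relative to a stock Z80, using the
# figures quoted above: 4x the instructions per clock, 40 MHz max clock.
ipc_ratio = 4
nextz80_clock_mhz = 40
stock_z80_mhz = 4  # a typical original Z80 clock

effective_mhz = ipc_ratio * nextz80_clock_mhz
speedup = effective_mhz / stock_z80_mhz

print(effective_mhz)  # 160 -- "effectively a 160 MHz Z80"
print(speedup)        # 40.0x a typical 4 MHz part
```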


Yes, but how many programs written for the real thing have hard coded values dependent on cycle times?


Perhaps for games on a platform like the ZX Spectrum and MSX, but I suspect most of the CP/M targeted apps wouldn't care.


But how many of those systems have BIOS code that uses hard-coded timing loops to talk to devices?


related / other thread from this netbsd list a few days ago

New Vax Implementation https://news.ycombinator.com/item?id=27742540


I took my original programming classes on the VAX. I'm somewhat nostalgic for it. In many ways it was ahead of its time


The clustering technology was way ahead of its time. Stuff that everyone gushes over in Kubernetes was in VMS clusters back in the 1980s.


Absolutely, and it did it better than k8s.

You could cluster different versions of VMS, or cluster Alpha VMS with Itanium VMS, or all 3 at once. Alphas and Itaniums can run VAX emulators for legacy compatibility, and the copy of VMS in the emulator can be clustered with the OS on the host machine it's running on.

VMS clusters had uptimes in decades.

Compaq bought DEC. HP bought Compaq. So HP inherited VMS.

It's now been spun off as a new company: https://vmssoftware.com/

When VMS later gained a POSIX mode and TCP/IP it was renamed OpenVMS. VMS Software is now porting VMS to x86-64.

You can port *nix apps to VMS and run them natively. The clustering is still there and still works.

There's a small chance OpenVMS might enjoy a small renaissance, bringing mainframe-like uptime and resilience to commodity x86 hardware, without all that pointless faffing around with VMs and one OS running another different OS inside a VM, with all the duplication and waste that this entails.


Yep, me too.

Like a lot of people, I really miss filenames with built-in version numbers.

[1] Create a file called `WUMPUS.FOR`

[2] Every time you saved it, the OS automatically incremented the version number: `WUMPUS.FOR;2` and `WUMPUS.FOR;3` and `WUMPUS.FOR;42`.

[3] You could optionally `PURGE WUMPUS.FOR` and all the old versions would be removed, and then the OS would continue making new versions.

With this, who needs a revision control system? For a single user, this is all you need -- you can go back to old versions, copy or rename them to branch, etc.

Unix came along at the same time as this was happening and it never incorporated this functionality, which is why we need clumsy functionality bolted on top, such as Git. The actual versioning info isn't in the filesystem -- it's hidden inside concealed data files.
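
The versioning behaviour described above can be sketched as a toy model (purely illustrative Python; the class and method names are hypothetical, and this is nothing like the real Files-11 on-disk format):

```python
# Toy model of VMS-style file versioning: every save creates NAME;N+1,
# and PURGE deletes everything but the newest version.

class VersionedFS:
    def __init__(self):
        self.files = {}  # name -> list of (version, contents), oldest first

    def save(self, name, contents):
        """Save a file; the OS-assigned version number is appended as ;N."""
        versions = self.files.setdefault(name, [])
        next_ver = versions[-1][0] + 1 if versions else 1
        versions.append((next_ver, contents))
        return f"{name};{next_ver}"

    def purge(self, name):
        """Like DCL PURGE: keep only the highest-numbered version."""
        self.files[name] = self.files[name][-1:]

    def versions(self, name):
        return [f"{name};{v}" for v, _ in self.files[name]]

fs = VersionedFS()
fs.save("WUMPUS.FOR", "v1")
fs.save("WUMPUS.FOR", "v2")
print(fs.save("WUMPUS.FOR", "v3"))  # WUMPUS.FOR;3
fs.purge("WUMPUS.FOR")
print(fs.versions("WUMPUS.FOR"))    # ['WUMPUS.FOR;3']
```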


This seems interesting, but I lack context. Can someone ELI5? Or maybe more like 'Explain Like I'm a CS Undergrad' (ELICSUG?).


In 1981 the VAX-11/780 was an object of lust for engineering undergrads, who'd be lured by faculty with promises of access to the "1 MIPS, 1 MB RAM beast" ...

The author says he re-implemented the CPU (using Verilog) on an FPGA running at 50 MHz, well enough to run a test binary successfully.


Thanks!


Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... from the same company.

Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one was invented in the early 1970s, processors were made from discrete logic: lots of little silicon chips.

The main distinguishing feature of minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 18-bit or 36-bit logic.

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone, but it was the origin of a command-line interface with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) and a semi-standardised 3-letter extension, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M OS, the first industry-standard OS for 8-bit micros. CP/M was going to be the OS for the IBM PC but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, called MS-DOS.

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine that UNIX's creators first ported it to, and in the process, they rewrote it in a new language called C.

The PDP-11 was a huge success so DEC was under commercial pressure to make an improved successor model. It did this by extending the 16-bit PDP-11 instruction set to 32 bits. For this machine, the engineer behind the most successful PDP-11 OS, called RSX-11, led a small team that developed a new, pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

VMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the x86-32 version of OS/2 for the 386, which it completed and sold as OS/2 2.0 (and later 2.1, 3, 4 and 4.5; it is still on sale today under the name Blue Lion from Arca Noae).

At Microsoft, Cutler and his team got given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler _et al_ finished this, porting it to the new Intel RISC chip, the i860. This was codenamed the "N-Ten". The resultant OS was initially called OS/2 NT, later renamed – due to the success of Windows 3 – as Windows NT. Its design owes as much to DEC VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-7, PDP-8 and PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, & Windows 1 through to Windows ME.

A different line of PDPs directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.


great comment - thanks :)

and let's not forget: * VMS/VAX had virtualization & clustering in the '80s

* DECNET

* systems which ranged from low-cost CMOS - for example the MicroVAXes, or the older bit-sliced 11/730 - to high-performance ECL - for example the 9000

* VMS uses ASTs - asynchronous system traps - for system calls ... IMHO this approach is far superior to any "software interrupt" concept from UNIX etc., and pays off especially under high loads (read: database server)

* ...

cheers

ps.: VMS and Windows NT were so similar in their (basic) concepts that the joke VMS++ = WNT still holds ;)

pps.: when it came out in 1992 and during the following years, Digital's Alpha processor was far superior to nearly anything on the market back then ... ok, maybe 2nd to MIPS ;) * http://alasir.com/articles/alpha_history/


Virtualisation on VMS? I don't recall that... tell me more?

There was a ton more influence too. I was trying to highlight perhaps the most significant bits people who weren't yet born in the 1970s or 1980s might have seen, used, and know.

As pointed out over on lobste.rs by David Chisnall: (https://lobste.rs/s/kmpnqd/historical_significance_dec_pdp_7...)

• The VAX memory layout is why modern OSes' memory layouts are the way they are.

- The VAX block size of 512 bytes is why hard disks had 1/2 kB blocks until recently.

- The VMS VM file was called PAGEFILE.SYS, which is why NT's has that name.

- Unix first got virtual memory in the VAX version. The ordinary kernel was called `unix`, and to distinguish it, the VM version was called `vmunix`. That's why the Linux kernel was called `vmlinux` and later the compressed version `vmlinuz`.

• The PDP-11 instruction set influenced the CPU designs of the General Instrument CP1600 (and the Intellivision's CP1610), the Hitachi SuperH and the Motorola 68000.

• The VAX's protection rings influenced the protection rings of the Intel 80386, which is why x86-32 has 4 rings but almost all OSes use only 2 of them.

There is a huge amount of low-level stuff in the designs of the Mac, Amiga, ST, the PC and x86 in general, in games consoles, even in late-1970s and early-1980s 8-bit home computers, that is that way because DEC was so totally dominant... but a lot of this stuff is only visible if you program in machine code, know the memory layouts and so on.

So I tried to focus on the stuff visible to users, sysadmins, maybe techie-minded gamers. Filenames, commands, and so on.

The 1970s computer industry was facetiously described as "IBM and the Seven Dwarves". The mainframe makers who remained were later called "the BUNCH": https://en.wikipedia.org/wiki/BUNCH

Among the minicomputer makers, DEC was by far the biggest and most influential.

DEC's minicomputers ranged from de facto microcomputers -- single-user desktop minis built into terminals -- to massive multi-cabinet machines that competed directly with mainframes. Really, the lines blurred together.

But DEC's fatal flaw was that its senior management thought cheap single-user micros, and UNIX, were passing trends of no real importance. It feared micros cutting into its lucrative minicomputer profit margins, and thought its proprietary OSes were better than UNIX.

Both of these statements were arguably true but even an industry giant can't change the course of a whole industry.

So, ironically, in the end, IBM designed the IBM PC using ideas and concepts from DEC (and Apple), not from IBM's own big hardware. The PC was a huge success and changed the industry... but IBM itself lost control of the PC industry and ended up quitting it.

DEC refused to embrace micros and UNIX until it was too late, so never gained decisive control of those markets... which doomed it.

https://digital.com/digital-equipment-corporation/


I have a podcast talking about these machines and a recreation of them here: https://www.livinginthefuture.rocks/e/episode-4-historical-c...


...slow clap...

thank you for this.


I was not aware that NT shared OS/2 lineage.


Oh yes. It was widely-known and acknowledged at the time but it's forgotten now.

• OS/2 version 1.x: 80286, 16-bit, only a single DOS session. Cannot run Windows apps, but can run "family apps" -- binaries compiled to execute natively as DOS apps on DOS and OS/2 apps on OS/2.

• OS/2 version 2.x: 80386, 32-bit, true DOS multitasking alongside OS/2 text-mode and Presentation Manager GUI multitasking. Can boot DOS in a VM, and also run a modified copy of Windows 3 inside a VM allowing 16-bit Windows apps to run on the OS/2 desktop alongside native OS/2 apps.

Marketed as OS/2 v2.0, 2.1, and later as "Warp 3", "Warp 4" and "Warp Server". Licensed to 3rd parties as eComStation (from Mensys, a PDP-11 tools vendor) and later as Blue Lion.

• OS/2 version 3: unfinished; planned to be CPU-independent, able to run on various RISC chips (e.g. IBM POWER, Sun SPARC, MIPS etc.) as well as x86.

OS/2 3 was substantially rewritten and completed by Cutler and team using a lot of VMS ideas, technologies and terminology as OS/2 NT, later Windows NT. NT for "New Technology" was a backronym.

NT 3.1, 3.5 and 3.51 contained an "OS/2 subsystem" and were able to execute text-mode 16-bit OS/2 binaries natively. They could also format and read/write hard disk partitions in OS/2 HPFS format.

NT 4 removed the ability to handle HPFS but the 3.51 SP5 HPFS driver could be installed and used to access HPFS partitions, but not create them.

Windows XP removed the OS/2 subsystem completely.

• IBM Workplace OS/2 for POWER: a largely unrelated OS, running on top of the same Mach microkernel used in Mac OS X, with an OS/2-compatible userland on top. Able to run 16-bit and 32-bit OS/2 x86 binaries under emulation as well as native POWER binaries.


There was even an OS/2 subsystem allowing you to run 16-bit OS/2 programs which wasn’t phased out until the Windows XP era. Almost nobody used it but there must have been some large customers insisting on it.

https://en.wikipedia.org/wiki/Architecture_of_Windows_NT


Well, they do not really share that much, given that none of the OS/2 versions that shipped were based on NT, although that was the plan at the beginning of NT's development, and the first versions of NT had a (quite limited) OS/2 subsystem. So there may have been some influences, but the target switched "quickly" to Win32 anyway.



Interesting. I had an assembly course taught on a VAX system in 2001. At the time, I had the impression it was because the instructor had access to it and felt it was of historical significance. I wish I could remember more about the experience.


I’d say it was because the VAX (and the PDP-11 family that preceded it) has one of the cleanest instruction sets of any processor family that was ever extensively used. It was almost as easy as C to program in its assembly language.


Yes, and... each of the earlier processors had certain 'corner cases' in the way they handled setting (or not setting) flag bits; DEC software had to check for these corner cases and execute the proper instruction sequence for that specific machine. But yes, conceptually, the orthogonal addressing modes and 'obvious' instruction names influenced probably two generations of students (for the better). The major lesson learned, though, I think, was: never be stingy with address bits! (16 is way too few; and even the later 22-bit addresses weren't enough.)



well does it boot vms or not?


That would be awesome if that becomes possible some day. I still run OpenVMS in simh. And the new owners of OpenVMS are busy beavering away on an OpenVMS port to the amd64 platform, and most software should compile cleanly, so I've read. Can't wait!!!


VMS Software has released the x86_64 port of OpenVMS recently. However it is not currently available under the hobbyist program. They say that is coming soon.


Someone should tell them their website still says "VMS Software, Inc. is porting OpenVMS to x86", present continuous tense.


Needs to add MMU.


Oh, wow, it's indeed nice (even though it's more like TransMeta's x86 implementation than a silicon-level design!) Reminds me of the efforts to implement the Japanese SuperH(2) RISC instruction set.

(To you: you might want to clarify in the title that this is the VAX architecture they're talking about (since "vax" unfortunately might be confused here with the now-used shorthand for vaccine).)


VAX was a heavily microcoded design, so there wasn't really a "silicon-level" implementation of the VAX instructions themselves - the microcode decoded the instructions and set the inputs to the execution units, and unlike some modern microcoded designs, there was much less hardwired implementation, AFAIK.


At least some VAX models supported user-supplied microcode and thus a customizable ISA. This could be used, for example, to implement a Prolog abstract machine directly on the CPU[1].

[1] http://hps.ece.utexas.edu/pub/gee_micro19.pdf


I've been meaning to check out J2 (the FPGA SH-2 core) for a while now. Mostly haven't because there doesn't seem to be as much support/howtos for the chip as there is for say arm/mips/m68k/whathaveyou. Most Super-H knowledge I can find online centers around programming the Dreamcast's SH-4 with GCC.


or more likely the brand of vacuum cleaners :)


I felt quite old recently when someone at work recommended a Vax carpet cleaner and I mentioned that my first email account was on a Vax... and they had no idea what I was talking about. Had never heard of the platform, or of DEC.


Fun fact: the vacuum cleaner company (https://www.vax.co.uk) had "Nothing sucks like a VAX!" as its advertising slogan: http://foldoc.org/vax


I have a vague recollection that "nothing sucks like a Vax" was a parody and never the vacuum cleaner's slogan. A cursory Google search doesn't pull up anything authoritative, and it seems like most of the claims that the vacuum cleaner people used it are worded nearly identically. The photocopied ad I saw back in the 80s was too on-the-nose to be really believable, feeling an awful lot like the mouse balls memo.


In that case it may be an urban myth, with its origin in the (apparently real) Electrolux advert shown here: https://web.archive.org/web/20180707150205/http://adland.tv/...


The ad I remember showed an actual vacuum cleaner. I'd seen the one you linked to in my cursory googling. I'd hoped it would have the same vacuum cleaner picture to make the identification that much easier.


Did anyone else misread the title as "First new wax in 30 years" and imagine a completely different article? xD


Only the young


Thought this might be about getting "VAX-inated" with the vaccine..... ;)



