RISC-V Offers Simple, Modular ISA (riscv.org)
211 points by luu on April 6, 2016 | 157 comments



    Simple Memory Model
    The RISC-V address space is byte addressed and little-endian. Even though most other ISAs such as x86 and ARM have several potentially complex addressing modes, RISC-V only uses base+offset addressing with a 12-bit immediate to simplify load-store units.
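
A hedged illustration of what that means for compiled code (the struct and the exact instruction below are my own example, not from the article):

    /* Illustrative sketch: a field access like this typically becomes a single
       base+offset load on RISC-V, e.g. something like "lw a0, 8(a0)", because
       the byte offset fits in the 12-bit signed immediate (-2048..+2047).
       Offsets outside that range need an extra address computation. */
    struct point { int x, y, z; };

    int get_z(const struct point *p)
    {
        return p->z;   /* load from p + 8 with a typical 4-byte int layout */
    }
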
I remember reading [1] that one reason C became so popular was that it was easy to write a compiler for, not that it was easy to write programs in. Thus, compilers emerged for various platforms, making it accessible. By contrast, languages like Smalltalk (I may be misremembering the language) were complicated to implement and the compilers/environments were expensive, limiting their reach.

Looks like RISC-V is taking a page from that strategy book.

[1] was it Gabriel's Worse Is Better essay? https://www.dreamsongs.com/RiseOfWorseIsBetter.html


The other commenter's claim is inaccurate. I have a detailed history, with references, of how C became what it was:

http://pastebin.com/UAQaWuWG

That its predecessor compiled on an EDSAC, that it compiled on a PDP-11, and that it was the language of UNIX (its killer app) was why it spread everywhere. Network/legacy effects took over from there. There were languages like Modula-2 that were safer, efficient, easy to implement, and still close to the metal. Such languages kept being ignored and mostly are to this day for system programming. People even ignore the ones that are basically C with enhanced safety (e.g. Cyclone, Popcorn).

It's all social and economic reasons justifying C at its creation, during its spread, and for its continuing use. The technical case against such a language has always been there and has been field-proven many times.


C was quite a few years before Modula-2. C was developed in 1972 and was heavily used in UNIX by 1973. Development of Modula-2 wasn't started until 1977 and it wasn't widely available until the 1980s.


Point was we see two traditions developing:

1. Languages designed to be high-level, readable, optionally easy to compile, work with large programs, have safety, and compile to efficient code.

2. Two languages, BCPL and C, that strip as much of that as possible to compile on an EDSAC and PDP-11 respectively.

Thompson and Ritchie, along with the masses, went with No 2, while a number of groups with minimal resources went with No 1 and got better outcomes. At each step of the way, groups in No 1 were producing better languages with more robust apps. Yet Thompson et al continued to invest in improving No 2. By the present day, most fans of No 2 forget No 1 existed, push No 2 as if it were designed like No 1, and praise what No 2 brought us, whereas it actually hampered work that No 1 enabled easily.

Compiler, HW, and OS people should've invested in a small language from the No 1 category, as Wirth's people did. That Pascal-P got ported to 70 architectures, some 8-bit, tells me the No 1 category wasn't too inefficient or difficult either. You just had to avoid heavyweights like PL/I.


But C on microcomputers did not come any earlier than Pascal or Modula-2 compilers.

My first compiler was Modula-2 on my Atari ST. But it was difficult to do much in it because so much of the OS documentation and example code was geared towards C. Also compiling on a floppy based system (couldn't afford a hard disk) was terrible.


I had to do lots of transfers with floppies but never had to do iterative development on one. I feel for you, bro. :)


But quite a few years later than Algol and PL variants.


The irony, too, is that Ritchie, Thompson, Pike et al at the Unix labs were later enamoured of Modula-2 and Oberon and used the ideas to build plan9, but in a new version of C.


The Wikipedia article says that, when designing Google's language, all three of them had to agree on every single feature so no "extraneous garbage" crept in. The C developers' dream language was basically an updated Oberon. That's them dropping the QED on C supporters, for me. :)

Funny thing is, the Oberon family was used to build both apps and whole operating systems, whereas Thompson et al.'s version is merely an application language lambasted in comparisons with system languages. I don't know if they built Oberon up thanks to tooling and such, or if they've dropped back down a notch from Component Pascal since it's less versatile. Just can't decide.

Note: Imagine where we'd be if they figured that shit out early on like the others did. We'd be arguing why an ALGOL68 language wasn't good enough vs ML, Eiffel DbC, LISP's macros, and so on. The eventual compromise would've been better than C, Go, or ALGOL68. Maybe.


I still use Acme, a plan9 text editor that was based on Oberon and it is the best software ever. Just the notion that "all text is a potential action" blows all other usability notions out of the water.

Want a drop down menu ? : make a file in the current directory with the name of the menu, put the commands in it you want in the menu. Done. And that can go as deep as you like.

Those commands can include snippets of shell code that act on the current document e.g. | sed s/[a-z][A-Z]/g

Highlight some text, middle click that command and the command runs on the selected text.

Add to that a file system for the text editor :

date > /n/acme/new

execute that and you get a new document containing the date

When using it, it feels limitless. You build a command set that matches your current project. There's a pattern matcher too: the plumber, so you can click somefile.c:74 and the editor opens that file at that line. so "grep -n someregex ???.c" and you get a list of clickable "links" to those files.

When you browse the plan9 source code you might see the unusual

    void
    main(args)    /* return type on its own line, so the function name starts in column 0 */
we follow that convention so one can 'grep -n ^functionname ???.c' and get a source code browser

I'll stop there, and I've only scratched the surface

??? means * - you can't type * . c without spaces in HN !


That's pretty wild. I might have to try it some time. The Oberon interface was certainly interesting. More interesting was that hyperlinked documents became the default way of doing apps that replaced native ones in many places. Something Oberon did before that they rejected. ;)

Nonetheless, I'm hesitant to put an execution system in a text editor. One of my favorite aspects of editors is that they load, edit, and/or render data. The data stays data. That means throwing one in a sandbox was always the safest route to inspect files I wasn't sure about. An Oberon-style text editor that links in other apps' functionality might be a nightmare to try to protect. It's why I rejected that style of GUI in the past.


Composability, it's what makes Unix Unix. Doug McIlroy and his pipes, that's what makes it powerful in the hands of a skilled operator. Your system grows with you.

Possibly a nightmare for someone else to reason about, so I accept that aspect. But every long time plan9 user I know (and that's about 20 I know by name to their face + more I meet at conferences) finds going back to plain old unix a retrograde step. It's like going back to a 16" b&w tv.


*.c


Have you ever actually tried to write code in Modula-2?

Has there ever been an efficient, portable implementation of Cyclone or Popcorn? Does anyone seriously consider either to be easy to implement?

And, if you just randomly picked three examples, why are they all bad? What does that mean about the other examples, statistically?


"Have you ever actually tried to write code in Modula-2?"

Three people wrote a consistent, safer platform in it from OS to compiler to apps in 2 years. Amateurs repeatedly did stuff like that with it and its successors like Oberon for another decade or two. How long did the first UNIX take to get written and reliable in C?

"Has there ever been an efficient, portable implementation of Cyclone or Popcorn? "

I could've said that about C in its early years given it wasn't intended to be portable. Turns out that stripping almost every good feature out of a language until it's a few primitives makes it portable by default.

"And, if you just randomly picked three examples, why are they all bad? What does that mean about the other examples, statistically?"

It means that, statistically, the other examples will have most of C's efficiency with little to none of its weaknesses. People using them will get hacked or lose work less often. Like with all the safe systems languages that people turned down in favor of C. Least important, it means I picked the examples closest to C in features and efficiency because C developers' culture rejects anything too different. That includes better options I often reference.

Why is that bad for their ability to produce robust programs? And what does that mean about C developers statistically?


Here's what's wrong with your whole line of reasoning. People wrote an OS in Modula-2? That's great. How many of them wrote an OS that people wanted to use? That's a considerably harder task, you know. It's not as if all OSes are equivalent and the one with the most publicity or corporate support simply wins.

In particular, I assert that Unix was considerably bigger and harder to write than those OSes written with Modula-2, in the same way that Linux quickly became much bigger than Minix. That "bigger" and "harder" isn't the result of bad technique or bad languages - it's the result of making something that actually does enough that people care to use it.

Next assertion: C makes some parts easy that are hard in Modula-2. That makes it more likely that the harder parts actually get written, that is, that the OS becomes usable instead of just a toy. (True, C also makes it easier to write certain kinds of bugs. In practice, though, the real, usable stuff gets written in C, and not in Modula-2. Why? It's not just because everybody's too stupid to see through the existing trends and switch to a real language. It's because, after all the tradeoffs are weighed, C makes it easier to do real work, warts and all.)


I dunno, I sure as hell never asked for Unix. That was a decision made for me by AT&T and its monopoly a decade before I was born.

As for why "the real, usable stuff" gets written in C? Because C is the only first class programming language in the operating system.

And of course 20 years ago "the real, usable stuff" was actually written in assembly because C was slow and inefficient. Or Fortran if you needed to do any math more complicated than middle school algebra.


Well, the "decision" was actually the consent decree in the AT&T antitrust case, which said that AT&T couldn't go into the software business. (Note well that AT&T did not have any kind of a monopoly on computer operating systems - not then, and not ever.)

The result, though, was that Unix became available for the cost of distribution plus porting, and it was portable. It was the easy path for a ready-for-real-work non-toy operating system on new hardware.


As with scythe, you're ignoring the greater point to focus on tactics that it makes irrelevant. C started with stuff that was literally just what would compile on an EDSAC. It wasn't good on about any metric. They couldn't even write UNIX in it. You know, software that people wanted to use. So, seeing good features of easy compilation and raw performance, they decided to invest in that language to improve its deficiencies until it could get the job done. Now, what would UNIX look like if they subsetted and streamlined a language like ALGOL68 or Modula-2 that was actually designed to get real shit done and robustly?

It's programming, science, all of it. You identify the key goals. A good chunk of it was already known, as Burroughs kept implementing it in their OS's. Their sales numbers indicated people wanted to use those enough they paid millions for them. ;) Once you have goals, you derive the language, tools, whatever to achieve those goals. Thompson failed to do this unless he only cared about easy compilation and raw speed at the expense of everything else. Meanwhile, Burroughs, Wirth, Hansen, the Ada people, Eiffel later... all sorts of people did come up with decent, balanced solutions. Some, like the Oberons or Component Pascal, were very efficient, easy to compile, easy to read, stopped many problems, and allowed low-level stuff where needed. That came straight from the strengths of the designs they imitated. A form of that would have been easy to pull off on a PDP, as Hansen showed in an extreme way.

C's problems, which contributed to the UNIX Haters Handbook and many data losses, came straight from the lack of design in its predecessors, which solely existed to work on shit hardware. They tweaked that to work on other shit hardware. They wrote an OS in it. Hardware got better but the language's key problems remained. Whether we use it or not, we don't have to pretend those effects were necessary or good design decisions. Compare ALGOL68 or Oberon to BCPL to see which looks more thought out if you're still doubting.


> They couldn't even write UNIX in it.

Are you referring to C here, or to B? If C, I'd like to see your source. If B, that's a highly misleading statement. You're attributing to C failings that belong to another language, failings that C was designed to fix.

> Now, what would UNIX look like if they subsetted and streamlined a language like ALGOL68 or Modula-2 that was actually designed to get real shit done and robustly?

But they weren't. I mean, yes, those languages were designed to get stuff done, and robustly, but in practice they were much worse at actually getting stuff done than C was, especially at the level of writing an OS. Sure, you can try to do that with Algol. It's like picking your nose with boxing gloves on, though. (The robust part I will give you.)

> Their sales numbers indicated people wanted to use those enough they paid millions for them. ;)

But, see, this perfect, wonderful thing that I can't afford is not better than this piece of crap that crashes once in a while, but that I can actually afford to buy. So what actually led to widely-used computers was C and Unix, and then assembly and CP/M, and assembly and DOS.

> Once you have goals, you derive the language, tools, whatever to achieve those goals. Thompson failed to do this unless he only cared about easy compilation and raw speed at the expense of everything else.

Nope. You don't understand his goals, though, because they aren't the same as your own. So you assume (rather arrogantly) that he was either stupid or incompetent. (The computing history that you're so fond of pointing to could help you here.)


"Are you referring to C here, or to B? If C, I'd like to see your source. If B, that's a highly misleading statement. You're attributing to C failings that belong to another language, and which C was designed to fix those failings."

My side of this discussion keeps saying C's design is bad because it avoided good attributes of (insert prior art here) and has no better design because its foundations effectively weren't designed. The counters, from two or three of you, have been that the specific instances of the prior art were unsuitable for the project due to specific flaws, some you mention and some not. I countered that error by pointing out C's prior art, BCPL to B to original C, had its own flaws. Rather than throw it out entirely, as you all are saying for C alternatives, they just fixed the flaws of its predecessors to turn them into what they needed. The same thing we're saying they should've done with the alternatives.

So, on one hand, you talk as if we had to use Modula-2 and the others as-is or not at all. Then, on the other, you justify that prior work had to be modified to become something usable. It's a double standard that isn't justified. If they could modify and improve the BCPL family, they could've done it with the others. The results would've been better.

"The robust part I will give you."

As I've given you speed, ease of porting, and working best in HW constraints. At least we're both trying to be fair here. :)

"but that I can actually afford to buy. So what actually led to widely-used computers was C and Unix, and then assembly and CP/M, and assembly and DOS."

It did lead to PL/M, which CP/M was written in. And to the Ceres workstations that ETH Zurich used in production. And the A2 Oberon system that I found quite useful and faster than Linux in recent tests despite almost no optimization. It took almost no labor compared to UNIX and its basic tools. I imagine data, memory, and interface checks in a micro-ALGOL would've done them well, too.

"Nope. You don't understand his goals, though, because they aren't the same as your own. "

That's possible. I think it's more likely they were very similar to my own as security and robustness were a later focus. I started wanting a reliable, fast, hacker-friendly OS and language for awesome programming results. Started noticing other languages and platforms with a tiny fraction of the investment killed UNIX/C in various metrics or capabilities with various tradeoffs. Started exploring while pursuing INFOSEC & high assurance systems independently of that. Eventually saw connection between how things were expressed and what results came from them. Found empirical evidence in papers and field backing some of that. Ideas you see here regularly started emerging and solidifying.

No, I think he's a very smart guy who made many solid achievements and contributions to IT via his MULTICS, UNIX, and Plan 9 work. UNIX has its own beauty in many ways. C a little, too. Especially, when I look at them as an adaptation to survive in specific constraints (eg PDP's) using specific tech (eg BCPL, MULTICS) he learned before. Thing is, my mental view of history doesn't begin or end at that moment. So, I can detach myself to see what foolish choices in specific areas were made by a smart guy without really thinking negative of him outside of that. And remember that we're focusing on those specific topics right now. Makes it appear I'm 100% anti-Thompson, anti-C, or anti-UNIX rather than against them in certain contexts or conditions while thinking better approaches were immediately apparent but ignored.

"The computing history that you're so fond of pointing to could help you here."

I've looked at it. A ton of systems were more secure or robust at the language level before INFOSEC was a big consideration. A number of creations like QNX and MINIX 3 achieved low-fault status fast while UNIX took forever due to bad architecture. Oberon Systems were more consistent, easier to understand, faster to compile, and eventually included a GC. NextStep & SGI taught it lessons for desktops and graphics. BeOS, like Concurrent Pascal before it, built into the OS a consistent, good way of handling concurrency to get great performance in that area. System/38 was more future-proof plus object-driven. VMS beat it for cross-language design, clustering, and the right functions in the OS (e.g. distributed locking). LISP machines were more hacker-friendly, with easy modifications & inspections even to running software, with the same language from apps to OS. And so on.

The prior history gave them stuff to work with to do better. Hence, me accusing them. Most of the above are lessons learned over time building on aspects of prior history plus just being clever that show what would've happened if they made different decisions. If not before, at least after the techs showed superiority we should've seen more imitation than we did. Instead, almost outright rejection of all that with entrenched dedication to UNIX style, bad design elements, and C language. That's cultural, not technical, decision-making that led to all related problems.


> My side of this discussion keeps saying C's design is bad because it avoided good attributes of (insert prior art here) and has no better design because its foundations effectively weren't designed. The counters, from two or three of you, have been that the specific instances of the prior art were unsuitable for the project due to specific flaws, some you mention and some not.

No, my counter in the specific bit that you are replying to here is that your history is wrong. Specifically, you said that C was initially so bad that they couldn't even write Unix in it. That statement is historically false - except if you're calling BCPL and B as "part of C" in some sense, which, given your further comments, makes at least some sense, though I still think it's wrong.

I'm not familiar enough with Modula or Oberon to comment intelligently on them. My reference point is Pascal, which I have actually used professionally for low-level work. I'm presuming that Modula and Oberon and that "type" of languages are similar (perhaps somewhat like you lumping BCPL and C together). But I found it miserable to use such a language. It can protect you from making mistakes, but it gets in your way even when you're not making mistakes. I would guess that I could write the same code 50% to 100% faster in C than in Pascal. (Also, the short-circuit logical operators in C were vastly superior to anything Pascal had).

So that's anecdote rather than data, but it's the direction of my argument - that the "protect you from doing anything wrong" approach is mistaken as an overall direction. It doesn't need for later practitioners to re-discover it, it needs to die in a fire...

... until you're trying to build something secure, or safety-critical, and then, while painful to use, it still may be the right answer.

And I'm sure you could at least argue that writing an OS or a network-facing application is (at least now) a security situation.

My position, though, is that these "safety-first" languages make everything slower and more expensive to write. There are places where that's appropriate, but if they had been used - if C hadn't won - we would be, I estimate, ten years further behind today in terms of what software had already been written, and in terms of general availability of computers to the population. The price of that has been 40 years of crashes and fighting against security issues. But I can't say that it was the wrong choice.


" Specifically, you said that C was initially so bad that they couldn't even write Unix in it. That statement is historically false "

Well, if you watched the Vimeo video, he looks at early references and compares side-by-side C with its ancestors. A lot of early C is about the same as BCPL & its squeezed version B. The first paper acted like they created C philosophy and design out of thin air based on B w/ no mention of BCPL. Already linked to it in another comment. Fortunately for you, I found my original source for the failed C attempt at UNIX which doesn't require a video & side-steps the BCPL/B issues:

https://www.bell-labs.com/usr/dmr/www/chist.html

You'll see in that description that the B -> Standard C transition took many intermediate forms. There were several versions of C before the final one. They were simultaneously writing UNIX in assembly, improving their BCPL variant, and trying to write UNIX in intermediate languages derived from it. They kept failing to do so. Ritchie specifically mentions an "embryonic" and "neonatal" C followed by this key statement:

"The language and compiler were strong enough to permit us to rewrite the Unix kernel for the PDP-11 in C during the summer of that year. (Thompson had made a brief attempt to produce a system coded in an early version of C—before structures—in 1972, but gave up the effort.)" (Ritchie)

So, it's a historical fact that there were several versions of C, Thompson failed to rewrite UNIX in at least one, and adding structs let them complete the rewrite. That's ignoring BCPL and B entirely. That they just produced a complete C magically from BCPL or B and then wrote UNIX is part of C proponents' revisionist history. Reality is they iterated it with numerous failures. Which is normal for science/engineering and not one of my gripes with C. Just got to keep them honest. ;)

" I would guess that I could write the same code 50% to 100% faster in C than in Pascal. (Also, the short-circuit logical operators in C were vastly superior to anything Pascal had)."

Hmm. You may have hit sore spots in the language with your projects, or maybe it was just Pascal. Ada would've been worse. ;) Languages like Modula-3, Component Pascal, and recently Go [but not Ada] are usually faster to code in than C. The reasons that keep turning up are straightforward: designs that compile fast to maximize flow; default type-safety reduces hard-to-debug problems in modules; often fewer interface-level problems across modules or during integration of 3rd-party libraries. This is why what little empirical work I've read comparing C, C++, and Ada kept showing C behind in productivity & with 2x the defects. As far as low-level goes, the common trick was wrapping unsafe stuff in a module behind safe, simple interfaces. Then, use it as usual but be careful.

". until you're trying to build something secure, or safety-critical, and then, while painful to use, it still may be the right answer."

Not really. Legacy software is the counterpoint: much stuff people build sticks around to become a maintenance problem. These languages are easier to maintain due to type protections countering common issues in maintenance mode. Ada is strongest there. The simpler ones are between Ada and C in catching issues but allow rapid prototyping due to less debugging and fast compiles. So, reasons exist to use them outside safety-critical.

"My position, though, is that these "safety-first" languages make everything slower and more expensive to write. "

In mine, they're faster and less expensive to write but more expensive to run at same speed if that's possible at all. Different, anecdotal experiences I guess. ;)


What I think is interesting is Intel is adding bounds-checking registers to their processors. That should eliminate a lot of the issues people complain about. (Except your program's footprint will be larger due to needing to manage bounds information.)

https://gcc.gnu.org/wiki/Intel%20MPX%20support%20in%20the%20...


>Three people wrote a consistent, safer platform in it from OS to compiler to apps in 2 years.

I take it you didn't link to the OS because it was only ever a toy, correct? If it had been useful for anything, it would be worth linking to -- otherwise, you're better off if I know less about it.

https://en.wikipedia.org/wiki/Lilith_(computer)

Lilith wasn't popular because it didn't solve any problems that needed to be solved -- it was too slow to be useful as a microcomputer OS, and it wasn't designed for that anyway. Microcomputer OS's of the time were written in assembly, and assembly continued to be important until around the time of Windows 3.1. Not integrating well with assembly is an anti-feature; the fact that it would become unnecessary some 15 years later is irrelevant. C did the job that needed to be done; Modula-2 did the job that somebody thought was cool. Also, you have yet to give me a reason to believe that Lilith was somehow safer or better-designed than UNIX of the time, considering that security wasn't even really a thing in 1980.

That's not to mention it's [Pascal family] just poorly designed from a usability standpoint, with the language creators doing silly things like removing "for" loops from Oberon (a decision which both Clojure and Rust eventually admitted was bad). Lilith itself was "succeeded" by an OS that was written in an entirely different language and made no attempt to be backwards compatible (but portability is a red herring amirite?).

>I could've said that about C in its early years given it wasn't intended to be portable. Turns out that stripping almost every good feature out of a language until it's a few primitives makes it portable by default.

I guess it's not a big surprise then that C wasn't popular in its early years? The implementations of Cyclone and Popcorn were never even complete enough to write software on normal operating systems, much less to write UNIX.

>It means that, statistically, the other examples will have most of C's efficiency with little to none of its weaknesses

It means that, statistically, they will all have other huge, glaring weaknesses that you pretend don't exist...

>Least important, it means I picked the examples closest to C in features and efficiency because C developers' culture rejects anything too different. That includes better options I often reference.

If you have better options, why didn't you name them? Because they're equally bad if you spend even a few minutes thinking about it? (Lemme guess: Limbo, SML, Ada)


" take it you didn't link".... "you have yet to give me"... "s a red herring amirite?" "you pretend don't exist...""why didn't you name them?"

The answer to all that is that my response to you was less thorough and referenced on purpose. That's due to the trolling style of your first comment that this one matches albeit with more information. A comment more about dismissal and challenge with little informational content didn't deserve a high-information response. My observation and recommendation is here though:

https://news.ycombinator.com/item?id=11448855

That is, there were specific languages in existence from PL family, Pascal tradition, ALGOL's of Burroughs, ALGOL68, and so on that balanced many attributes. A number, from safety-checks to stronger typing to interface protections, had already been proven to prevent problems or provide benefits. Thompson would've known that given MULTICS was one of systems that provided evidence of that even if it failed in other ways. He even re-introduced some problems that PL/0 prevented. Let's focus on fundamental failure, though.

So, if I were him, I'd note that the consensus of builders of high-reliability systems and CompSci researchers was that we need a language like ALGOL68 that balances various needs. I'd still need as much of that as possible on my crappy PDP. So, I'd subset a prior, safe language and then make it compile with tight integration to the hardware due to resource constraints. I might even have made similar tradeoffs to the ones they did, although not in a permanent way. If safety checks wouldn't work yet, I'd turn off as many as I needed to get it to run. As hardware improved, I could turn more on. I'd keep a well-written language definition and grammar, as others showed how to do. I might also borrow some syntax, macros, or easy parsing from LISP to make my job easier, as Kay is doing and numerous Scheme teams did at the time. Keep the language imperative and low-level, though.

Thompson had a different idea. There was a language that had no good qualities except raw performance on crap hardware. It was the opposite of its authors' design intent, to top it off. Thompson decided on it. Any gripes you have with Modula-2, Cyclone, Popcorn, etc. apply at this point because the language, especially his B variant, wasn't good enough for about any job. As I'm advising for safe languages, it would have to be modified to get stuff done. The first version of C was still crap to the point that UNIX was mostly assembly. They added structs to get information hiding/organizing and then were able to write UNIX in C. Both C and UNIX were simple enough to be portable as a side effect. The rest is history.

Almost any gripe you've had against safer languages of the time I can apply to C's predecessors, other than compilation difficulty. That you will dismiss them on grounds that they need modification for the job at hand, but don't do that for C, shows the bias and flaws of your arguments. I've called out everyone from McCarthy to Wirth to Thompson for stuff I thought were bad ideas. For-loop removal is a good example. I didn't settle against C until years of watching people's software burn over stuff that was impossible or improbable in better designed, even lean, languages. Evidence came in, even in empirical studies on real programmers: C showed up behind in every metric except raw performance, its history tells us why, and it logically follows that they should've built on better, proven foundations. Those that did got more reliable, easier to understand, easier to maintain systems.

Of course, don't just take my word for it: Thompson eventually tried to make a better UNIX with Plan 9, but more importantly a better language. He, Ritchie, and Pike all agreed on every feature in their language. That language was Go: a clone of ALGOL68 and Oberon style with modern changes and updates. Took them a long time to learn the lessons of ALGOL that came years before them.


Kronos, the Soviet 32-bit workstation, was designed and built from hardware to all compilers, OS and miscellaneous applications by a small group of students in a couple of years. The only language they used was Modula-2.

[1] https://en.wikipedia.org/wiki/Kronos_(computer)


Thanks for the link! Had no idea Russians built a Wirth-style system based on Lilith. There's almost a trend with them as they received the Oberons and especially Component Pascal better than most countries. My foray into Blackbox showed lots of Russian use.

There have to be attributes of the language and tooling that appeal to Russian programmers more than others for whatever reason. I wonder what those are, both aspects of Russian style and language that appeals.


IBM did.

After Assembly, IBM used mostly Modula-2 and PL/M to write their mainframe OSes, e.g. OS/400, migrating the code slowly to C++ afterwards.

Also OS research at DEC and Compaq was done in Modula-2+, which gave birth to Modula-3.


Those mainframes and OS/400 were also very reliable and fast compared to the UNIXen. Hmm. Probably just a coincidence the C people will tell us. ;)


I did, as a teenager. The language was nice.

The strong types did get in the way of interfacing with a lot of OS level stuff that wanted to deal in void pointers tho.


My strategy for that sort of thing was wrapping all the stuff in unsafe modules that exported safe APIs. Just do some basic checks as the function is called; then the messy stuff is handled in its own little file of scary code. Something like the sketch below.
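
A minimal C sketch of that pattern (the module name and function are hypothetical, not anything from the actual code being discussed):

    /* safe_buf.h -- the safe API other modules see */
    #include <stddef.h>
    int safe_buf_copy(char *dst, size_t dst_len, const char *src);

    /* safe_buf.c -- the scary code, kept in one small file */
    #include <stddef.h>
    #include <string.h>

    int safe_buf_copy(char *dst, size_t dst_len, const char *src)
    {
        if (dst == NULL || src == NULL || dst_len == 0)
            return -1;                   /* basic checks at the boundary */
        size_t n = strlen(src);
        if (n >= dst_len)
            return -1;                   /* refuse rather than overflow */
        memcpy(dst, src, n + 1);         /* the unchecked part, now known to be safe */
        return 0;
    }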

I was doing this in other languages way after you were dealing with Modula-2 on even older hardware. So, I'm curious as to how you and others then handled that problem? What were the tactics?


I was too young and too distracted by the world of being young to focus on it enough back then to truly sit down and do that. By the time I got my Modula-2 compiler I was 13 or 14 years old, then high school happened, and about 3 or 4 years later I had a PC running (pre-1.0) Linux, so I never had to truly deal with it.


Oh OK. That's aight.


That has no basis in reality. Smalltalk is so simple to implement that one implementation fits in the Smalltalk-80 book.

C got popular because it allowed low-level code to be implemented in something higher-level than assembler, and moderately portably too (machines back then were a lot more dissimilar than today). It's hard to disassociate C from Unix, the former making the latter easy to port. A symbiotic relationship.

It's important to remember C was designed for the processors of the time, whereas processors of today (RISC-V included) are arguably primarily machines to run C. C has brought a lot of good but also a lot of bad that we are still dealing with: unchecked integer overflows, buffer under- and overflows, and more general memory corruption. No ISA since the SPARC has even tried to offer support for non-C semantics.


> C got popular because it allowed low-level code to be implemented in something higher-level than assembler, and moderately portably too (machines back then were a lot more dissimilar than today). It's hard to disassociate C from Unix, the former making the latter easy to port. A symbiotic relationship.

You mean like Burroughs was doing with Extended Algol in 1961 for its Burroughs B5500 system, implemented with zero lines of Assembly, while offering all the features for memory safety that C designers deemed too hard to implement?

Archeology of systems programming languages is great for demystifying the magic aura C seems to have gained in the last 20 years with the rise of FOSS.


"Extended Algol"? So, not Algol, then. Algol with the parts that would be in assembly language turned into libraries?

And FullyFunctional's point was largely about portability. So, if Burroughs took that OS and tried to port it to, say, a PDP-11, how well do you think that would have worked?


Just like it is impossible to have an ANSI C library without Assembly or language extensions. Extended Algol had what we would call intrinsics nowadays.

PDP-11 was more powerful than Burroughs systems.


> Just like it is impossible to have an ANSI C library without Assembly or language extensions. Extended Algol had what we would call intrinsics nowadays.

Sure. It makes your "zero lines of assembly" claim somewhat less impressive, though...


Intrinsics aren't the same as Assembly.

Intrinsics semantics can be validated by the compiler.

https://en.wikipedia.org/wiki/Burroughs_large_systems#ESPOL_...

"In fact, all unsafe constructs are rejected by the NEWP compiler unless a block is specifically marked to allow those instructions. Such marking of blocks provide a multi-level protection mechanism."

"NEWP programs that contain unsafe constructs are initially non-executable. The security administrator of a system is able to "bless" such programs and make them executable, but normal users are not able to do this."

"NEWP has a number of facilities to enable large-scale software projects, such as the operating system, including named interfaces (functions and data), groups of interfaces, modules, and super-modules. Modules group data and functions together, allowing easy access to the data as global within the module. Interfaces allow a module to import and export functions and data. Super-modules allow modules to be grouped."

Sound familiar? Rust-like safety in 1961, but let's praise C instead.


unchecked integer overflows

This is my one big beef with the RISC-V ISA after going over it with a fine-toothed comb; otherwise it's brilliant to my untutored eyes. The ISA doc, plus a paper I read around the same time about how difficult it is to make ISAs super fast, helped explain why, e.g., the VAX ISA took so long to make faster and was probably doomed, while Intel got lucky(?) with x86.

Anyway, the large-integer-math people, e.g. the crypto people, are not happy about it: the lack of support for anything other than being able to check for divide by zero means their code will be slow compared to older ISAs that do support it. And the justification in the ISA doc is that the most popular languages nowadays don't check for this, which is not the sort of thing I want to see from an ISA that's billing itself as a big thing for the future. I very much got Worse Is Better/New Jersey vibes from this...
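
To make the cost concrete, here is a sketch in plain C (my own example, not from the ISA doc) of the carry detection that multi-precision code does on every limb; on an ISA with a carry flag this is nearly free, while on RISC-V the usual idiom is an extra compare-and-branch after the add:

    #include <stdint.h>
    #include <stdbool.h>

    /* Unsigned add with carry-out, the inner step of bignum addition.
       The "*sum < a" comparison is exactly the extra work a flag-less
       ISA has to spend per limb. */
    bool add_with_carry(uint64_t a, uint64_t b, uint64_t *sum)
    {
        *sum = a + b;        /* wraps modulo 2^64 */
        return *sum < a;     /* true exactly when the addition carried out */
    }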

Although this discussion a couple of weeks ago implies it's getting some consideration: https://groups.google.com/a/groups.riscv.org/forum/#!searchi...


Intel didn't get "lucky". Going superscalar (and OoO) with CISC ISAs like the VAX, IBM/360, x86, ... obviously isn't impossible (as illustrated by IBM's zNext and Intel's everything), but it's VERY difficult (thus hard to design and verify => time-to-market risks) and expensive (it takes a lot of logic and more pipeline stages). An ISA like RISC-V tries to avoid as much as _practical_ of these obstacles, thus making it easier & cheaper for implementations to be fast. This part is straight out of the DEC Alpha playbook.

Re. large integers: the RISC-V 2.0 (non-priv) ISA is frozen solid so any change would have to be an extension. I've been gently pushing for the mythical vector extension to be at least friendly to large integers.


" C has brought a lot of good"

I've seen no evidence that C brought us anything good. Anything I can do with C I can do with a Modula-2-like language with better safety and compiler robustness. If a check is a problem, I can disable it but it's there by default. Nonetheless, I'll look into any benefits you think C brought us that alternatives wouldn't have done.


The alternative to C for systems programming in the '70s was not Modula-2 or anything like it, the only realistic alternative for most production systems was writing your kernel code in assembly. C allowed the industry to finally stop defaulting to assembly for systems programming. That was a very good thing, regardless of how old and inappropriate C might be for such purposes nowadays. The comment you were responding to reads to me like someone providing historical context, not an expression of an ideal.


That is an urban legend perpetrated by C fans rewriting history.

Extended Algol was available in 1961.

PL/I and PL/M variants are older than C.

Mesa was already being used at Xerox PARC when C was born.

There are quite a few other examples that anyone that bothers to read SIGPLAN papers and other archives can easily find out.


  > Extended Algol was available in 1961.
What extension was that? Burroughs' ESPOL (the notable use of an Algol extension for system programming, on a processor designed to be an Algol target) had what we would now call inline assembly.

  > Mesa was already being used at Xerox PARC when C was born.
Close; C ~ 1972 (started 1969), Mesa ~ 1976 (started 1971). The Alto system software was mostly BCPL; Mesa arrived with the commercial D series. Mesa also targeted a virtual instruction set (i.e. like Java or p-code) designed specifically for it, run by an interpreter on the bare metal.


ALGOL68: The language Thompson, Ritchie, and Pike basically cloned when they made Go. Nothing went in there unless everyone agreed on it. The final result was every feature they had wanted in a language. Most of which was already in ALGOL68, with key aspects in use by Burroughs since their 1961 designs. Had they just subsetted or simplified ALGOL68, they'd have had quite the head start and we'd be stuck with a language that's a lot better than C. I imagine the implementation would've been more suitable for OS development, too. ;)


Burroughs had Extended Algol and ESPOL, yes.

It still doesn't invalidate the fact that it had more memory safety features than C, one decade earlier.

Regarding Mesa that is like calling x86/x64 firmware an interpreter for Intel bytecode.

Most literature in those days used bytecode to refer to Assembly processed by CPU microcode.


The point was that those machines had the benefit of instruction sets co-designed with the language. (And Mesa had strong default type safety, but basically the same memory model as C, allowing pointer arithmetic, null pointers, and dangling pointers.)


Of course Mesa had pointer arithmetic, null and dangling pointers.

Any systems programming language has them.

However there is a difference between having them as an addition to strong type features and them being the only way to program in the language.

For example, using arrays and strings didn't require pointer arithmetic.

In C all the unsafe features are in the face of the programmer. There is no way to avoid them.

In Mesa and other similar languages, those unsafe are there, but programmers only need to use them in a few cases.

C was designed with the PDP-11 instruction set in mind. For many years now, the C machine model has no longer mapped to the hardware the way many think it does.


One of the things about C was: American Telephone and Telegraph Company was a regulated monopoly and prohibited from selling anything outside of the telephony business. So they developed C and Unix but couldn't sell either commercially. However, they could allow universities to use both for free. The deal with C was that one grad student could write a cross-compiler in about 300 hours. And then you could use that to cross-compile Unix and the compiler.

End result a lot of CS students learned C in school.

Other languages tended to be tied to a particular machine, cost money, or weren't really suitable (Fortran/p-system Pascal) for the stuff people wanted to do.


"The alternative to C for systems programming in the '70s was not Modula-2 or anything like it"

The main alternatives were ALGOL60, ALGOL68, and Pascal. They were each designed by people who knew what they were doing in balancing many goals. They, especially the ALGOLs, achieved a good chunk of most of them. A subset and/or modification of one, with gradual progression as hardware improved, would've led to better results than C with the same amount of labor invested. On the low end, Pascal ended up being ported to systems from 8-bitters to mainframes. On the high end, Burroughs implemented their mainframe OS in an ALGOL, with hardware that enforced its safety for arrays, stacks, and function calls.

In the 80's, things like Modula-2, Concurrent Pascal, Oberon, and Ada showed up. I'll understand if they avoided Ada given constraints at the time but the others were safer than C and quite efficient. More importantly, their authors could've used your argument back then as most people were doing but decided to build on prior work with better design. They got better results out of it, too, with many doing a lot of reliable code with very little manpower. Hansen topped it off by implementing a barebones, Wirth-like language and system called Edison on the same PDP that C was born on.

"The comment you were responding to reads to me like someone providing historical context, not an expression of an ideal."

The historical context in a related comment was wrong, though. It almost always is with C because people don't know its true history. Revisionism took over. The power that revision has in locking people into C is why I always counter it with actual evidence from the inventors of the concepts. Two quick questions to illustrate why I do that and test whether it's sensible:

1. When hearing "the programmer is in control," did you think that philosophy was invented by Thompson for C or it was stolen without early credit from BCPL along with its other foundations?

2. Did you know that those were not designed or validated as good languages at all? And that they were grudgingly accepted just to get CPL, an ALGOL60 variant, to compile on crappy hardware (whose problems no longer exist) by chopping off its most important attributes?

If you knew 1 and 2, would you still think C was a "well-designed" language for systems programming? Or a failed implementation of ALGOL60 whose designers were too limited by hardware? If No 1, we should emulate the heck out of C. If No 2, we should emulate an ALGOL60-like language that balances readability, efficiency, safety, and programming in the large. Btw, all the modern languages people think are productive and safer lean more toward ALGOL than C, albeit sometimes using C-like syntax for uptake. Might be corroboration of what I'm saying.


I actually mostly agree with you. My point was poorly made. I didn't mean that C was particularly great, only that it "won" and having a winner meant more portable code.

I'll back out of the PL debate; this isn't really the place and there's too much to be said on the topic than will fit in this margin.


"only that it "won" and having a winner meant more portable code."

That's generally true so long as the winner is open. C was. Now, it runs on everything.


  > [...] or it was stolen without early credit from BCPL along with its other foundations?
That's simply not correct. The first public paper on C (The C Programming Language in BSTJ #57, the Unix issue) spent the first 4-page section on its BCPL ancestry.


It's a contested claim. However, here's the first document and reference on the C language:

https://www.bell-labs.com/usr/dmr/www/cman.pdf

Show me where it references BCPL. As I see it, Ritchie only references B as the inspiration for C. From the document, anyone who hadn't read the B paper (most people) would think they, Thompson, and/or Ritchie mostly came up with C's features and strengths. They eventually became known as the "C philosophy" and C-like languages.

Whereas, it was Martin Richards that invented both of those plus the idea of stripped down languages for writing tools on minimal hardware. He should get credit for creating the "BCPL philosophy" that the "programmer is in control" plus inspiring a BCPL-like language called C that allowed UNIX's success. Instead, early work focused all attention on Thompson and Ritchie with UNIX papers/software making sure that was the case by adding momentum.

At least I got to see the guy in action in the video linked to my History of C page. It shows his team, the constraints they worked on, why they dumped most of ALGOL, and him working with it later on. That presentation put things into an honest perspective where I can see his and UNIX founders' contributions to overall success with due credits (and/or critiques) to each.


That's laughably untrue. In fact, in the 70s, most system programming was done in anything except C. Off the top of my head:

IBM: assembler & PL/S
Unisys: Algol
GE/MIT (Multics): PL/I(-ish)
Control Data: Cybil (Pascal-ish) & assembler
Intel and Digital Research: PL/M
DEC: MACRO-11 & BLISS
DG: assembly & PL/I
Xerox: assembly, Smalltalk, Mesa/Cedar
PERQ: Pascal


It's really not a question or theory that C brought the computer industry a huge amount of value, more than any other single language.

Look at all the C applications, the C ABI that every other language exports for FFI, all the operating systems, big and small.

While I enjoy programming in others more, C is the single greatest, most versatile language. All the languages I like can call C a common ancestor.

Are there better options today? Maybe. Is there a question that C deserves a huge amount of credit? Not at all.


No, investment in compilers for a language brought that. There were numerous languages at various points in our history that could've received that compiler work. C's design-level issues actually make it harder to write optimizing compilers for it that don't break the correctness. So, my claim is that it's the hardware, software, and groups using it that brought C's success (and compiler investments) along rather than the language itself.


While everything you say is possibly true, the original statement was that C didn't bring anything good. I believe the evidence of its usage disagrees with that statement.

Also, on optimization, I don't know exactly what you mean by that. I assume you mean because it's got a lot of undefined behavior. There is so much optimization going on in tools like LLVM that I question its accuracy.

Anyway, I agree that there are more pleasurable and portable languages (and now just as performant/lightweight with Rust).


There are numerous barriers to optimization that mostly fall into the categories of a) too many operations break continuity of typing and data flow, and b) many small, independent compilation units.

For example: Pointers, casts, unions, casting pointers to different types, and doing all of this across procedure call boundaries all challenge/break data flow analysis. You can pass a pointer to a long into a procedure, and the compiler has no idea what other compilation unit you are linking to satisfy the external call. You could well be casting that pointer-to-long to something totally crazy. Or not even looking at it.
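
A hedged sketch of that situation (frob is a made-up name for the external routine): within this translation unit the compiler cannot tell what the call does to the pointed-to value, so little can be assumed about n afterwards.

    extern void frob(long *p);   /* defined in some other compilation unit */

    long example(void)
    {
        long n = 42;
        frob(&n);         /* opaque call: n may have been rewritten, cast, or stashed */
        return n * 2;     /* cannot be folded to 84 without whole-program knowledge */
    }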

I was associated with a compiler group experimenting with whole-program optimization -- doing optimization after a linking phase. This has huge benefits because now you can chase dataflow across procedure calls, which enables more aggressive procedure inlining, which enables constant folding across procedure calls on a per-call-site basis. You might also enable loop-jamming across procedure calls, etc. C really is an unholy mess as far as trying to implement modern optimizations. The compiler simply can't view enough of the program at once in order to propagate analysis sufficiently.


Do you know if anybody in CompSci or industry has put together a paper that has guidelines on this topic? Specifically, what elements of language design make it easier or more difficult to apply various optimization strategies. Just a whole collection of these for language designers to factor into their design to make compiler writers' job easy as possible.


Unfortunately, I do not, other than reading relevant conference proceedings, and sharing a cubicle with the right person. None of this is in the Dragon Book, that much is certainly true.


Imagine a world where there are three languages of exactly-equal merit. X, Y, and Z. Y wins based on luck and network effects, and amazing things are done with it. Did Y bring us anything good? Not when X was there first. X would have worked fine to power those uses.

Now imagine Y was slightly worse than X, based on some magic objective measure. Even though it's the backbone of all these amazing things, Y has actually had a negative impact on the world. We could all have computers that are equally easy to program and use but slightly less buggy, if only that effort has focused on X instead.

So when you say there's no question that C brought a huge amount of value based on usage stats, you're wrong. It can definitely be questioned.


That's basically how I see it. I can even trace many CVE's and crashes to C's safety issues. Currently, CompSci people are building reliable and secure HW for us but existing C code is difficult to integrate. Had it been Oberon or something, it would be a lot easier as the developers don't meddle directly with bits and memory as much.


Compared to other low-level languages, C is a difficult language to optimise because of the aliasing rules and the incredibly vague/flexible machine model that allows it to be implemented for nearly any CPU and allows the programmer to break many assumptions that compiler writers would like to make.

All the undefined behaviour in the C spec enables optimisations, yes, but it would be so much better if those optimisations didn't require a lot of mental labour by both compiler writers and programmers.
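
A minimal sketch of the aliasing point (my own example, not from the comment above): with plain pointers the compiler must assume the two writes may hit the same object, so it reloads; restrict is the programmer's promise that they don't.

    /* Without restrict, *y may alias *x, so *x must be reloaded before returning. */
    long may_alias(long *x, long *y)
    {
        *x = 1;
        *y = 2;
        return *x;    /* could be 1 or 2 */
    }

    /* With restrict (C99), the compiler may assume no overlap and just return 1. */
    long no_alias(long *restrict x, long *restrict y)
    {
        *x = 1;
        *y = 2;
        return *x;
    }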



Syntax that doesn't require you to type out "begin" and "end" for scoping, or ":=" when you mean "=" for assignment, is the first thing that comes to mind.


Begin and end annoy me a bit too because they're harder to write. Yet we learned years later that code is read more than it is written. Now all the advice in the world is to optimize code for reading, maintenance, and extension. Kind of like what the non-C languages were doing, eh? Shorthand could still be useful, as we pick up little things like that easily without the need for verbose syntax.

Now, := is another situation. Let me ask you: what did = mean before you learned programming? I'm guessing, as math teaches, it meant two things were equal. So making = the conditional check for equality is intuitive and reinforced by years of use. A colon is used in English for something similar, where you assign one thing a more specific meaning. So := for assignment and = for equality makes sense, even if a better syntax might exist for the := part.

What makes no sense is making the intuitive operator for equality (=) into assignment and then turning equality into ==. We know it didn't make sense because Thompson did it when creating B from BCPL. His reason: personal preference. (sighs) At worst, if I were him, I'd have done == for assignment and = for equality to capitalize on intuition.
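
A tiny, self-contained illustration of the hazard that choice invites (my own example): with = as assignment, a mistyped comparison still compiles, where a Pascal-style language would reject := in a condition outright.

    #include <stdio.h>

    int main(void)
    {
        int x = 5;
        if (x = 0)                      /* assigns 0; the condition is false */
            printf("never printed\n");
        if (x == 0)                     /* the comparison that was meant */
            printf("x is now zero\n");
        return 0;
    }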


> ...I'd have done == for assignment...

It makes much more sense if you consider it from a logic perspective. '=' is material implication (→), and '==' is material biconditional (↔) - which is logically equivalent to (a → b) ∧ (b → a)... so two equal signs (the logical conjunction is redundant so you drop it). Computer science favors logic, so I'd be pretty surprised if it was simply personal preference.


I actually never considered that. That's a good idea. Now, hopefully I remember this if I see other Thompson papers. Use of formal logic in them or in his background might explain it.


To match math usage, a plain = should be used when the programmer wants to state that two things are in fact equal. In debug builds, it would serve as an assertion. In release builds, it would be an optimization hint.

That isn't either assignment or equality testing.


"To state two things are in fact equal"

"Isnt... equality tedting"

Contradicted yourself there. The equals is used in conditionals with other logical operators to assert equality is true or not. That's a legit use for it.


It could be something more like what Visual Studio's compiler does with the __assume keyword.

https://msdn.microsoft.com/en-us/library/1b3fsfxw.aspx

You can't use that for equality testing at all. If the assumption is wrong, the compiler will generate bad code.
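
A hedged sketch of what that hint looks like in practice (__assume is MSVC-specific; the GCC/Clang fallback below is built from __builtin_unreachable()). If the stated condition is ever false, the generated code is simply wrong, which is why it can't double as an equality test.

    #if defined(_MSC_VER)
    #  define ASSUME(cond) __assume(cond)
    #else
    #  define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
    #endif

    int div_by_nonzero(int a, int b)
    {
        ASSUME(b != 0);   /* the optimizer may drop any b == 0 handling */
        return a / b;
    }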


Hmm. That's interesting. It's closer to the other commenter's definition. I like how they straight up call it "assume" for accuracy. :)

Note: This discussion of an "obvious" problem shows how deep the rabbit hole can go in our field on even the simplest thing. Kind of ridiculous and fun at the same time haha.


Why would begin/end be better for reading than "{" and "}"?


Showing programming languages of different types to laypersons can be an interesting experience. You can see just how intuitive, or not, something is on average. Most people I've shown code to knew begin was the start of an activity and end was the end of it. They were hit and miss on braces, as they couldn't be sure what they signified.

In this case, though, it would be fine to use something smaller like braces, brackets, or parentheses. The reason is that intuition picks it up quickly, since the understanding is half there already: the braces visually wrap the statements. Not a big deal to me like equality was.


What non-C semantics did SPARC chips offer? I wasn't aware of that.


Tagged add, optionally with overflow trapping; see https://en.wikipedia.org/wiki/SPARC . This was inspired by Smalltalk on a RISC and added explicitly for Lisp and Smalltalk. Also in support of this: trapping on unaligned loads.


Unaligned loads? Can you elaborate? You mean not along a 4 byte boundary?


Right. The idea being that the bottom two bits of words are tag bits. tadd dictates that they have to be 0 for integers, which means pointers have non-zero tags. Let's say the pointer tag is 1. That means a load of a word from an object has to compensate for the tag, e.g.

   car: ld r2, [r3-1]
        ret
        nop
If it happens that r3 _wasn't_ an object pointer, that load above would not be word aligned and thus trap. In other words, the hardware does the tag checking for you. (As an aside, the 486 introduced the option for making unaligned access trap, but AFAIK, no one has ever used it because you'd have to fix all the alignment assumptions in library code).
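A rough C rendering of the same trick, assuming a pointer tag of 1 in the low bits (the names here are made up):

    #include <stdint.h>
    #define CONS_TAG 1u   /* assumed tag value for object pointers */
    /* The real address is obj - CONS_TAG. On hardware that traps on
       unaligned word loads, this load faults automatically whenever obj
       carries the wrong tag, because the address ends up misaligned. */
    uintptr_t car(uintptr_t obj)
    {
        return *(const uintptr_t *)(obj - CONS_TAG);
    }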


But unaligned load is very important to a lot of code like LZ compression.


Thanks for the explanation.


Loads/stores not "naturally" aligned: 4-byte boundaries for 4-byte data, 8-byte boundaries for 8-byte data.

In the late 90s I would routinely get bus errors while running Netscape on SPARC Solaris machines, presumably due to corner-case unaligned memory accesses. x86 processors perform unaligned loads and stores, but at a slight speed penalty, and with the loss of atomicity.
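As an aside, portable code usually sidesteps the whole issue with the memcpy idiom (a small sketch):

    #include <stdint.h>
    #include <string.h>
    /* Read a 32-bit value from a possibly unaligned address. Compilers
       turn this into a single load on x86 and into safe byte accesses
       on strict-alignment targets, so no bus errors either way. */
    uint32_t read_u32(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }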


If you're at all interested in getting into hardware, buy yourself a cheap FPGA and put a RISC-V CPU on it. I'm doing it with my kids and it's been tons of fun. :)


I recommend the DE0-Nano: https://www.adafruit.com/product/451

Altera's tools are much nicer for the beginner to use than ones offered by Xilinx. This board also has a lot of peripherals you can interface with (ADC, accelerometer, etc.) to grow your skills.


Have you tried Vivado? Xilinx ISE is hands down terrible, but supposedly Vivado was "500 man-years of engineering effort" [1]. Unfortunately, vendor lock-in means I can't try Altera's tools for my boards, but Vivado's high-level synthesis is really cool. It generates a hardware design from a given C program, and then lets you tune the generated Verilog [2].

[1]: http://www.eejournal.com/archives/articles/20120501-bigdeal

[2]: http://xillybus.com/tutorials/vivado-hls-c-fpga-howto-1


Vivado is miles ahead of Quartus, especially for beginners. Altera's tools have a very steep learning curve for beginners to hardware.


I think that is mainly a matter of opinion. Unfortunately, you can't use Vivado with Xilinx's older, cheaper chips. I think beginners are more likely to use these chips than their more advanced, expensive chips. We'll have to see if the Spartan-7 is supported by Vivado or not.


The newish Arty board has a 7-series part and is only $99


I've been wanting to try it, but for some reason Xilinx refuses to offer Vivado support for the Spartan-6. They only support their more expensive chips.


I haven't checked out the high-level synthesis part, but I've been using Vivado for a project recently and it's absolutely horrible. To name some of the issues I've had:

- Memory leaks (grows from about 1GB to 11GB and starts swapping in a couple of minutes when editing existing IP)

- Single-threaded synthesis is slow (though that isn't limited to Vivado specifically)

- Failing after 20 minutes of synthesis because of errors that would be easy to check at the start

- Placing debug cores can result in needing to synthesise, find something went wrong, delete debug core, re-synthesise, re-add debug nets, synthesise again...

- Aggressive caching results in it trying to find nets which were changed and no longer exist, despite re-creating the IP in question from scratch

- Vivado creates a ridiculous amount of temporary files and is a royal pain to use with version control (there is an entire document which details the methods to use if you want to create a project to be stored under version control)

I've been playing around with IceStorm for the iCE40 device and it's an absolute joy to use: fast, stable and simple. I appreciate that there are a lot of complex tools and reports which Vivado provides, but I would much rather use an open source tool like IceStorm for synthesis alongside the advanced tools from Vivado.


What would be a use case for using one of these FPGAs rather than something like a Raspberry Pi with a traditional microcontroller?

I'm genuinely interested. I've been really curious about FPGAs but I don't know what a good use case there is for them for a hobbyist.


The use case for these boards is to learn how to use FPGAs.

Why would you use an FPGA? Mainly when you have specialized requirements that can't be met by processors. FPGAs mainly excel in parallelization. A designer can instantiate many copies of a circuit element like a processor or some dedicated hardware function and achieve higher throughput than with a normal processor. If your application doesn't require that, you might like them for the massive number of flexible I/O pins they offer.

Lastly, using FPGAs as a hobby is rewarding just like any other hobby. Contrary to popular belief, you don't "program" with a programming language like with a processor. You describe the circuit functionality in a hardware description language and a synthesizer figures out how to map it to digital logic components. You get a better insight into how digital hardware works and how digital chips are designed. When you use them as a hobby, you get the feeling that you are designing a custom chip for whatever project you are working on. Indeed, FPGAs are routinely used to prototype digital chips.


Sounds like something that might be just as enjoyable on a simulator.


No, not really. FPGAs by definition are massively parallel. There is no way you can simulate them on CPUs with any reasonable speed (think: 1 ms of CPU time to simulate one clock cycle, so your simulation maxes out at a few kHz). That sucks all the enjoyment out of it.


Ha! I often find myself wishing I had an FPGA. It's very common that I have to control external devices that would be straightforward in logic but require all sorts of hacks and tricks on a microcontroller.

Here's just one simple example: controlling servos. Sure, you can do that with most uCs simply enough, say using timer interrupts, but what if I need to control 100 of them? In logic, I can just instantiate 100 very trivial pulse controllers, whereas this typically is impossible with a microcontroller, or at the very least leaves no cycles free for any computation.

Another example: you want to implement a nice little crypto, like ChaCha20. Even though ChaCha20 is efficient, it's still a lot of cycles for a microprocessor, whereas an FPGA can implement this as a nice pipeline, potentially reaching speeds like 800 MB/s, while still having ample resources left for other work.

I could go on.


Great comment and examples. CPUs are optimized to try to do everything kind of fast, a step at a time, within specific constraints, often legacy ones. FPGAs let us build just the right hardware for our computations, using components that run in parallel with fewer constraints. The result is often some amazing performance/circuitry ratios.


Seeing the board run your program is a rare, beautiful moment where YOU, one hobbyist, designed the whole stack :).


I don't know if this is a stupid question, but could one design a very basic LISP machine on an FPGA? How about a diminutive JVM?


Why not? It was done on real hardware in the 80s, right? Hell, here's one:

http://www.aviduratas.de/lisp/lispmfpga/


It's very easy. Personally, I find Reduceron [0,1] far more interesting.

[0] https://www.cs.york.ac.uk/fp/reduceron/

[1] https://github.com/reduceron/Reduceron


I made a little FPGA LISP machine:

https://github.com/jbush001/LispMicrocontroller


A few ARM CPUs have not a JVM, but the ability to accelerate a JVM by directly executing Java bytecode (Jazelle DBX).

"The Jazelle extension uses low-level binary translation, implemented as an extra stage between the fetch and decode stages in the processor instruction pipeline. Recognised bytecodes are converted into a string of one or more native ARM instructions."


Cool, I hadn't thought about that. I probably need to get an FPGA. I really liked the book "The Elements of Computing Systems" [1][2], in which one builds a computer from NAND gates upwards, then a compiler, a VM, and finally a simple OS with applications. The hardware part of the course seems to be on Coursera now as well. [3]

[1] https://mitpress.mit.edu/books/elements-computing-systems

[2] http://www.nand2tetris.org/

[3] https://www.coursera.org/learn/build-a-computer


That's some really neat stuff that I somehow missed in prior research. Thanks for the links. I'm particularly going to have to take another look at the paper that details their methodology for building systems from the ground up: the abstraction process and the samples.


Try interfacing one of them with DRAM, at a moderate speed.

You'll learn: Pipelines, Caches, Why cache misses are so painful and a whole host of CPU performance stuff that "looks" esoteric will become plain as day.


FPGAs are for pretending you have the money to fab every hardware design iteration. Small CPUs are...not?

Honestly the FPGA in data center stuff is probably mostly hype for most people, but toying with an FPGA is super fun.


Well... if you're going for something stupidly small/underpowered (ATtiny level of power consumption) but run out of cycles to handle multiple I/Os at the same time, an FPGA allows you to cheat a little bit by doing things in parallel. For example, with 10 inputs on a standard CPU you have to spend some cycles checking each one separately. With an FPGA you can have a block for each and just get a signal propagated when something "interesting" actually happens. Then again, you could just invest in bigger batteries and a better CPU instead :)


And here's an FPGA optimized RISC-V RV32IM core for that device (Altera Cyclone IV): https://github.com/VectorBlox/orca

I haven't tried this particular one.


I'm the principal author; I can answer any questions.


This seems like an ultra-basic question (sorry). On the VectorBlox/orca github page, it mentions that the core takes ~2,000 LUT4s. Are those numbers apples-to-apples with the 22,320 LEs given for the Cyclone IV board mentioned earlier[1]?

If so, then (naively) could one pack ~10 on that single FPGA? Or does the 'packing overhead' become a big problem? Or does the design use more (say) multiply units pro-rata, so that they become the limiting factor?

[1] https://www.adafruit.com/product/451


Yes, that is more or less true. The problem you would run into is communication between cores: when you have lots of cores, you run into problems if they all want to talk to the same memories. If they could all run more or less independently, there are no real issues. The reason I list LUT4s is that newer chips have ALMs, which is definitely apples to oranges. Also, there are Cyclone IV chips with much more than 22K LUTs.


I haven't simulated this to check, but it may be the case that I/O pins become a limiting factor. The Cyclone IV has 153 I/O pins, so each copy of the design would have to use fewer than ~15 to fit 10 of them.


Well a CPU doesn't inherently use any I/O pins so that shouldn't be a problem.

You can easily add some logic to let CPUs share pins, too.


Good point sir


There is an amazing Amiga accelerator using the Altera FPGA: http://www.kipper2k.com/accel600.html


The DE0-CV is a bit more expensive, but I think it's probably better than the Nano for the FPGA 101 type experiments. It's got more in the way of switches and LEDs, and the buttons are a lot easier to get to. I have both, and I had a lot more fun with low-level learning activities on the CV than the Nano. Moving past that, I also enjoyed messing with the VGA and SD card peripherals more than the ADC and accelerometer, and also found those easier to add on after the fact.


I added a hand-made VGA "shield" to the Nano and used 3 digital I/O pins to get 3 bit color. I don't have a picture of the add-on board handy, but you can see a breakout game I made here: http://jrward.org/breakout.html


I also made a VGA shield for my Nano! I got a couple pictures at https://mobile.twitter.com/dyselon/status/648020130471899136

That project is actually kind of what convinced me to go ahead and buy the CV. It was fun, but it just felt like a lot of work making things that I could just already have on the board.


Can you tell us more about what you're doing and how? This seems like the kind of story HN would eat up, if you might like to write about it.


Which CPU did you try?


You may want to try PULPino: http://www.pulp-platform.org/


Any FPGAs available yet with an open(ish) toolchain? Including the bitstream generator/programmer?


Yes, there's a fully open stack for the Lattice iCE40s (the largest of which has 7680 LUTs):

http://www.clifford.at/yosys/ (Verilog synthesis)

https://github.com/cseed/arachne-pnr (Place and route)

http://www.clifford.at/icestorm/ (Bitstream generator and programmer)


I think some people had a full open source toolchain (or were close to it) for some of the Lattice FPGAs, but I think only the really feeble ones that have 10s of LUTs.


I've just found their online simulator (called ANGEL). It boots Linux and BusyBox:

http://riscv.org/angel-simulator/


    / # uname -n
    ucbvax
    / # cat /proc/cpuinfo
    CPU info: PUNT!
    / #
Amusing.


No RISC-V post would be complete without a link to lowRISC:

http://www.lowrisc.org/


There are many other, equally if not more interesting implementations...


lowRISC is the only project that aims to distribute ASICs AFAIK. Are there others?


The pulp platform has taped out a RISC-V based microcontroller, but I don't think they have plans to sell/really distribute any asics unfortunately.


Yeah, that is the problem with niche CPU arches; if mass quantities aren't available to buy (either lone chips or in consumer products), people aren't going to use it much. Not everyone can afford to have their own fab :)


You can always start playing around with PULPino on an FPGA :)


Pretty much everyone is looking at eventual ASIC fabrication. It just takes a lot of money to do that, so people are validating ideas first on FPGAs.


Everyone is looking at ASIC fabrication, it's distribution that we're concerned about.


URLs or it didn't happen!



A microcontroller based on RISC-V called PULPino was announced recently [1].

They actually produced some silicon recently, but unfortunately they don't have plans to sell any. It would be so much fun to play around with a fully open micro!

[1] http://www.pulp-platform.org/


You can start playing on FPGAs. There are prebuilt Xilinx images for PULPino :)


Where's the inscrutable register names? Where's the complex instruction decode? Where's the legacy modes that date back to 1960??

THIS WILL NOT STAND! WHO WOULD WANT TO USE THIS???


Seriously, they need to read Worse is Better before they get into a bad situation trying to build The Right Thing. Maybe start cranking out x86 clones to sell off to Intel later on. That's a technology that's paid off several times. Even Intel couldn't market a superior processor [1] [2]. Sets the bar high for the little guys. ;)

Note: The manual in the Wikipedia link describes the full ISA and features. It was actually a bad-ass CPU design. I'd really like to know what process node the remaining ones run at. Couldn't find it, though.

[1] https://en.wikipedia.org/wiki/Intel_i960

[2] http://www.intel.com/design/i960/



This is a paper CPU, right? You can't actually buy an IC. Even though there's a Linux port for it and four annual conferences so far.

There seem to be too many CPU options. Multiply and divide as an option belongs down at the 50 cent 8-bit CPU that runs a microwave oven, not on something that runs Linux. Optional floating point is maybe OK, although you should be able to trap if it's not implemented and do it in software. You don't want to have zillions of builds with different option combinations. That drove MIPS users nuts.

The protection levels are still undefined; "User" mode is designed, but what happens at Supervisor, Hypervisor, and Machine mode?

If this were well-defined and cheap tablets were coming out of Shenzhen using it, it would be important.


> This is a paper CPU, right?

What? No, it's an Instruction Set Architecture. It's the language the CPU speaks.

> Multiply and divide as an option belongs down at the 50 cent 8-bit CPU that runs a microwave oven, not on something that runs Linux

Irrelevant. Nobody has to know your CPU doesn't implement multiply/divide, not even the OS! You trap to machine mode and life goes on. And not every RISC-V core needs to run Linux.
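As a sketch of what that fallback could look like, here is a shift-and-add multiply of the kind a machine-mode trap handler might call (purely illustrative, not from the privileged spec):

    #include <stdint.h>
    /* Returns the low 32 bits of a * b, which is what MUL produces.
       A trap handler would decode the faulting instruction, compute the
       result with something like this, write it to rd, and resume. */
    uint32_t soft_mul(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b) {
            if (b & 1)
                result += a;   /* add the shifted multiplicand */
            a <<= 1;
            b >>= 1;
        }
        return result;
    }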

> but what happens at Supervisor, Hypervisor, and Machine mode?

You read the manual that exists, implement the privileged spec, and it'll run Linux. Just because it's not frozen doesn't mean that many RISC-V cores can't already boot a number of different operating systems.

> If this were well-defined and cheap tablets were coming out of Shenzhen using it, it would be important.

There are already shipping products with RISC-V cpus (like in cameras). Just because you're not the customer doesn't mean this work isn't very important to other people.


There are multiple open source implementations of it available, written in hardware description languages. You can program those onto an FPGA, or even make an ASIC if your budget extends to that. The lowRISC guys (http://www.lowrisc.org/) are working on an ASIC if you need a real chip.


> This is a paper CPU, right? You can't actually buy an IC.

Yes you can. You just won't know about it. The open source CPU cores tend to go into things like CMOS cameras, ISPs, video processing chips, or FPGA projects.

But RISC-V is exciting because it's the first open-source ISA that really has a good chance of becoming an open 32-bit microcontroller or even a phone/tablet CPU.

There have been silicon chips taped out as university projects. No, you can't buy them, but it means there's a pretty low barrier for a commercial company to produce one. But I think the compilers just need to mature more, especially the LLVM one (which would be useful for making IDEs, etc.).


I agree about too many CPU options. It's a pain with ARM Cortex, but I'm not sure of a way around it. I think this ISA is designed to go down to the Cortex-M0 level (which also lacks hardware divide). Being able to trap out of it and switch to software seems like a good system - do any other processors work like that?


This is all interesting. I have a question about the "Free and Open" part. Does this mean I could go and manufacture an FPGA that implements this ISA and sell a product that uses it?


Yes. You can do whatever you want, even extending or changing the ISA as you want (though if you break I-subset compatibility, the only thing you can't do is still call it a RISC-V processor).


How do these compare to the Power8 chips? I know Power8 is targeting supercomputers, government research and some large banks.


It doesn't.

How does Apple's A9X compare to a Xeon E7?

RISC-V is excellent for hobbyist boards, mobile CPUs, high-efficiency low-performance servers, as a base for DSPs, high-efficiency network equipment, etc. Everything from the DSP in your phone to a Raspberry Pi to a MacBook Air.

POWER8 is excellent for high-performance computing, supercomputers, and workstations (Talos[1] when?! :( ). Things like CAPI, Nvlink, etc—stuff that's well outside the scope of RISC-V (though I suppose you could make an ISA targeting HPC based on the RISC-V base integer ISA, but it would only tangentially resemble RISC-V by the time you're done).

Personally I'm looking forward to using both of them in the future!

1. https://raptorengineeringinc.com/TALOS/prerelease.php


You could use RISC-V in those POWER8 applications with a suitable implementation. I don't think it would match POWER, though; probably only a fraction of it. There's just so much careful optimization and design in POWER, at both the ISA and implementation level, for number crunching. Yet, an Octeon III-like implementation of RISC-V, especially with accelerators for SIMD/MIMD or an onboard GPU, could kill a lot of Intel and POWER processors in performance while getting us close to the top contenders.


The Talos stuff looks interesting but $3,700 is a bit of sticker shock. Hopefully demand and economies of scale will drive down those price points.


POWER has been overpriced for too long. From my past looks at catalogs, paying $3,700 today would be a steal for a high-performance POWER workstation. Not necessarily competitive price/performance with Intel, but certainly with older RISC workstations.


There are a million things that influence performance to a greater degree than the ISA. However, there's no reason to believe you can't make a perfectly good high-performance RISC-V processor. In my own benchmarking of ARM, x86, and RISC-V CPUs, I've found bigger differences between different GCC compiler versions than between ISAs themselves.


And RISC-V targets low power usage. The easiest example that shows the difference is big-integer computation: it isn't very easy/efficient on RISC-V because there's no "condition code register" (and thus no carry flag).
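To make that concrete, here is the usual carry-less idiom in C (a sketch; without a carry flag, each carry comes from an unsigned compare, which RISC-V handles with instructions like sltu):

    #include <stdint.h>
    /* n-word add: the carry out of each word is recovered by comparing
       the wrapped sum against one of its inputs. */
    void bignum_add(uint32_t *r, const uint32_t *a, const uint32_t *b, int n)
    {
        uint32_t carry = 0;
        for (int i = 0; i < n; i++) {
            uint32_t s = a[i] + carry;
            uint32_t c = (s < carry);   /* carry out of the first add  */
            r[i] = s + b[i];
            carry = c | (r[i] < s);     /* carry out of the second add */
        }
    }

That's a couple of extra instructions per word compared to an add-with-carry ISA, which is the cost being pointed at here.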



