NASA selects SiFive and makes RISC-V the go-to ecosystem for future missions (sifive.com)
792 points by georgelyon on Sept 6, 2022 | 187 comments



For context: "This contract is part of NASA’s High-Performance Space Computing project". I used to work with the HPSC leads and paid attention to it. This is one of many parallel threads for next-gen computing, including snapdragon and others. Yes, it will be rad-hard. That was a program requirement and is mentioned in the press release. As such, it fits a particular mission profile.

There is no existing compulsion for missions to use HPSC. This is a technology development program seeking to meet the requirements of missions. It's great news they converged on an architecture!

As part of their development, HPSC folks will seek mission partners for tech demos. Then, missions will voluntarily (or with some light compulsion) adopt the technology based on the growing heritage and mission need.


This is really pretty strange.

RISC-V is a respectable enough architecture, but not better than recent POWER designs, equally open. NASA has an enormous amount of experience with POWER.

I suppose it comes down to what bidders are offering to build for them.


I'm sure NASA has an enormous amount of hard-earned in-house expertise in rad-hardened electronics, hard real-time safety critical software, and related things. But how much of that knowledge is tied specifically to the POWER ISA? Not much, would be my guess.

I've been following POWER at a distance, and while I'm sure the ISA itself is good enough, I'm not really convinced about the vitality of the ecosystem. The only place where serious money is being invested is in the high-end CPUs, which IBM keeps developing enough to prevent their high-end captive customers from defecting. But at the lower end and embedded, it's crickets. Beyond some token efforts like Microwatt for FPGAs and an old supercomputer core design they open sourced, the whole OpenPOWER effort seems to be little more than a couple of guys producing the occasional PowerPoint and press release.


Compilers have had many years of work put into generating optimal instruction sequences for POWER. That is all flushed away moving to RISC-V.

Momentary "vitality of the ecosystem" just seems like a poor basis to choose the underpinnings for decades of future work. But we seem to do that every time.


> Compilers have had many years of work put into generating optimal instruction sequences for POWER. That is all flushed away moving to RISC-V.

Well, to an extent. If one looks at GCC at least (presumably LLVM as well, although I don't follow it as much), POWER-specific patches seem to amount to enabling and optimizing the latest POWER server CPUs. I can't offhand recall much, if any, work on other POWER CPUs. So all that work won't really benefit an embedded POWER core. And further, which core would that be? I haven't seen anything new in the embedded POWER market for a very long time, but maybe I'm missing something? You're certainly not going to put a POWER10 on a space probe and call it a day.

So if NASA was really wedded to POWER, would they have to foot the bill to develop a new POWER core, and maintain the compiler backend for it? That sounds a lot more expensive than picking an existing core developed by someone else, with compiler backend maintenance also done by someone else, and rad-hardening it.

> Momentary "vitality of the ecosystem" just seems like a poor basis to choose the underpinnings for decades of future work.

Well, AFAICS it's not momentary but a terminal decline. Now of course it might be that RISC-V dies away as well after the current excitement, but at least to me that looks much less likely than POWER reversing its fortunes in the embedded market.

As to why not ARM then, I'm not sure. Might have to do with such government contracts requiring working with US companies, and both Microchip and SiFive are US, whereas ARM is British/Japanese?

> But we seem to do that every time.

Sure. I do think that if IBM had done the OpenPOWER thing 15 years ago and put serious money behind it, POWER could have been a worthy competitor to ARM in the embedded market, and RISC-V might never have seen the light of day. But, they didn't, and here we are.


Most compiler profiles for space barely optimize anyway. Determinism is much more important than performance.


Depressing.


Why? Technology marches on.


The point is, it doesn't, mostly. It mostly just churns.


The world has mostly moved on to RISC-V. It is a better ISA, and it isn't encumbered.

The POWER ISA is only partially open. All you get is the old version.


Not really... they're at opposite ends of their life cycles. POWER is a dying, dead-end[1] architecture while RISC-V is just getting started. Open sourcing something already in steep decline won't typically bring it back unless there are no other options.

[1] The only people who have any incentive to invest in it going forward are those who already have significant investments in it.


There is no natural life cycle for an ISA. (I am looking at you, x86!)

It is easiest to pretend there is such a thing, because we are socially attuned for that. But it is not a technical observation, but a sociological one. And, that was my point.

We have maybe learned a few things about ISA design since POWER was cast. But we have also forgotten things. It is far from clear we are ahead. Most of what we learned feeds into stuff like vector units that have little to do with whatever they are bolted onto.

One example is the popcount instruction, omitted from the base RISC-V ISA, and from all chips when last I checked. It has been added to every architecture ever mainstreamed, always at great expense, but it seems hard for ISA designers to contemplate. There is a special extension just for it so it could be put into a standard "profile" without importing the entire B extension.
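
For a sense of what the omission costs in practice, here's a minimal sketch (assuming GCC/Clang; cpop is the popcount instruction in the ratified Zbb extension, and the -march string is just one way to enable it):

    #include <cstdint>

    // Software fallback: roughly what a core without a popcount instruction
    // has to do -- keep clearing the lowest set bit until none remain.
    int popcount_sw(uint64_t x) {
        int n = 0;
        while (x) {
            x &= x - 1;  // clears the lowest set bit
            ++n;
        }
        return n;
    }

    // The builtin lowers to a single cpop instruction when the target has Zbb
    // (e.g. -march=rv64gc_zbb) and to a longer shift/mask sequence otherwise.
    int popcount_hw(uint64_t x) {
        return static_cast<int>(__builtin_popcountll(x));
    }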


True, from a technical standpoint. I was referring to the business standpoint: other than IBM, what businesses are still making any investments in it? (i.e. no consoles using it anymore, Apple is long gone, Motorola spinoffs appear at best to be phasing it out, etc.)


Power? How about LEON and other rad-hard SPARC processors from Microchip (formerly Atmel)?

Looking at the problem long term, RISC-V makes more sense. RV has a lot of steam building behind it and we have really neat things like the PolarFire SoC and many more in the works. New tools, compilers, and software support are being added every day (hell, there is a Plan 9 RISC-V compiler already...). RV is the future from the look of things.

Power has an EoL set of chips from NXP which were former Freescale/Motorola designs, and costly server chips from IBM. Arm ate their lunch in the embedded sector and Intel in the desktop/server sector. No one is looking to build new Power chips. No one is porting to Power unless it's a Raptor Computing Talos workstation or an IBM box, essentially an expensive toy or big iron, niche stuff.



Looks like they left POPCOUNT off. Again.


The modularity of the RISC-V architecture is going to be very appealing for embedded roles like this. That flexibility can be a liability for, say, a desktop operating system, but if you've got an embedded system like a space probe where you develop the hardware and software in parallel, it means that you only pay for what you use in terms of ISA features. Need 64 bit memory addressing to handle all the RAM on your graphics card but don't need the chip to have an MMU? You can do that within the RISC-V spec but not within POWER. And with RISC-V there are ways to add your own specialized instructions for your particular use case.

If I were designing an open source processor for a laptop or cell phone I'd use the POWER ISA. But for a space probe I'd certainly go with RISC-V.


>If I were designing an open source processor for a laptop or cell phone I'd use the POWER ISA.

Sure, you can license a fast POWER from IBM today.

But if you were to design your own, why would you ever pick POWER?


Sometimes there are documents that say why NASA chose what they chose; there might be one for this.


The following article has a bit more info on motivations, although it's from a SiFive guy, not NASA directly -- https://www.theregister.com/2022/09/06/nasas_spaceflight_com.... Summary: a more vibrant ecosystem over the next 5-20 years, and vector math.



Isn't RISC-V touted as the new open architecture going forward? Much more so these days than Power.


Exactly the point: not technical merit.


And that is not wrong. (It's not necessarily right either.) The same way in software, some esoteric language or framework might be a perfect fit for a project, and yet something boring and common, or something new and exciting for which it is easier to find employees, is the better choice.

Take for example a low level programming task with the following choices: Ada and Rust (excluding other options for sake of argument), with Ada being the better fit. It may still make more sense to choose Rust because the Rust community is more vibrant and the projection for the ecosystem is also up even if it might be a worse fit for the project.

Choices are not always technical and as a technical person I might disagree with the choice but I can still respect it if it is well argued.


Is your claim that the POWER ISA is somehow better designed than RISC-V?

I honestly do not see it.


No. It is that it is not worse, and there is already a great deal of experience with it.


Technical merit rarely plays a part anymore. It is all about ideology or interest.


Radiation hardened targets seem to stay around for a long time, judging by how many RAD750s are still around, so I suspect we'll see RISC-V around in aerospace for quite some time.


I could imagine HPSC being useful for edge "compression" of sensor data. A continuously running high speed camera (or whatever they're doing at the LHC) generates terabytes of data per second, too much for direct download. The satellite has to be able to do analysis or at least process the data into a manageable stream before sending the results to earth.


Last I checked, another program feature is help from domestic customers that require rad-hard, high-performance chips. That may influence the architecture significantly. In theory, domestic customers add significant resources and speed to the program.


Does that, in practice, mean military, or civilian aerospace?


Interesting! ESA has been using a custom SPARC V8 rad hard architecture: https://en.wikipedia.org/wiki/LEON

I'm currently using a dual-core 90 MHz processor that is relatively advanced and has good performance for many applications. It has error-correcting memories (SDRAM, cache, registers) and a lot of integrated peripherals (SpaceWire, serial, Ethernet, 1553, CCSDS) that help reduce board complexity.

Next up in the pipeline is an 8-core 1 GHz monster: https://www.gaisler.com/index.php/products/components/gr765


> Interesting! ESA has been using a custom SPARC V8 rad hard architecture: https://en.wikipedia.org/wiki/LEON

While LEON isn't going away any time soon (especially given the recent release of the new LEON5), I very much get the impression that the long-term plan is to replace it with RISC-V based solutions such as NOEL-V. Nobody can throw out working systems, but if you are starting something brand new, does it really make sense to base it on an ISA which the rest of the industry has essentially abandoned? Even its originator (or successor thereto) has basically walked away from it. Aerospace stuff is expensive enough as it is already; getting stuck on a niche legacy ISA seems like a recipe to only make it even more expensive.


If you look at the page for the gr765 the parent poster linked to, you can see that the customer can choose the gr765 with either SPARC or RISC-V cores. I think you're right that the plan is to transition to RISC-V, given that the SPARC ecosystem otherwise is pretty moribund.


Interestingly enough, the LEONv3 architecture also underlies the SkyTraq Venus8 GPS/GNSS chips, which are in the Navspark arduino-compatible modules, which means there's a really, really accessible toolchain for 'em. Just add the board support URL to the IDE, and voila, you're building for LEON.

The NavSpark Mini is still $36 for a 6-pack, making it one of the cheapest uCs with a floating-point unit. (The ESP32 technically has one, but its performance is inexplicably poor.)

I'm not clear on whether SkyTraq's chips have all the error-correcting goodies that're in other LEON implementations; would those be baked in by default?


NASA has mostly been using a PowerPC.

https://en.wikipedia.org/wiki/RAD750


Are any of these boards available for regular people at a regular person kind of price? Or is it 1000s of USD for an evaluation board?


It’s super hard to get pricing info for radiation hardened chips that are currently being sold, let alone just buying one from Digi-Key. It’s pretty damn annoying if you just want the one super robust chip for a system supervisor or something else like that, and have no idea how to plan the design for a cost budget before you begin doing a lot of contact work talking to people for price quotes. It’s a weird ecosystem that feels surprisingly antiquated compared to the more modern parts of the embedded ecosystem that I’m familiar with.

The worst price I can remember offhand was $15 grand USD for a radiation hardened microcontroller; pretty sure it was an Arduino Mega family chip I could have gotten the industrial quality version of for $25 from Digi-Key… if I didn’t want it radiation hardened.


$40k for a QML-V RTG4. Something like $100-150k for the newest Virtex 6 Ultrascale. You can get an engineering model for like a quarter of the price.


Can you even buy one without running afoul of ITAR?

I'm slightly surprised you could even buy a rad-hardened microcontroller, but maybe microcontrollers are considered simple enough that they are exempt?


I’m pretty sure the reason a lot of it is behind such gatekeeping and onerous sales process crap is ITAR. Dealing with American companies is the goddamn worst for anything like this, so much so that I’ll actively avoid anyone US based. I am building a satellite, and America has lots of cool space tech… but it’s globally second class compared with any other country. Some of them care so little about arms control (they import their real weapons from countries like the USA, UK, etc., so space tech is such a tiny niche business that they don’t bother) that getting their parts through customs on arrival is more paperwork than buying them, whereas buying even basic stuff like solar panels from the USA means dealing with ITAR.

Get that from the USA and you’re basically in for a polite colonoscopy before they even tell you their prices (only slightly exaggerating).


Most of the time, it's easy to just put a microcontroller core into an FPGA but sometimes you just need some basic operations and don't have a lot of power to spare for an FPGA you won't fully utilize. The Atmel chip is perfect for this situation, TI makes one too. It's not going to be good for a 15 year GEO mission but there's still a niche to fill with these parts.


I recently saw an article where a guy got Ada/SPARK up and running on one of those Gaisler chips. Was super neat to see, kind of made me want to play with Ada.


I love Ada, but the tooling is hell. Alire is a great start for a package manager, but it's still clunky, and parallel projects like the Ada language server still don't build with it for some reason.


I am a little surprised to see this. I'd imagine NASA would choose a CPU with an already proven track record (especially for something so wide ranging; the press release says it is expected to be used for 'virtually every future space mission, from planetary exploration to lunar and Mars surface missions'). This is a new design from SiFive and there's not a huge amount of SiFive IP around yet to demonstrate how good (or otherwise) their verification is. Running into a hardware bug whilst your CPU is in space is not good.

Still, it's good SiFive have convinced NASA otherwise; they must have a pretty strong verification story behind the scenes.


The whole point of choosing SiFive over anything else is to encourage competition.

SpaceX is similar: while they are very successful, they also encourage competition in a space that really needed it, and now there are many other companies doing smaller launches that don't make business sense for SpaceX.


I agree. NASA has hit the jackpot by using fixed-price contracts from vendors which don't have a legacy track record from the Apollo days.

The cost-plus contracts have made them fat and lazy, and somewhat remind me of that Bugs Bunny cartoon where the cats don't mind the mice emptying the fridge.

NASA is taking a small risk, but the rewards can, as SpaceX has demonstrated, be HUGE.


I absolutely love that NASA is doing this AND encouraging the RISC-V ecosystem. Maybe it's impossible, I don't know, but if we can have an open source alternative for hardware with the kind of reach that Linux has in software, I think that's the best way forward.


I agree open source hardware would be great but SiFive is a closed source vendor like many others.

It's RISC-V itself that has a permissive license, allowing anyone to make an implementation open or closed source.


An open ISA is a good start. It means short of any closed-source extensions the code can move to another chip vendor. With smart modularization of the code, even parts using closed-source extensions could be potentially retargeted faster than moving to a whole new ISA. Finding a radiation hardened complete solution that's totally open source could be a tough request to fill.
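
One way to read "smart modularization" concretely: keep anything that touches a vendor-specific extension behind one small seam, so retargeting only means swapping that file. A hedged sketch (the feature macro and vendor intrinsic are hypothetical, not any real vendor's API):

    #include <cstdint>

    // Portable fallback that any RISC-V (or other) core can run.
    inline int32_t dot8_portable(const int8_t* a, const int8_t* b, int n) {
        int32_t acc = 0;
        for (int i = 0; i < n; ++i) acc += int32_t(a[i]) * int32_t(b[i]);
        return acc;
    }

    // The single seam: only this function changes when moving to a chip
    // without the closed-source extension.
    int32_t dot8(const int8_t* a, const int8_t* b, int n) {
    #if defined(VENDOR_X_DOTPROD)       // hypothetical feature macro
        return vendor_x_dot8(a, b, n);  // hypothetical vendor intrinsic
    #else
        return dot8_portable(a, b, n);
    #endif
    }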


> short of any closed-source extensions the code can move to another chip vendor

That's a huge limitation. It's unfortunate that RISC-V allows for closed implementations (IP) and also closed ISA extensions.


>It's unfortunate that RISC-V allows for closed implementations (IP) and also closed ISA extensions.

It is rather fortunate. Else, it couldn't possibly aspire to be the standard ISA for absolutely everything.


I think it's a rather pragmatic decision given corporate capitalism is the economic model under which most processor design and trade happens, at least toward the higher end. The good news is since the main ISA and several open extensions exist, any open source hardware implementation of those aren't hindered by the ISA. That's also true for SPARC, POWER, and a few others of course. But RISC-V seems to be the belle of the ball and apparently simpler to build a hardware design around.

Someone, somewhere may eventually form a coalition to pay TSMC or Samsung to generate an in-house spec for a high-performance RISC-V processor with an open-source license on the hardware design, too. Some places use RISC-V in products where the processor is another line item in the BOM and not the centerpiece of a SOC or dev board, like Western Digital using it in storage products. A group of companies and institutions could very conceivably get things where we'd wish now that the open ISA has set the stage.


> rather pragmatic decision given corporate capitalism

Supporting a system that centralizes the design of critical semiconductors in the hands of a few companies and manufacturing in the hands of TSMC is not very pragmatic. If anything it's very ideological.


Acknowledging that's the system under which something has happened does not imply, let alone assure, personal support for that system overall. The real world doesn't bow silently in deference to our hopes and dreams.


> The real world doesn't bow silently in deference to our hopes and dreams.

You are implying that socioeconomic systems are "the real world" and somehow just happen to exist independently from our "hopes and dreams".

It's the complete opposite. Unlike solar flares and ocean tides, socioeconomic systems are entirely the product of human decisions.


SiFive is a new company (with backing from large companies like Qualcomm), but the people working there are industry experts with decades of experience who know the value of verification to both designers and customers.


One of the main value adds of an open ISA + open reference implementations (Chisel/Chipyard) + access to RTL (I think?) is that LOTS of people are hungry for really good multi-core out-of-order CPU pipelines with super-stringent levels of provable correctness. I think the tooling ecosystem is on the right track in terms of providing a FOSS toolchain that supports this. It's not there yet, but your average ECE master's student could probably teach themselves Chipyard and start their own Rocket fork, which is much easier to test/verify at the basic-block level than a lot of other tooling I've used.


Note that the X280 is not OoO. It's the same proven dual-issue in-order U74 core (similar to the ARM A55) that has been out for a few years, and is just now hitting Raspberry Pi levels of price in the VisionFive 2 and Pine64 Star64. The X280 just has an added RISC-V Vector unit.


Also, because the CPU layout is generated by software, maybe it is easier to add custom constraints specific to space.


The alternative is the ancient PowerPC-based RAD series of which the most recent incarnation is now almost 2 decades old.


NASA is partitioned between the (IMHO limited) parts that actually do stuff and the golly-geewhiz-PR stuff. People who are planning next gen planetary missions are considering trusted, proven rad hard platforms. This is just PR to distract in its own small way from the SLS debacle.


> This is just PR to distract in its own small way from the SLS debacle.

You're implying a degree of coordination which organizations the size of NASA just don't have. Artemis has had two scrubbed launches, they didn't put out this announcement to distract from it. I doubt someone went over to SiFive and said, "Hey, help us distract from the Artemis I launch problems and we'll give you a nice contract."


Yes, it's a ridiculous claim. HPSC has been in the works for several years now, and is run through JPL, which is completely unaffiliated with Artemis.


I don’t think coordination has to be explicit. Seeing meh PR and thinking, “hey maybe we should step up the advanced technology press releases” is enough. As for NASA contracts, it’s almost harder to name someone who doesn’t have even a tiny one, even cranks. Also it’s not like 2 Artemis failures are in a vacuum - it’s the culmination of a decade of failure and misspent funds.


>>> This is just PR to distract in its own small way from the SLS debacle.

> Seeing meh PR and thinking, “hey maybe we should step up the advanced technology press releases” is enough. As for NASA contracts, it’s almost harder to name someone who doesn’t have even a tiny one, even cranks.

Except no one cares about the instruction set of space computers, except a small cadre of computer geeks.

A story no one cares about is a bad thing to use as a distraction.


I agree with you. I meant it as part of the flux of NASA tech press releases. It could be some CPU thing, it could be a magic space drive, it could be the official kitty litter of the moon base. ;)


You are overthinking it.


Does it stay crunchy in moon milk?


What debacle? Is it dead?


Money spent vs results over time. Results excluding Boeing & subcontractor revenue. SLS was originally sold as cost effective reuse of existing SSMEs combined with relatively simple tankage to get a cheap large booster. Against those metrics it’s a debacle, unclear if it ever will fly, especially more than once.


While I agree about SLS, and I'm also hopeful about SpaceX's Starship stack, I'd really like to see a successful test. They're working on it, but just imagine how much closer they could have been with even a fraction of this SLS boondoggle contributed.

If NASA is supposed to be a multi-state grant to the sciences, I'd much rather they focus the funding where it will deliver results that benefit the public commons. The jobs program for obsolete and finicky space tech is a disservice to the public and even to the workers whose skills are in questionably useful specialties.


Compared to Starliner and its move from Atlas V to Vulcan [0], SLS is a paragon of incremental success. Additionally, to be fair to SLS, many of the problems associated with the program have to do with the mobile launch platform, which was built for the previous Constellation program and required considerable refit for SLS. [1]

0. https://arstechnica.com/science/2022/09/nasa-will-pay-boeing...

1. https://en.wikipedia.org/wiki/Exploration_Ground_Systems#Lau...


> Against those metrics it’s a debacle, unclear if it ever will fly, especially more than once.

Not to mention a national security nightmare. Imagine a world without the COTS program, where we would most likely still be waiting for whatever version of Starliner that we finally got. We would right now be reliant on Russia to return our astronauts to earth from the ISS. Can you even imagine what we’d have to give up for their safety?


This alternate reality could yield a good movie script. Think of American astronauts trying to hijack a Russian Soyuz docking at the station to get back to earth.


The SLS was pretty much designed by Congress. Design choices were made primarily to keep jobs in plants which were previously used for Space Shuttle parts.

With its impending first launch, a lot of discussion is popping up again about it arguably being way too slow to develop, overly expensive, and not fit for the intended mission profile.


Criticism that SLS is unfit for the intended mission profile is misguided. SLS never had an intended mission profile. The mandate was always to build a rocket in these places and with these contractors and at fabulous expense, and any notion of what would be done when/if that boondoggle I mean rocket ever flew was the mere coda of an afterthought.


Yeah the engineers were really fucked in their options and it shows in their evaluation.

They were supposed to build a 130-ton-to-LEO rocket (minimum 70 tons) that should launch for the first time in 2016, and use existing contractors. That is kind of insane.

So NASA basically allowed their internal teams to come up with block-upgrade based systems, and that's where the whole SLS Block 1a, 1b, 1c comes from.

The problem with this was (and is) that they had no reason to rush to a quick Block 1, as there were really no missions for it.

Had they been given until 2020, developing a single-core modern Saturn V would have been a much better design (this is clearly evident in their internal design evaluation documents). They had already put a lot of work into the J-2X, and a modern F-1 version was attainable. My alternative would have been to use a whole bunch of SpaceX Merlin 1Ds instead, but they never considered that. This version would also have been much closer to the 130 tons in the first version.

And it could have been evolved to be reusable unlike the current SLS.

What people from the outside don't really see is the NASA internal competition. Johnson Space Center clearly wanted the Shuttle-derived version, Kennedy Space Center really wanted the Saturn V style rocket. Some at NASA HQ wanted to give it as a contract to SpaceX/ULA.


Neither the article nor the product page for the X280 says whether these chips are radiation hardened, or whether NASA is now more comfortable with software solutions to the difficulties of reliable computation in space.

If the latter, that seems (to a layperson) quite exciting, since more local processing capabilities will hopefully lead to more efficient use of the very limited radio bandwidth these missions have available at such vast distances.


Related NASA press release (I think): https://www.nasa.gov/press-release/nasa-awards-next-generati....

That seems to say it’s for a radiation-hardened design:

“In 2021, NASA solicited proposals for a trade study for an advanced radiation-hardened computing chip with the intention of selecting one vendor for development. This contract is part of NASA’s High-Performance Space Computing project”

They also say:

“The processor will enable spacecraft computers to perform calculations up to 100 times faster than today’s state-of-the-art space computers”

and

“Our current spaceflight computers were developed almost 30 years ago,”

That, for me, also points towards a radiation-hardened design. If it isn’t, 100 times faster than 30 years ago is an incredibly low hurdle to clear.

Also, it’s a $50 million firm-fixed-price contract. I have no idea whether that’s a sharp price for this, so can’t judge how much risk SiFive takes on with this.


NASA currently uses the RAD750 which is based on a PowerPC 750 that was new in Macs in 1997.

https://en.wikipedia.org/wiki/RAD750


The newer designs aren't actually rad-hardened, but fault tolerant.

It's physically impossible to produce rad-hard semiconductors at the small scale we're currently producing high-end CPUs with.

I read this in an article somewhere but don't have the link.


Seems it's not quite as dire as one might imagine; for example, the DAHLIA project is using ST's 28nm FDSOI process[1].

In any case, I found this[2] article interesting and illuminating, which goes into different aspects of radiation hardening, including how the "old = safe" isn't strictly true.

[1]: https://dahlia-h2020.eu/about-project/

[2]: https://habr.com/en/post/518366/


You are correct, since chips designed at around 5-7 nm are much more likely to be affected by radiation. However, by scaling up the chip, you can be a lot more tolerant of it. Because of this, most chips are fabbed at around 28 nm. Generally, you want a mix of both things. You can harden a chip by shielding it, but that doesn't stop everything, so you add some fault tolerance.


I don't think SiFive is directly the contract holder; it'd be Microchip who deals with NASA, having obtained a CPU core from SiFive.


>100 times faster than 30 years ago is an incredibly low hurdle to clear.

"today’s state-of-the-art space computers" is not the same computer as "Our current spaceflight computers were developed almost 30 years ago".


Per https://www.theregister.com/2022/09/06/nasas_spaceflight_com...

this was developed for the HPSC program whose goal was to develop a replacement for the RAD750, and the "100 times performance" requirement is wrt that.


SiFive X280 is an IP core, not an IC. The design itself isn't inherently radiation-hard, but there's nothing preventing it from being manufactured on a radiation-hard process.


There are certainly things you can do in the design to be radiation hardened; it's not just in the manufacturing. The LEON [1] processor, initially designed by ESA for space missions, incorporates quite a few countermeasures in its IP core.

[1] https://en.wikipedia.org/wiki/LEON


> incorporates quite a few countermeasures in its IP core

How hard would it be for SiFive to take one of their existing designs, which lacks those countermeasures, and add them?


Hard but not impossible. Maybe too expensive to make sense for SiFive?

IIRC, all internal logic will require triple modular redundancy and all types of memory will need ECC plus scrubbing.
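
For anyone wondering what "scrubbing" looks like, a minimal sketch (the memory map and ECC behaviour here are assumptions for illustration, not SiFive's or anyone's actual design): a low-priority task walks ECC-protected memory so single-bit upsets get corrected and written back before a second upset in the same word makes the error uncorrectable.

    #include <cstddef>
    #include <cstdint>

    // Each read lets the ECC hardware correct a single flipped bit on the fly;
    // writing the value back stores the corrected word so latent errors don't
    // accumulate.
    void scrub_region(volatile uint32_t* base, std::size_t words) {
        for (std::size_t i = 0; i < words; ++i) {
            uint32_t v = base[i];  // read corrects single-bit errors
            base[i] = v;           // write back the clean value
        }
    }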


And yes, having more onboard processing will lead to more pre-processing capabilities for data before it's sent/received via precious and limited DSN bandwidth. Could also allow for a low quality preview and selective-send mode of operation.


There are definitely things you can change in the design to make it more radiation-proof. ECC memory for example, logic sanity checks etc.


In general, the design has gone towards having 3 to 5 redundant non-hardened computers with ECC that compare results and, if one computer disagrees, restart it. Radiation hardening makes sense when computers cost tens of thousands and weigh tens of pounds, but now they are cheap and only weigh a few grams.
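
Roughly, the voting step looks like this (a minimal sketch with made-up types, not any real flight software): take the outputs of three redundant computers, accept the majority value, and flag the dissenter for a restart.

    #include <array>
    #include <cstdint>
    #include <optional>

    struct VoteResult {
        uint32_t value;  // agreed-upon output
        int dissenter;   // index of the disagreeing channel, or -1 if unanimous
    };

    std::optional<VoteResult> vote3(const std::array<uint32_t, 3>& out) {
        for (int i = 0; i < 3; ++i) {
            int a = (i + 1) % 3, b = (i + 2) % 3;
            if (out[a] == out[b]) {
                // Two channels agree; channel i is either fine or the odd one out.
                return VoteResult{out[a], out[i] == out[a] ? -1 : i};
            }
        }
        return std::nullopt;  // no majority at all: escalate instead of voting
    }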


SiFive is fabless. Looks like they will be collaborating with Microchip on actual production.

https://www.nasa.gov/press-release/nasa-awards-next-generati...

https://www.electronicdesign.com/technologies/embedded-revol...


I see Microchip doing nothing but ARM in this space and only SiFive folks quoted in their PR:

https://www.microchip.com/en-us/products/microcontrollers-an...


This lays it out more explicitly:

https://www.eenewseurope.com/en/microchip-to-develop-next-ge...

"NASA’s Jet Propulsion Laboratory has selected Microchip to develop the High-Performance Spaceflight Computing (HPSC) processor that will provide at least 100 times the computational capacity of current spaceflight computers for all types of future space missions, from planetary exploration to lunar and Mars surface missions.

The radiation hardened, fault tolerant processor will be based on 12 instantiations of the X280 RISC-V core from SiFive and will be used in a series of ruggedised radiation tolerant single board computers."

Microsemi (acquired by Microchip) has built RISC-V hardware in the past:

https://www.electronicdesign.com/technologies/iot/article/21...


And now I want a 12 core radiation hardened SBC. Hopefully Microchip can sell these commercially. It'd be fun to see folks do steam punk themed rad hardened "cyberdeck" using one: https://hackaday.com/2022/09/04/2022-cyberdeck-contest-the-b...


I wonder if a rad-hardened SBC would be covered by ITAR like a lot of other space stuff? If so, would probably push the price and amount of paperwork out of reach of private individuals?


Thank you, that was the missing piece.



The last Mars rover and its drone run embedded Linux, and the JWST runs JavaScript (not Node.js though!). NASA is already working with much more powerful and capable software stacks.


Where are you reading that?

The Perseverance rover uses the VxWorks real time OS from Wind River Systems and the same RAD750 PowerPC processor that NASA has been using for the last 20 years. The rover is designed to last and operate a long time.

The Ingenuity drone helicopter uses an off the shelf Qualcomm Snapdragon 801 and Linux. While NASA wants it to last a long time that was not part of its main design.

https://www.pcmag.com/news/linux-is-now-on-mars-thanks-to-na...


The helicopter uses a rad-hardened FPGA for handling the flight computation while the Snapdragon handles the less critical stuff. I'm guessing that if the Snapdragon faults out, the flight computer will bring it down slowly and go idle.


It's not hardened; they have 2 effectively synced systems that can pick up from each other, such that, if there's a bitflip, it can restart and switch over in less than the time it takes for a loop (kinda), so it can easily do this mid-air with no loss of altitude or positioning.


I remember reading there is a system for detecting crashes and restarting it.


The rover still uses the radiation hardened PowerPC (RAD750, circa 2001, $301K per CPU) but the drone uses a Snapdragon 801 SoC - it was not considered mission critical so the team could use off the shelf hardware.

https://en.wikipedia.org/wiki/RAD750


The MarCO Mars satellites also used COTS components and computers, all running Linux. They worked great AFAIK. They even had a cheap COTS breakout-board camera which snapped a couple of pictures of Mars!


They are experimental and low cost (183 million USD) Cubesats - a great way to get educational institutions involved in space exploration without the backing of a NASA budget.


> the JWST runs javascript

What?! I'd like to know more about this. Does it have GC? I just can't fathom that being the case...


See [1] (HN discussion: [2]). According to the paper [3], it looks like it does have a GC and its resource usage is limited by other means.

[1] https://www.theverge.com/2022/8/18/23206110/james-webb-space...

[2] https://news.ycombinator.com/item?id=32519918

[3] https://www.jwst.nasa.gov/resources/ISIMmanuscript.pdf#page=...


> If you’re still worried, do note that the Space Telescope Science Institute’s document mentions that the script processor itself is written in C++

So they decided on Javascript back in 2006 and it uses an interpreter/runtime written in C++. Which runtime is this?


It is Nombas ScriptEase. The author updated the homepage after >10 years due to interest from JWST.

https://brent-noorda.com/nombas/us/


Looks like it's actually C.


Wow. Thanks!


In an earlier era, NASA also sent Lisp to Mars.[1] And my understanding is that they sent it with GC.

[1] https://corecursive.com/lisp-in-space-with-ron-garret/


I can't fathom deploying anything critical WITHOUT GC.

It's far too easy with malloc/free to make errors such as: replacing a pointer with a pointer to another object and forgetting to free the original; freeing something that is still referred to somewhere else, with the other place then using it and corrupting whatever new object gets allocated in that space; or freeing something that has already been freed, again potentially corrupting a new object reusing that memory.

None of those is possible with GC.

The only problem GC is subject to is allocating more data than you have physical memory for, and that is equally as much a problem with malloc/free so GC is no worse for that error, and better for the others.
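
To make those failure modes concrete, a deliberately buggy sketch (each comment marks the error; under a tracing GC none of these can happen):

    #include <cstdlib>
    #include <cstring>

    void manual_memory_pitfalls() {
        char* a = static_cast<char*>(std::malloc(64));

        // 1. Leak: the only pointer is overwritten without freeing the old block.
        a = static_cast<char*>(std::malloc(64));

        // 2. Dangling pointer: b still refers to memory that gets freed.
        char* b = a;
        std::free(a);
        std::strcpy(b, "boom");  // use-after-free: may corrupt whatever new
                                 // object has been allocated in that space

        // 3. Double free: undefined behaviour, often silent heap corruption.
        std::free(b);
    }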


> The only problem GC is subject to is allocating more data than you have physical memory for...

On the contrary, the big issue with a GC is what happens when the actual collection happens. There's an apocryphal story about a research team that was making a table-tennis-playing robot, but every 5 minutes the robot would randomly pause; in the end they discovered it was the Java garbage collector pausing the entire thing to run. On a much less apocryphal note, and more recently, Discord rewrote one of their services from Golang to Rust specifically because of the garbage collector [1].

Yes, misusing pointers is a potential problem; but when you're talking about situations where precise timing of maneuvering thrusters or stepper motors can make or break your mission, a garbage collector is arguably a much bigger problem if you don't have a way to make it predictable.

[1] https://discord.com/blog/why-discord-is-switching-from-go-to...


Go GC is still pretty immature compared to the best Java systems.

And there were really quite good GCs developed in the 90s. GC systems with predictable pause times (if you know the load) very much exist.

So if you are not loading arbitrary code you can do a lot with a memory safe GC system.

However I do think more could have been done in that space in the last 20 years instead of having so much C code everywhere.


There are predictable garbage collectors, though. Not every GC is a stop-the-world type. A good reference-counting system with the ability to weak-reference a variable can be pretty predictable and avoid loops as just one example.
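
The standard C++ smart pointers are one familiar example of that pattern: reclamation happens deterministically when the last strong reference goes away, and a weak reference breaks the cycle that plain reference counting would otherwise leak (a minimal sketch, not flight code):

    #include <memory>

    struct Node {
        std::shared_ptr<Node> child;  // owning (strong) reference
        std::weak_ptr<Node> parent;   // non-owning: does not keep parent alive
    };

    void demo() {
        auto parent = std::make_shared<Node>();
        auto child = std::make_shared<Node>();
        parent->child = child;
        child->parent = parent;  // weak link, so no cycle of strong references

        // When both locals go out of scope, the two nodes are destroyed
        // immediately and deterministically -- no stop-the-world pause.
    }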


Its scripting engine for running commands sent from the ground is based on Javascript. So all the important stuff is done in native code, but the orders are taken in JS.


Moreover, Ingenuity uses a COTS altimeter from SparkFun, and it's still working!


COTS = Commercial Off The Shelf


Well, almost. The Perseverance rover runs VxWorks, like most if not all of its predecessors. Ingenuity (the drone/helicopter) does indeed run Linux.


Except that the more complexity you add and make available, the more things can break and go wrong.

I imagine it is an interesting balancing act.


> Neither the article nor the product page for the X280 says whether these chips are radiation hardened, or whether NASA is now more comfortable with software solutions to the difficulties of reliable computation in space.

> If the latter, that seems (to a layperson) quite exciting, since more local processing capabilities will hopefully lead to more efficient use of the very limited radio bandwidth these missions have available at such vast distances.

Is that either/or, though? I could imagine building a probe with a radiation hardened central processor (or two), plus a non-radiation-hardened "accelerator" CPU that's not considered mission critical. The "accelerator" could be tasked with things like pre-transmission data analysis/triage, and if it fails the mission could continue without that feature (like Galileo continued without its high-gain antenna).

Though the value of that might be less than it seems at first look: New Horizons had literal years of time to transmit its data back to Earth.


What is radiation hardening of chips?

Stuff on the outside to absorb? Different band gaps in the silicon?


https://en.wikipedia.org/wiki/Radiation_hardening

Larger node, different substrate, shielding, and other things.


Thanks. Is this always an energetic issue, like the bit gets flipped or is it about long term material changes?


Both but primarily the former.


I always wondered: can you not just dump your modern-ish PC in a big bucket of mineral oil to fight off the radiation? Water (and hydrocarbons) have been suggested as shield materials for cosmic radiation. As I understand it, cosmic radiation is a bigger enemy than "normal" beta/gamma radiation?


I'm ready to throw our old, expensive, and fairly slow flight computers with no drivers and terrible lead time to the curb and get on board with something new. Especially if an open ISA means that our systems will be more portable and faster to develop.


It really seems like the perfect storm. Hopefully other folks like NASA will get comfortable jumping on the RISC train now that their only option isn't licensing from ARM.


NASA's been on the RISC train for nearly 30 years. They've been using the RAD750 since the turn of the century. Before the RAD750 they used the RAD6000, a radiation hardened POWER1 chip. They also use the Mongoose rad hard MIPS chips in a number of orbiters.


I have a document on my bookshelf a few feet from here comparing the 386 to the 486 for space mission purposes. Its core conclusion was that the 486 on-die cache was not spaceworthy and that with it disabled integer math was not really faster than on the 386. The recommendation was to use the 386+387+plus an external cache, partly because compilers for the 486 all optimized for that internal cache and compiled code poorly with it disabled, even if adding external cache. Putting computers in space is not a simple thing, of course. Such documents as this point out some of the difficulties.

"Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System" Yuan-Kwai Liu NASA May, 1991 RTOP 488-51-01 ISBN 9781729173374 https://www.libreriauniversitaria.it/analysis-intel-386-and-...


That's interesting as the ISS (née Freedom) did in fact go with rad hard 386s for the MDMs (multiplexer-demultiplexer). These are the general purpose computers that connect dumb devices on the station like sensors or actuators to the station's main data bus.

As for why Freedom/ISS went with 386s for the MDMs, I don't personally know. I'd guess that of the rad hard chips various defense companies shipped, a 386 was one of the more powerful. AFAIK the likes of the hardened RCA 1802 and some other 8-bit CPUs were pretty popular in the defense space. The RCA 1802 was used on a lot of NASA's New Frontiers program probes.

The MDM design on Freedom/ISS was meant to have a standardized component that was easily replaceable and use software to define the role of the device. An MDM plugged into a pump actuator today could have its EEPROM flashed and be plugged into a pressure sensor tomorrow. A useful feature in a system humans live in.

That doesn't change the fact NASA's been RISC-y with their probes for the past three decades. The added power of the CPU (whatever the architecture) in today's probes means fewer specialized chips like were needed in older probes. That means more of the mass budget can go to science instruments and software can do the jobs that used to require dedicated chips.

For anyone interested you can get the paper from NASA here[0]. It's pretty interesting. Thanks for pointing it out.

[0] https://ntrs.nasa.gov/citations/19910016373


There are already companies building hardware and software stacks so you can concentrate on the part you care about. I think this is great!

Also excited to see RISC V gaining some acceptance in space.


> Especially if an open ISA means that our systems will be more portable and faster to develop.

So I understand that RISC-V does not immediately impose licensing fees, but how does that translate into portability and speed of development? I'd think that tools and toolchains do not especially benefit from there not being licensing fees, and do benefit from somebody paying cool kernel/compiler hackers. Tools hardware designers use as well as the designs themselves will stay closed. What am I missing?


It's great to see parts of the US government embracing RISC-V and SiFive.

I hope NASA and others using RISC-V also take the opportunity, of a bit of a fresh start, to push for more of an open hardware platform around the CPUs (chipsets, devices, etc.).


I hope that’s the intention. Silicon Valley pretty much sprouted with military funding. Unfortunately, projects with long lead times are impossible to sustain in this new economy and research scientists will get scouted before NASA gets to increase their pay.


I'd love to see an implementation in silicon carbide so we could drop it on Venus, wait for it to bounce a few times and then start exploring.

https://spectrum.ieee.org/the-radio-we-could-send-to-hell


I always wondered why NASA never sent any surface probes to Venus, until I read about the insane engineering (and lots of trial and error) that went into the Soviet Union's Venera program. Despite all that effort, the successful Venera probes only lasted minutes on the surface before failing.


The fact a lander is only going to return an hour (at best) of science data is why NASA hasn't bothered with Venus. There's a low chance of initial success and then no real opportunity for a mission extension if things actually go well. A Mars lander returns a lot more science per dollar than a Venus lander will.


The DAVINCI upcoming Venus mission (2031) has a descent stage, but surface operations are not part of the prime mission. Just taking pictures and readings on gas composition as it goes down towards the surface

https://www.nasa.gov/feature/goddard/2022/nasa-s-davinci-mis...

> “The probe will touch-down in the Alpha Regio mountains but is not required to operate once it lands, as all of the required science data will be taken before reaching the surface.” said Stephanie Getty, deputy principal investigator from Goddard. “If we survive the touchdown at about 25 miles per hour (12 meters/second), we could have up to 17-18 minutes of operations on the surface under ideal conditions.”

There will be a companion orbiter, as well, called VERITAS:

https://www.jpl.nasa.gov/missions/veritas


I was trying to highlight why NASA has largely ignored Venus since Magellan. I'm glad NASA is heading back to Venus, it's just been a gravity assist target for thirty years.


I think a larger mission with some sort of airship dropping little science packages would yield quite a trove of data without the entire mission burning up in an hour. Hopefully the success of the little Martian helicopter gives mission planners something to think about. -edited to add, something like a molten salt battery could travel to Venus completely inert and only become active as it warms, also acting as a heat sink.


StarFive announces new JH7110 SoC and SBC based on SiFive U74 CPU core[1].

You can experience SiFive's product in the real world.

[1]: https://www.kickstarter.com/projects/starfive/visionfive-2


Pine64 is also releasing their own JH7110-based SBC soon[1].

Antmicro also teased their low-density SoM board [2] last year, but I have not heard any news about that since, which is a shame because you could put a ton of those in a server rack.

[1]: https://www.cnx-software.com/2022/08/29/pine64-star64-sbc-st...

[2]: https://www.cnx-software.com/2021/04/30/antmicro-arvsom-offe...


Interestingly, a SiFive RISC-V core was licensed by Tenstorrent (Jim Keller's AI startup) for their designs.


True, but know also that Tenstorrent is working on their own superscalar OoO chip as well.


They probably need a "director" CPU with a familiar programming interface to orchestrate their AI engines?


They're apparently working on one, in-house[0]. It's apparently called Ascalon, and there's some information starting at around 8 minutes into the video.

0. https://youtu.be/KOHQQyAKY14


Shot.


Yes, and it is a RISC-V core by a team led by Jim Keller.


What is "ratification intelligence"?

From the article:

> The HPSC processor and X280 compute subsystem is expected to be useful to other government agencies in a variety of applications including industrial automation, edge computing, ratification intelligence, and aerospace applications.

I couldn't find a good answer online.


I couldn't find a formal definition quickly from my sources. However, it appears in context in a few places and it seems to use the sense of "ratify" meaning "to confirm" or "to validate". "Ratify" and "ratification" are used in places where one might expect to replace it in the prose with "verify" or "verification", as in using multiple different intelligence sources (video, photo, signals, live, etc) to build confirmatory evidence of something when multiple overlapping sources exist rather than trusting a single source.


A chip like that could be used for sensor fusion and validation, calculation validation, and a lot of other things. For example, if you have three IMUs doing your positioning, you could use this to ratify and validate that they are all working correctly or throw out an answer if one disagrees with the other. I've generally thought of using an FPGA for this sort of work, but a certified and inexpensive coprocessor would work well for this use case.
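
A hedged sketch of that idea (the median-wins policy and the tolerance threshold are illustrative assumptions, not NASA's or anyone's actual algorithm): take three IMU readings, treat the median as the ratified value, and flag any unit that strays too far from it.

    #include <algorithm>
    #include <cmath>

    struct Ratified {
        double value;  // median of the three readings
        int suspect;   // index of an outlying sensor, or -1 if all agree
    };

    Ratified ratify3(double r0, double r1, double r2, double tolerance) {
        const double readings[3] = {r0, r1, r2};
        double lo = std::min({r0, r1, r2});
        double hi = std::max({r0, r1, r2});
        double median = r0 + r1 + r2 - lo - hi;

        int suspect = -1;
        for (int i = 0; i < 3; ++i) {
            if (std::fabs(readings[i] - median) > tolerance) suspect = i;
        }
        return {median, suspect};
    }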


Ok, so that's related to consensus algorithms.

Thank you!


RISC architecture is gonna change everything.


ARM is already RISC, and it's in the majority of smartphones in the world.


It already did.


Basically all processors today are RISC.

Even the venerable behemoth x86(_64) has a front-end which translates CISC instructions into RISC-like micro-ops.

The distinction is pretty meaningless today.


Micro-ops are a microarchitecture implementation detail. They are also used by many RISC microarchitectures, and are ultimately irrelevant to ISA.

Ever since the RISC paper, every new architecture of any remaining significance today has been RISC. Today, it makes more sense than ever[0].

The only CISC that remains in large-scale use is x86, which I expect to finally be deprecated during this decade.

0. https://itnext.io/risc-vs-cisc-microprocessor-philosophy-in-...


My point was more that 'RISC vs CISC' is not a useful way to compare two processors. In the same way CISC CPUs break down instructions into a series of micro-ops, RISC CPUs (like the RISC-V) fuse instructions into macro-ops. Yes the ISA is different and yes that imposes some constraints on the design but it's not a particularly useful way of comparing two processors.

Ok CISC ISAs are out of favor today, although I suspect as some of these RISC ISAs age, they'll accrete instructions. The ARM ISA sure is a lot chonkier than it used to be. Modern ARM CPUs have a massive micro-op cache just like x86 CPUs do. I wouldn't be surprised if we wake up one day and it's half way to x86.

In my opinion ISA isn't relevant or interesting. [1]

I echo the sentiments in this article:

> In short, there’s no meaningful difference between RISC/ARM and CISC/x86 as far as performance is concerned. What matters is keeping the core fed, and fed with the right data which puts focus on cache design, branch prediction, prefetching, and a variety of cool tricks like predicting whether a load can execute before a store to an unknown address.

So to your point that:

> Today, [RISC] makes more sense than ever.

I would respond, today, it makes less difference than ever.

What actually happens under the hood of a modern performance-optimized CPU is so far removed from the ISA that the ISA is just a design curiosity.

'RISC architecture' isn't going to 'change everything' - how it's implemented: advances in branch prediction, prefetching, etc - that's going to continue to iteratively improve processors. The number of instructions is truly not a factor of note.

Does it never matter? Probably sometimes. Once in a while in some very specific applications - like pico-amp scale microcontrollers - it might? But for anything you’re thinking of it’s super irrelevant.

[1] https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...


>My point was more that 'RISC vs CISC' is not a useful way to compare two processors.

It isn't. Instead, it is a way to compare ISAs. And RISC is the way to go, because we've known for some 40+ years that CISC is bad.

>In my opinion ISA isn't relevant or interesting.

Except when it is.

One example: x86 is too complex to be reasoned with, so it's not an option where high assurance (and thus formal proofs) is a necessity.

>What matters is keeping the core fed, and fed with the right data which puts focus on cache design, branch prediction, prefetching, and a variety of cool tricks...

Another example: ARMv8 and v9 have horrendous code density. As a result, L1 needs to be larger to fit the same amount of code, which means lower frequency, larger area, and higher power usage. Similarly, a microcontroller's ROM would have to be larger to fit the same program, which is also bad.

>RISC CPUs (like the RISC-V) fuse instructions into macro-ops.

Not really. Fusion is mostly academic, rather than an industry standard. E.g. no RISC-V processor in the market does fusion[0].

>I would respond, today, it (RISC) makes less difference than ever.

For most applications, end users don't know or care what ISA is in there.

But, for anyone actually designing systems, it does matter. Complexity breeds bugs. Most bugs are security bugs. x86's (the one CISC that still sees chips fabbed, large scale) complexity is insane. And security thus impossible; a losing proposition.

In the present hyper-networked world, this is unacceptable. In IoT, using x86 should be criminal negligence, and we will no doubt see it actually recognized as such in the courts at some point in the not-so-distant future.

x86 had a good run. It's already well past the time to move on, leave it in the past where it belongs.

0. https://news.ycombinator.com/item?id=32614034


These are all great opinions that are countered soundly by the article I linked. Do you have articles you suggest I read?

> And RISC is the way to go, because we've known for some 40+ years that CISC is bad.

Quantify 'bad'. Again these are all opinions.

Nobody will argue with you x86 is terrible. What I'm saying, backed by the article I linked, is that the fact the x86 ISA is terrible really doesn't hold it back. And once you start optimizing a RISC architecture, over time, for performance, it quickly approaches the same thing.

> Not really. Fusion is mostly academic, rather than an industry standard. E.g. no RISC-V processor in the market does fusion[0].

It doesn't depend on fusion but since 2016 it's pretty clear it'll be an optimization. A big one! Which means that complexity is coming whether or not any market cores implement it today or not, which is part of the argument I'm making haha. Once you take the path of these optimizations, the cores start to look pretty familiar. Read the reply to the comment you linked.

Nobody is going to leave performance on the table to satisfy some niche opinions on complexity being bad.

> But, for anyone actually designing systems, it does matter.

What is "actually designing systems" today? There's complexity in everything (especially anything performant) and frankly, that complexity is abstracted effectively by compilers and operating systems.

> In the present hyper-networked world, this is unacceptable. In IoT, using x86 should be criminal negligence, and we will no doubt see it actually recognized as such in the courts at some point in the not-so-distant future.

Citation needed?

> x86 had a good run. It's already well past the time to move on, leave it in the past where it belongs.

Fine, but not relevant to CISC vs RISC.


Glad they are not sending entire M1 MacBooks into space for lack of individual M1 silicon modules.


Write once assembly with a simple macro-assembler, run "everywhere"...


Guess it's time to learn RISC-V. Any good courses?


If you already understand assembly-language programming for some other ISA then you'll have no trouble following straight from the RISC-V reference manual:

https://github.com/riscv/riscv-isa-manual/releases/download/...

Plus recently ratified extensions:

https://wiki.riscv.org/display/HOME/Recently+Ratified+Extens...
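
If you just want to see real RISC-V assembly quickly, one low-friction route (assuming you have a RISC-V cross compiler installed; the exact target triple depends on how your toolchain was built) is to compile a trivial function and read the .s output alongside the manual:

    // add.cpp -- compile with something like:
    //   riscv64-unknown-elf-g++ -O2 -S add.cpp -o add.s
    // A function this small turns into only a couple of instructions, which
    // makes it easy to cross-reference against the ISA manual.
    long add(long a, long b) {
        return a + b;
    }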


Unless you're into compilers and whatnot, it shouldn't be necessary?


You need it for OS or RTOS design


I wonder if NASA would be interested in my 1980 Timex Sinclair 1000?


Why, is there an RFP for doorstops?


RISC-V wasn't that bad. Lol. /s


I wonder if NASA has a clue that RISC-V's ISA is open-source but SiFive's designs certainly aren't.


Yes, it's very likely the people involved in this didn't do 30 seconds worth of googling "what is RISC-V." It's so good they could wait around for our unique insights here.

I swear, this place's appetite for empty lowbrow armchair dismissal is very disappointing.


You can't launch an ISA into space and expect it to do anything. At some point, hardware has to be designed and built based on that ISA.


Nonsense; I printed out a copy of the AGC instruction set reference, stuffed it in one of my model rockets, and it turned into a Saturn V mid-flight.


Then what advantage does it have over ARM or x64? The ISA being open doesn't seem to make any difference at all.


RISC-V is massively simpler (and smaller silicon and lower energy use) for the same performance. Oh, and significantly smaller code size too, so everything from ROMs to RAM (if code is loaded to RAM) to icache can be sized smaller.

The ISA being open source means that if SiFive goes out of business or changes its product lineup, or starts to make buggy stuff, or charge too much ... no matter what happens, the customer is free to find another vendor to make software-compatible chips.

As we've found out with ARM this week, even if one company with ARM's highest and most expensive form of license, the Architecture License Agreement (ALA), makes a CPU design they are not allowed to transfer or sell that design to another company that also has an ALA without ARM's explicit permission. Or so ARM claims. Let's see what the courts decide about that.


>>RISC-V is massively simpler (and smaller silicon and lower energy use) for the same performance.

Per my understanding RISC-V has nothing even close performance-wise to Arm's high-end stuff. So isn't it apples to oranges?


Yes, you can look at the lower-end cores from both ISAs and compare. Higher-end RISC-V will come eventually though; it'll take time, just like it did for ARM.


There's e.g. the SiFive P650[0], from 2021. SiFive does claim it outperforms the Cortex-A77.

Cortex-A77 isn't ARM's fastest anymore, but it was in 2019. Meaning SiFive was, in 2021, not even 2 years behind. If tracing back to prior generations, you'll see SiFive has been quickly closing the gap.

To the point that it wouldn't surprise me if at the next iteration from both companies, SiFive and ARM highest performance cores were on par.

What's even more amazing is that SiFive's cores are consistently smaller in area and lower power than the ARM cores they have comparable performance to.

[0] https://www.sifive.com/cores/performance-p650

[1] https://www.phoronix.com/image-viewer.php?id=2021&image=sifi...


I think your question makes a lot of sense in the general case, but in case of NASA it very well might be that they scratched their heads, came up with requirements and found that they don't care about there being an ecosystem of any kind due to special nature of their requirements.


The advantage is not NASA's, but SiFive's, or any other company that wants to make their own chip: they don't have to pay a license. I guess it is NASA's advantage too, because their software shouldn't be tied up with any limited group of license holders.

An open ISA is huge. It may not mean open hardware on its own, but it sure as hell opens the door to open hardware in the future. I think there is a Chinese CPU in development that is claiming to aim to be the Linux of CPUs. Can't remember the name, but they were at one of the RISC-V conferences recently.


RISC-V is NASA bad?

“Launch aborted, whenever we access the pressurization control we’re getting an ‘ISA not implemented in this version of the core’ error from the controller that arrived overnight. They must have received the wrong version of the chip and didn’t catch it in the test rig we gave them 6 years ago.” #roflmao



