Hacker News
SiFive Tapes Out First 5nm TSMC 32-bit RISC-V Chip with 7.2 Gbps HBM3 (tomshardware.com)
192 points by pabs3 on April 16, 2021 | 84 comments



I know they're separate lines and capacity is sold well in advance and all that, but this chip shortage still baffles me.

A startup can tape out a 5 nm chip, but STMicroelectronics can't make any of their 40-130 nm microcontrollers for the next year?

Also car companies are supposedly the culprit, even though their volume is only in the low tens of millions per year, and the dustup is apparently over only six months of capacity? What? I get that the auto industry is a nice reliable long-term source of revenue for chip companies, but fabs should barely be sneezing at that sort of volume.


This is an extremely low volume prototype run. You can get those scheduled on short notice. Fabs love them because they can do process optimization using them, without impacting production customers. They're ridiculously expensive per die, and you commit to accepting a much higher failure rate than normal.

ST can and is making microcontrollers. It's just that they've sold their production for a year ahead, before it's even been manufactured. Car companies fucked everyone over by flipping a large volume of orders back and forth, causing a bullwhip effect across the whole industry, and lots of knock-on effects in other industries that suddenly got told (occasionally too late) that they need to plan their inventory a year ahead because they can't get anything at short notice anymore. Car companies' vehicle production volume is tens of millions, but each vehicle has thousands to tens of thousands of ICs. The six months you mention are not the capacity period; they are the lead times involved.

I don't want to repeat the whole story but I wrote a comment about this on another thread. See https://news.ycombinator.com/item?id=26659709


> Fabs love them because they can do process optimization using them, without impacting production customers.

I didn't realize that, but it makes a lot of sense. I assumed that they acted more like the downstream manufacturers that I'm used to dealing with, that don't even want to talk to you unless they think you're going to place a huge order.


Thousands to tens of thousands per car? I think you're off by an order of magnitude.


Maybe on microcontrollers, but the chip shortage is affecting all the "popcorn chips" too.

And there are a LOT of those.

How many chips per window motor driver? Solenoid locks. ABS systems. Radar. Tire pressure sensors. Temperature sensors. LCD displays. Peril-sensitive rear view mirror. Brake lights. Keyfobs. I can go on and on.

The reason why electric cars reduce the bill of materials so much over internal combustion engines is the fact that cars are already rolling computer banks. An electric car just gets rid of those silly parts required to turn plant slime into flame.


I know a typical Mercedes has roughly a hundred individual computers, so it's not too far-fetched to think the average chip count could be 10 or higher per device on the CAN bus.


Most of those chips would be standard commodity silicon components (think op-amps and such) as opposed to purpose-built automotive micro controllers.


Uh, no. There's a ton of dedicated integrated circuits that are highly optimized for one specific task, and which contain everything you need. For example, an ABS module is going to contain something like the L9396, which does signal conditioning for wheel speed sensors, and MOSFET drivers for the solenoids. All in one little package, configurable over SPI.

And even the standard commodity silicon components are going to be AEC-whatever certified, as others have mentioned.


If there's a ton of dedicated circuits highly optimized for each specific task, then I don't know why there'd be 10+ of them per task as OP claims. :)


Are they really? I was under the impression that automotive grade components were at least certified to a wider temperature range than other typical components. JEDEC 94A annex A mentions that the operational temperature range is -40C to 150C for automotive grade components.


You are correct. Automotive parts are also often certified to higher electrical stress levels as well.

This is one of the downstream problems of the shortage: even if you produce the chips, you may not have enough test equipment to qualify and certify them in a timely fashion.


Almost everything in a car has a number of chips. Power regulation, communication buses... and in electric cars with thousands of batteries, at least one chip per battery for protection.


Electric cars have ONE battery with thousands of *cells*. I do realize that the colloquial term for "cell" is "battery" (ex: an AA cell is called a battery), but it becomes important to be precise with our words when talking about manufacturing.

Small scale Li-ion does a protection-IC per cell (ex: cell phones), mostly because cell phones are so small they only use one cell.

Larger scale Li-ion, such as Laptop batteries, may use one-IC per cell, OR one-protection IC for all 3x or 4x cells combined. As long as all the cells are soldered together, one protection IC is cheaper and still usable.

At electric-car scales, you have thousands-and-thousands of cells. You can't just manage all of them with one IC, so you build an IC per bundle. Maybe 48 cells or 100-cells per IC or so.


Indeed yes I meant cells, I'm not a native English speaker.

> At electric-car scales, you have thousands-and-thousands of cells. You can't just manage all of them with one IC, so you build an IC per bundle. Maybe 48 cells or 100-cells per IC or so.

Ah okay, I had more expected something on the order of 1 IC per 4 cells to allow individual cell health monitoring.


> Indeed yes I meant cells, I'm not a native English speaker.

You're doing fine. Native English speakers don't know the difference between cell and battery either. This is more of a precise / technical engineering distinction.

* 9V Battery (https://imgur.com/FHJdhIK), a collection of 6x cells.

* AAAA Cell (one singular chemical reaction of 1.5V)

Notice that the imgur is wrong: they call it a AAAA battery (when the proper term is a AAAA cell).

--------

"Battery" is a bunch of objects doing one task. Originally, a "battery" described cannons. Or two rooks (in chess) that work together. Or... 6x 1.5V cells working together to produce a 9V battery.


In Russian the word "battery" is most commonly used to mean a hot water radiator in an apartment.

Which is a collection of fins.

An electrical battery is called akkumulator.


I always thought the "cell" in "cellular phone" came from the network topology, not the battery format.


Oh it does come from the network region/cells. In fact in the old days when the phones were the size of an actual brick they were probably powered by a proper multi-cell battery. (Did I miss a joke?)


This is blatantly false, unless you are confusing a battery with an assembled battery pack. In EVs each battery management IC can run somewhere in the range of 4-14 cells in series per chip, and they almost universally run banks of up to 100 cells in parallel. For example, in the Tesla Model S the pack is composed of submodules of 76 cells in parallel, with 6 of those groups in series per management chip, so only one management chip per 456 cells.
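Those figures turn into quick arithmetic. A hedged sketch: the 76-in-parallel and 6-in-series numbers come from the comment above, while the total pack size is a hypothetical round number, not a claim about any real vehicle.

```python
# Back-of-envelope BMS chip count, using the figures from the comment
# above (76 cells in parallel, 6 series groups per management chip).
# The total pack size is a hypothetical round number.

cells_in_parallel = 76
series_groups_per_chip = 6
cells_per_chip = cells_in_parallel * series_groups_per_chip  # 456

total_cells = 7296          # hypothetical pack: 96 series groups of 76
chips_needed = total_cells // cells_per_chip

print(cells_per_chip)  # 456
print(chips_needed)    # 16
```

So even a pack with thousands of cells needs only a handful of management ICs, not one per cell.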


What? You think it's tens of thousands to a hundred thousand? A hundred thousand seems excessive to me.


I still don't understand how the car industry could have had this effect. How/why did all the different companies in the auto industry coordinate chip ordering/cancelling like that? Or was it a single company that they all order from? Or an unfortunate coincidence?


I'm not saying that this is necessarily a correct interpretation of events (simply because I'm not qualified to make such a pronouncement), but I think the parent post tried to explain this by referencing the "bullwhip effect" [1], which is when relatively small changes in demand in one point of the supply chain lead to large systemic changes in resource allocation because the perception of the change is magnified at each upstream supplier.

[1] https://sloanreview.mit.edu/wp-content/uploads/1997/04/633ec...
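A toy simulation makes the bullwhip dynamic concrete. This is an illustrative sketch only: the demand series and the 0.5 overreaction factor are made up, and the model is a textbook caricature, not a description of the 2020-21 chip market.

```python
# Bullwhip effect sketch: each tier sets its order to the demand it
# just saw plus an overreaction to the change in that demand, so a
# small dip at retail becomes a large swing at the fab.

def upstream_orders(demand, overreaction=0.5):
    """Orders placed one tier up: last demand plus a reaction to its change."""
    orders = [demand[0]]
    for prev, cur in zip(demand, demand[1:]):
        orders.append(max(0, cur + overreaction * (cur - prev)))
    return orders

retail = [100, 100, 90, 100, 100]   # one small 10% dip in end demand
tier = retail
for name in ("distributor", "chipmaker", "fab"):
    tier = upstream_orders(tier)
    swing = max(tier) - min(tier)
    print(f"{name}: orders={tier} swing={swing}")
```

Running it, a 10-unit swing at retail grows to a 67.5-unit swing three tiers upstream, which is exactly the amplification the bullwhip effect describes.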


They hang out at the same parties. For some reason all the car industry execs were convinced that people would buy dramatically fewer cars in 2020. Because they have a religious aversion to holding any stock, they decided to shift the risk over to their suppliers, fucking said suppliers over, as the car industry normally does when it expects demand shifts. The thing that made this particular time special, as opposed to business as usual, is that the car execs all got it wrong: people bought way more cars due to the pandemic rather than fewer, because of moving out of cities and avoiding public transit. So they fucked over their suppliers a second time by demanding all those orders back.

Now, suppose you're a supplier of some sort of motor driver or power conversion chip (PMIC) in early 2020. You run 200 wafers per month through a fab running some early 2000s process. Half your yearly revenue is a customized part for a particular auto vendor. That vendor calls you up and tells you that they will not be paying you for any parts this year, and you can figure out what to do with them. You can't afford to run your production at half the revenue, so you're screwed. You call up your fab and ask if you can get out of that contract and pay a penalty for doing so, and you reduce your fab order to 100 wafers per month, so you can at least serve your other customers. The fab is annoyed but they put out an announcement that a slot is free, and another vendor making a PMIC for computer motherboards buys it, because they can use the extra capacity and expect increased demand for computers. So far so normal. One vendor screwed, but they'll manage, one fab slightly annoyed that they had to reduce throughput a tiny bit while they find a new buyer.

Then a few months later the car manufacturer calls you again and asks for their orders back, and more on top. You tell them to fuck off, because you can no longer manufacture it this year. They tell you they will pay literally anything, because their production lines can't run without it and (for religious reasons) they have zero inventory buffers. So what do you do? You call up your fab and they say they can't help you, that slot is already gone. So you ask them to change which mask they use for the wafers you already have reserved, and instead of making your usual non-automotive products, you only make the customized chip for the automotive market. And then, because they screwed you over so badly, and you already lost lots of money and had to lay off staff due to the carmaker, you charge them 6x to 8x the price. All your other customers are now screwed, but you still come out barely ahead.

Now, of course the customer not only asked for their old orders back, but more. So you call up all the other customers of the fab you use and ask them if they're willing to trade their fab slots for money. Some do, causing a shortage of whatever they make as well. Repeat this same story for literally every chipmaker that makes anything used by a car. This was the situation in January 2021. Then, several major fabs were knocked offline (several in Texas, when the big freeze killed the air pumps keeping the cleanrooms sterile, and the water pipes in the walls of the buildings burst and contaminated other facilities, and one in Japan due to a fire), making the already bad problem worse.

So there are several mechanisms that make part availability poor here:

1. The part you want is used in cars. Car manufacturers have locked in the following year or so of production, and "any amount extra you can make in that time" for a multiple of the normal price. Either you can't get the parts at all or you'll be paying a massive premium.

2. The part you want is not used in cars, but is made by someone who makes other parts on the same process that are used in cars. Your part has been deprioritized and will not be manufactured for months. Meanwhile stock runs out and those who hold any stock massively raise prices.

3. The part you want is not used in cars, and the manufacturer doesn't supply the car industry, but uses a process used by someone who does. Car IC suppliers have bought out their fab slots, so the part will not be manufactured for months.

4. The part you want is not used in cars, and doesn't share a process with parts that are. However, it's on the BOM of a popular product that uses such parts, and the manufacturer has seen what the market looks like and is stocking up for months ahead. Distributor inventory is therefore zero and new stock gets snapped up as soon as it shows up because a single missing part means you can't produce your product.

And this is how a conference call among car industry exec buddies who had convinced each other that, for some reason, people would buy fewer cars in a pandemic managed to destroy the entire electronics market for (hopefully no more than) a year or so.


Great explanation, thank you!


I'm in the semiconductor industry.

I don't really understand your question.

Anyone can start a company and tape out a chip, even in 5nm. My previous startup did something similar. We used an intermediate company between us and TSMC that specifically works with smaller companies. They (or TSMC) will bundle together 4 to 20 chips into a common mask as a "shuttle" run. Shuttle runs are really only used to get samples of the first version of your chip. You can't really go to production with them because the mask has chips from multiple different companies, but this allows all of the companies to share the mask costs (I've heard up to $30 million for 5nm).
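The cost-sharing arithmetic is simple. A sketch using the hearsay figures from this comment (the $30M mask-set figure and the 4-20 participant range are as reported above; the exact participant count is illustrative):

```python
# Rough cost-sharing arithmetic for a multi-project ("shuttle") mask.
# The $30M mask-set figure is the hearsay number from the comment;
# the participant count is illustrative, picked from the 4-20 range.

full_mask_set_cost = 30_000_000   # USD, reported figure for 5nm
participants = 15                 # shuttles reportedly bundle ~4-20 designs

cost_per_company = full_mask_set_cost / participants
print(f"${cost_per_company:,.0f} per company")  # $2,000,000 per company
```

Which is how a startup can afford sample silicon on a leading-edge node it could never justify a full mask set for.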

What is ST Micro talking about? I assume they can produce chips but can't get the volume that they want. SiFive are probably producing about 2,000 of these chips for development and test boards. ST Micro would be buying in the hundreds of millions, if not billions, range.


>" Shuttle runs are really only used to get samples for the first version of your chip."

Is a "tape out" the same thing as a shuttle run/sample chip run?


The only difference between a shuttle run and mass production is whether your chip is repeated 100 times in different parts of the wafer (surrounded by other people's chips) or a few thousand times completely covering the wafer with copies of just your chip.

The manufacturing process is identical either way.


Interesting fact: the term tape out comes from the way chips were physically laid out back in the early days. The chip layout was manually arranged using pieces of coloured tape on a large workspace/table. Once complete, this was photographed a layer at a time, and the photographs were then optically reduced to create the mask sets for manufacture. We still use the term tape out today, but no tape is involved. I would have loved to experience designing in those days; it was so hands-on.


Fascinating. So was each mask in a mask set individually taped to the work surface one at a time in order to be photographed?


A "tape out" is the process of transforming a design into a physical die, i.e. a manufacturing run. It's when you hand over a design to a foundry to do their thing with it.


Out of curiosity - what software is being used to design chips? Is there anything within reach of a small company, or something open source?


Front-end is HDLs — (System)Verilog, VHDL, etc. Implementation and formal will be Jasper & its ilk. Backend (physical, etc.) uses fab-specific bespoke software from the majors (Cadence, NXP, MG, Synopsys, ...).

The front-end stuff could be done by one person; Verilator is a great example (although it's now "in house" to NXP). Implementation, LEC, etc. are mathematically intimidating -- they're proof engines -- but doable by a small team.

Physical requires inside knowledge of the fabs. The fabs aren't going to let you participate unless you're a major, because it costs them a lot of money, and each additional participant is another potential leak of their critical IP.

The tooling is all "vertical" and starts on the backend. If you can't do backend, you're not a player.


> The front-end stuff could be done by one person; Verilator is a great example (although it's now "in house" to NXP).

Huhwhatwho?

Verilator is in-house for NXP? When did that happen?


Verilator is still open source; some of the authors work at NXP and/or supported by NXP.


The commercial tools are indeed very expensive but the required data files can be as much of a problem. Normally you have to sign a bunch of NDAs (non disclosure agreements) to get your hands on the design rules and standard cell libraries supplied by the foundries and required to make the tools work.

One effort to organize several previously available open source tools into a practical system is OpenLane, which is based on the DARPA OpenRoad project:

https://woset-workshop.github.io/PDFs/2020/a21.pdf

Recently, Google has financed a project where a foundry has made its data files available without any NDAs:

https://github.com/google/skywater-pdk

The combination has made it possible to have completely open source chip designs.


Sounds like OSH Park for silicon...

Anyway, I'm still not sure why SiFive is doing this. Seems like a waste of money even as a prototype


SiFive is in the business of selling IP cores and back-end implementation services. The gold standard for IP core validation is "silicon proven" i.e. that it's not just a nice theoretical design on paper, but someone has actually turned it into a physical chip and tested the real life performance.

Lots of people will try to sell you their designs and services. Picking the wrong ones can waste millions of dollars and months/years of time.

The money spent on this prototype buys SiFive credibility for both aspects of their business (assuming the chip works) - "we were able to do this for ourselves, so you know we'll be able to do it for you".

So it's not a waste, it's a marketing expense, and a necessary one.


The article mentions that it is from the OpenFive division of SiFive. OpenFive used to be Open Silicon, and their business model was working with other companies to take their Verilog RTL and do all of the physical design (synthesis to logic gates, place and route of the standard cells, timing analysis, test vector generation) and then work with the foundries to deliver all of the data for manufacturing.

Since Open Silicon is now OpenFive and part of SiFive they literally have all this experience in house and don't need to depend on another company between them and TSMC.

https://en.wikipedia.org/wiki/Open-Silicon


A lot of chips are made on mature fab lines because they don't need the performance of 5nm lines or can't justify the mask costs.

No one is investing in mature fab lines because they're not leading edge; they're being run to amortize the initial investment made into them years ago. Therefore there's not much additional capacity on mature lines.

So yes, you can see 5nm chips being taped out while the 40-130nm chips are squeezed for capacity. Also, this chip is likely not running at the same crazy volumes as ST microcontrollers. It is easier for TSMC to squeeze in a few dozen to a hundred wafers for SiFive on their line.


> A lot of chips are made on mature fab lines because they don't need the performance of 5nm lines or can't justify the mask costs.

Alternatively: they're car-scale products dealing primarily with high electric currents (10s or 100s of milliamps) and/or higher voltages (5V instead of 1.3V).

Smaller chips use (and therefore output) less current than larger-geometry chips. But if your goal is to output 10mA to better drive an IGBT or other transistor anyway, then you really prefer 40nm to 130nm, because those larger sizes are just a lot better at moving those large currents around.

Bigger wires mean bigger currents.


High voltage MOSFETs and IGBTs are built on a completely different process. Size is definitely not an issue with them. It is about exotic doping to create the desired characteristics.

They're built using much larger feature sizes but on completely separate lines.


I'm not really in the industry. But I know that high-voltage MOSFETs / IGBTs need substantial amounts of current to turn on / off adequately. Under typical use, there's a dedicated chip called a "Gate Driver" that provides that current, between a microcontroller and the IGBT.

It's not that the IGBTs / MOSFETs are built on these microcontrollers. It's that the gate driver can be integrated into a microcontroller (simplifying the circuit design and reducing the number of parts you need to buy).

Under normal circumstances, a microcontroller can probably source/sink 1mA (too little to adequately turn on an IGBT). You amplify the 1mA with a gate-driver chip into 100mA, and then the amplified 100mA is used to turn on/off the IGBT.

By integrating a gate-driver into the microcontroller, you save a part.
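The current requirement follows from treating the gate as a charge bucket: switching time is roughly total gate charge divided by drive current. A hedged sketch; the 100nC figure is a made-up but datasheet-plausible gate charge, not from any specific part.

```python
# Why 1 mA isn't enough: an IGBT gate behaves roughly like a capacitor
# with some total gate charge Q, and switching time is about t = Q / I.
# The 100 nC gate charge below is illustrative, not from a datasheet.

gate_charge = 100e-9          # coulombs (illustrative)
for drive_current in (1e-3, 100e-3):
    t_switch = gate_charge / drive_current
    print(f"{drive_current*1e3:.0f} mA -> {t_switch*1e6:.1f} us per transition")
```

Slow edges matter because the transistor dissipates heavily while it lingers in its linear region, so at switching frequencies of tens of kHz the 100-microsecond transitions of the 1mA case are unusable.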


Gate drivers are discrete parts and aren't integrated into microcontrollers because they require high voltages (12/24V vs 3.3V), amongst many other reasons.

They also have to be located close to the IGBT and the designers would want it as a separate part because it gives them enormous flexibility. You aren't saving much by adding a gate driver to a micro and I can't think of any micro that has it integrated. There are simple things like motor drivers, etc. that may have a gate driver integrated but that's not a micro and those are also built on large nodes -- much larger than nodes used for microcontrollers.

FWIW: I design advanced node (20-5nm) ASICs for a living, and have designed on older nodes before. I've also worked in power electronics.


Your point is valid, but this is almost certainly a shuttle run, so it won't be even one full wafer.


You're right. Definitely a "hot" wafer for the engineering samples.


> Also car companies are supposedly the culprit, even though their volume is only in the low tens of millions per year, and the dustup is apparently over only six months of capacity? What? I get that the auto industry is a nice reliable long-term source of revenue for chip companies, but fabs should barely be sneezing at that sort of volume.

I agree. I think the blame on automakers has been blown out of proportion. It doesn't make any sense that automakers cancelled orders, then reinstated those orders again with some extra demand, and now the entire chip market is stalled.

It's most likely due to the fact that consumer demand is up everywhere. The pandemic didn't hit the economy nearly as hard as expected, and we piled a lot of stimulus on top of that. Savings rate went up a bit, but much discretionary spending was diverted away from things like dining out and toward buying consumer goods.

> STMicroelectronics can't make any of their 40-130 nm microcontrollers for the next year

They're almost certainly making huge volumes of microcontrollers, but they're all spoken for with orders from the highest bidders.

We won't have inventory sitting on shelves again until fab capacity isn't being 100% occupied by existing orders. Need some surplus before we can get parts at DigiKey.


ST fabs their own chips. If their fabs don't have the capacity, it's a huge slog to tape them out to a radically different process at another company.


IIRC, 32-bit RISC-V is only intended for deep embedded workloads, with 64-bit for general purpose compute. So a SoC w/ a single 32-bit core would seem to be a less-than-ideal fit for the cutting-edge 5nm process.


Routers / Switches have extremely weird performance characteristics, and I think that's what SiFive is targeting with this chip.

* HBM3 for the highest memory bandwidth (10Gbps switches need tons and tons of bandwidth: that's 10Gbps per direction per connection, so 8x ports is 160Gbps, and then that's multiplied several times over by every memcpy / operation your chip actually does. You need to DELIVER 160Gbps, which means your physical RAM bandwidth needs to be an order of magnitude greater than that)

* Embedded 32-bit design for low-power usage.

* All switches have small, fixed-size buffers. Memory capacity is not a problem; it's feasible to imagine useful switches and routers (even 10Gbps, 40Gbps, or 100Gbps) that only have hundreds of MBs of RAM. As such, 32-bit is sufficient and 64-bit is a waste (you'd rather halve your pointer memory requirements with 32-bit pointers than go beyond 4GB capacity).
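The arithmetic in the first bullet can be sketched directly. The copies-per-packet multiplier below is illustrative (a guess at RX write + queue handling + TX read), standing in for the "order of magnitude" headroom the bullet argues for:

```python
# Aggregate-bandwidth sketch for a small switch: line rate is per
# direction, so a fully loaded switch must move ports * rate * 2,
# and every internal copy of a packet multiplies memory traffic again.

ports = 8
line_rate_gbps = 10
directions = 2
copies_per_packet = 3   # illustrative: RX write, queue touch, TX read

wire_gbps = ports * line_rate_gbps * directions   # traffic on the wire
memory_gbps = wire_gbps * copies_per_packet       # traffic hitting RAM
print(wire_gbps, memory_gbps)  # 160 480
```

Even this modest copy factor already pushes RAM traffic well past the wire rate, which is why a switch-oriented SoC wants HBM-class bandwidth.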


While these are all good points, this really does not appear to be a competitive NPU design on any axis that matters. I don’t know what this chip is for, but a router NPU it is not, nor a switch. Maybe some soho switch or smart NIC, but those have moved on far along the performance spectrum away from the place where this would fit.


Routers need quite a bit of memory to handle IPv6.

Switches as an application of this makes sense.


IPv6 address comparison on a 32 bit design is fairly awkward. Switches won't care, but routers need to make routing decisions.


It's E76 with F set, and F set is huge compared to RV64I. And the article proposes HPC as possible application.


The core is supposed to compete with the Cortex M7. The smallest process M7 I can find is the STM32H7, which is 40nm.


I'd rave about an stm32 on a high-end process (10nm or less), whether that makes sense or not. I just love stm32.


With my industry / product management / business strategy hat on, totally agree from SiFive's perspective.

With my early-days-of-electronics hat on, the 5nm process adds energy-efficiency gains that, in conjunction with RISC-V in an embedded environment, especially in a battery-powered remote-operation use case, have me salivating at what could be achieved from a would-be customer perspective.


I think it means 32 bit floating point, not 32 bit CPU, as it mentions "other relatively simplistic applications that do not require full precision" but its a bit unclear.


The quote that stands out to me is that the core is "ideal for applications which require high performance -- but have power constraints (e.g., Augmented Reality and Virtual Reality, IoT Edge Compute, Biometric Signal Processing, and Industrial Automation)."


Yeah, this seems like an odd move to me.

For these kinds of applications the static leakage of the newer, smaller node will probably hurt rather than help.


HBM might be an interesting idea. I would love to see multiple bandwidth levels of memory become the norm: computers with a small amount of very fast memory, plus a larger pool of DDR4 or DDR5. We already have multiple levels of cache, so why not multiple levels of RAM? Operating systems and software would need to accommodate a new reality where NUMA is the norm, though. But it's good that we even have the concept of NUMA, so this is not entirely uncharted/unfamiliar territory.
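The cache analogy extends naturally: a two-tier RAM could be reasoned about with the same average-access-time formula used for caches. The latencies below are hypothetical placeholders, not measurements of HBM or DDR:

```python
# Average access cost for a two-tier memory (HBM-like fast tier in
# front of a DDR-like bulk tier), analogous to the AMAT formula for
# caches. Latency numbers are illustrative placeholders.

fast_latency_ns = 60      # hypothetical fast-tier access
slow_latency_ns = 120     # hypothetical bulk-tier access

def avg_latency(hit_rate):
    """Expected access time given the fraction served by the fast tier."""
    return hit_rate * fast_latency_ns + (1 - hit_rate) * slow_latency_ns

for hr in (0.5, 0.9, 0.99):
    print(f"hit rate {hr:.2f}: {avg_latency(hr):.1f} ns")
```

The payoff depends entirely on how well the OS or runtime keeps hot data in the fast tier, which is exactly the placement problem NUMA-aware software already wrestles with.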


You would love to see computers become harder to program?


It can be nice to have the opportunity to program something harder but faster. Counter-example: Itanium, which was too hard to write compilers for.


It is kind of ironic that compiler theory has advanced and now we can target Itanium no problem. It was a bit (well, a lot) ahead of its time.


I would try to build a new compiler (or a LLVM intermediary processing layer) that does NUMA optimizations.


HBM2 is like 2Tb/s, how is HBM3 7GB/s ?


HBM3 was expected to be something like 4Gb/s per pin, seen as double HBM2's per-pin rate, so this is almost double that again, which is good news.

The HBM2 total memory bandwidth is like 2TB/s, just a different scale.

Anyway, I could totally be using the wrong nomenclature and terminology. Feel free to discuss; these aren't strongly held assertions.


HBM3 wasn't just supposed to be about speed. It also offers a 512-bit option that doesn't require a silicon interposer. I'd guess this was added to make cheaper consumer GPU designs possible.

I suspect they're using the HBM2 spec for the narrow bus and cheaper interposer while keeping speeds lower and only using a couple stacks instead of the 16 or so HBM2 stacks required for those 2Tb/s speeds you mention. It makes sense given that their chip likely couldn't use a huge amount of bandwidth anyway.


I think that's per-pin bandwidth?
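One way to reconcile the figures in this subthread: the 7.2 Gbps headline is a per-pin data rate, while the multi-Tb/s HBM2 numbers are aggregates across a very wide bus and several stacks. A sketch with assumed parameters (1024 data pins per stack follows the usual HBM convention; the stack count is purely illustrative):

```python
# Per-pin rate vs aggregate bandwidth for an HBM-style interface.
# 1024 data pins per stack is the customary HBM bus width; the stack
# count is an illustrative assumption, not from the article.

per_pin_gbps = 7.2          # headline per-pin data rate
pins_per_stack = 1024       # customary HBM data-bus width
stacks = 4                  # illustrative

aggregate_gbps = per_pin_gbps * pins_per_stack * stacks
aggregate_GBps = aggregate_gbps / 8   # bits to bytes
print(f"{aggregate_gbps/1000:.2f} Tb/s, {aggregate_GBps:.1f} GB/s")
```

So a single-digit Gbps per-pin figure and a multi-Tb/s system figure describe the same interface at different levels of aggregation.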


How did SiFive get anywhere near 5nm TSMC?


They pay money just like any other customer of TSMC. SiFive has a lot of buzz in the industry. I wouldn't be surprised if TSMC wanted to work with them.

But there are other intermediary companies that help startups group multiple chips from multiple companies together into a single mask. This is called a "shuttle" and allows the companies to split the costs of the masks (I've heard up to $30 million for 5nm)

SiFive is probably building about 2,000 of these chips for development boards. They aren't trying to order a hundred million like Nvidia.


Thanks that's very interesting. No intention in any way to belittle SiFive - just puzzled as to how they managed to get onto this process when it's obviously so much in demand. Good for them!


Adding to this, it's in TSMC's interest to have cheaper licenses for high-quality processor cores for its customers, and they therefore have strategic reasons to prioritize small runs of RISC-V chips right now.

Pretty classic "commoditize your complement".


Can you list some of these shuttle companies? What are the most popular/reputable ones?


When I have worked at large companies we handled everything ourselves and dealt directly with the fabs.

When I worked at startups and smaller companies we have interacted with:

Global Unichip Corporation (I believe TSMC owns part of the company)

https://www.guc-asic.com/en-global

Open Silicon (they literally merged with SiFive to become OpenFive and made this chip in the article)

https://en.wikipedia.org/wiki/Open-Silicon

Uniquify

https://www.uniquify.com/

Socionext

https://www.socionext.com/en/

There are others but these are the 4 that I have dealt with.


It’s also in TSMC’s marketing interest to produce a small number of RISC-V parts with their latest process.

Plus it’s probably fun for some of the people there.


For test chips there is something called a shuttle.

Other than that, foundries are known to sponsor IP development on their processes.


"The tape out means that the documentation for the chip has been submitted for manufacturing to TSMC, which essentially means that the SoC has been successfully simulated. The silicon is expected to be obtained in Q2 2021."

Would this mean the actual chip delivery may still be delayed?


Chip manufacturing has many steps. For a new leading edge process it may take 3-6 months to get silicon back after submitting the design to a silicon foundry for manufacturing.

For a small volume ‘shuttle’ run hopefully there won’t be delays, but this is not the same as having working chips!

The foundry will do initial checks it is manufacturable at ‘tapeout’ when you submit your design, but you don’t know for sure if your chip works with intended functionality until you get it back! You are relying on lots and lots of simulations up front before your ‘tape-out’.

Sometimes issues are found and a chip requires a re-spin - basically another go with the bugs fixed. You want to do this as few times as possible (ideally right first time) due to cost and time of these iterations.


Perhaps they paid some money back when the process wasn't booked till the end of time.


What is the use case of this chip? I have the feeling it is some way away from a general purpose CPU / SOC like the Apple M1?


RTFA?

> The SoC can be used for AI and HPC applications and can be further customized by SiFive customers to meet their needs. Meanwhile, elements from this SoC can be licensed and used for other N5 designs without any significant effort.

> The SoC contains the SiFive E76 32-bit CPU core(s) for AI, microcontrollers, edge-computing, and other relatively simplistic applications that do not require full precision.


So it is a proof of concept / demo of subcomponents someone else may license? Is that a correct interpretation?


Yes.


Good work Chris Lattner and co.



