Amid chip shortages, companies bet on RISC-V (allaboutcircuits.com)
210 points by tomclancy on Aug 26, 2022 | 125 comments



I suspect most of RISC-V development is driven more by moving away from western-controlled ISAs that can be sanctioned for any reason. The point about avoiding shortages because any fab is allowed to fabricate RISC-V CPUs is just the "cherry on top".


I don’t know if sanctions are the reason, but moving away from foreign-controlled IP is certainly a thing. Chinese companies make STM32 clones (even fixing the errata in the original):

https://hackaday.com/2020/10/22/stm32-clones-the-good-the-ba...


I think it's more about a potential future use of ARM in a sanctions package. Somewhat similar to how Huawei was hit with a list of narrow prohibitions, like access to the Android store.

Having an openly implementable ISA is just one less thing to worry about


I doubt that matters considering China stole a complete copy of ARM (the company) and has already released new IP under their rebranded name.


The stolen architectures cannot be legitimately sold outside of China, due to patents held by ARM ltd. RISC-V avoids these headaches.


Indeed, but conversely, China will have access to ARM no matter what. This also means that we can sanction all we want, they locally have the fabs and IP to create ARM, RISC-V, MIPS and even x86 CPUs (and they are already doing all of that). Granted, their fabs can't do top-of-the-line lithography (yet?) but since they have already created 64-core ARM server CPUs and some custom AI silicon they can get horizontally scaled performance regardless. This is of course their focus: make sure they can make computers and related equipment domestically no matter what.


They do have access to ARM, sure. But they also have access to RISC-V, which is technically superior and free of the troubles associated with ARM.


What is technically superior about RISC-V?


Code density is a good one.

L1$ is always starved, thus it is always good to fit more code in it.


China wants to sell chips and gadgets with chips inside. Legally questionable access to ARM will not help with that. RISC-V solves the problem.


Aren't the countries that would care about the legalities and patents the same countries that would be sanctioning them? I don't know who will adopt RISC-V for high performance. Nvidia maybe? Intel-Lattice-Altera-Intel GPU. AMD-ATI-Xilinx. Nvidia wanted to be Nvidia-ARM, but as that didn't work out, maybe RISC-V? No big FPGA players left for them to pick up, though.


FWIW it most likely doesn't matter if they can't be sold outside of China. The domestic market is large enough.


Does anyone outside of China want Chinese spy chips regardless of architecture?


Why do you call them "spy" chips? Have they been found to generally have backdoors?

In an embedded context, a lot of that doesn't matter either way - your dumb coffee maker doesn't have a lot of opportunity to spy on you anyway. Smart devices are a very different story. However, Chinese embedded chips are often very cheap for the capabilities.


And all Intel and AMD processors have "spy" capabilities too with the built-in black-box Intel Management Engine and AMD Platform Security Processor. And it's only becoming more powerful and commonplace with remote attestation and Microsoft Pluton. In a few years PCs might become just like mobile phones - the bootloader locked down requiring signed images from Microsoft, etc.

Given all of that, China seems like a great option if they produce RISC-V processors.


> In a few years PCs might become just like mobile phones - the bootloader locked down requiring signed images from Microsoft, etc.

The number of Windows-exclusive programs is rapidly dwindling. In a few years, it may not be such a big deal to simply avoid that with Linux (or FreeBSD).


If they produce "spy" risc-v processors, no.

State sponsored extortion is still a thing.


Tons of hobbyists buy PINE hardware. All of their SoCs are Chinese (Rockchip, Allwinner). Hardkernel/Odroid stuff is a mix of Amlogic, Intel, and Rockchip.

Outside the hobbyist space, people do seem to be shunning Chinese SoCs AFAICT. Maybe no-name TV boxes have Rockchips, but other than that, not much.


What non-Chinese options do hobbyist boards have? NXP chips seem almost impossible to get for the hobbyist market; I'm not sure why. Broadcom for the RPis was probably only possible because the creators had worked at Broadcom.


Amlogic and NXP mostly. Some mediatek boards are popping up, although they tend to be pricier.


Arm China majority shareholders announce the company’s corporate governance issue has been resolved

https://www.arm.com/company/news/2022/04/arm-china-majority-...


Unless they actually get the company seal and remove the armed guards and gain control of the building, it's all just PR. On top of that, even then there is nothing to prevent this from happening again, and anything that already has been copied or 'exported' really isn't going back into the box of company secrets.


Yes, that resolution has occurred in favor of the CCP. ARM Ltd folded. The person who controls the corporate seals is a CCP apparatchik.


Remember this is a joint venture with 51% Chinese ownership. Meaning of course the Chinese have the final word.


...but my fiduciary duty compels me to maximize short-term profits for my shareholders </s>


This one I am really torn on. On one hand I really like that RISC-V is open. However, on the other hand I do not like the idea of it being used to avoid sanctions. The problem is an ISA that can be sanctioned is not truly open.

The two desires are definitely at odds with each other. Like the RISC-V foundation moved to Switzerland to avoid the possibility in 2019. So the foundation is definitely trying to keep things more open.

At the end of the day it's just an ISA and not a micro-architecture design or set of cell libraries and fab processes, so it's not a complete bypass of all possible sanctions. ARM, for instance, provides designs and not just an ISA.

The main problem is that a chip made in a sanctioned country with RISC-V would still have value outside the sanctioned entity, unlike a chip built on some organically developed ISA or one made without legally licensing some other ISA. So a sanctioned entity could easily make the chips, make them look like they were made elsewhere or by someone else, and still have something to sell to the wider world.


Speaking a common language is a good thing. This is no more difficult than a sanction on something already common like food crops.


GigaDevice's GD32 has a successor in the GD32V, which is the same but RISC-V based.


Yes. The (commercial) interest in RISC-V comes from China and is driven by the US-China tech war. The RISC-V foundation moved to Switzerland as a reaction to the initial round of Huawei sanctions.

You will find Chinese companies at the forefront of RISC-V development (e.g. Alibaba), and Huawei's HarmonyOS supports RISC-V.

I don't know why the article doesn't mention this aspect.


If you're manufacturing chips under sanctions, your most difficult finds are going to be the manufacturing equipment, expertise, and raw materials to produce the chips. It's not going to be an ISA - there has been a litany of "open" ISAs, and well-documented industry-standard ones you could simply make unlicensed copies of.

Secondary challenge here, going beyond the ISA, are pre-defined blocks of functionality already implemented (eg: an ethernet controller, internal CPU busses, memory controllers, etc). Even in the RISC-V world many of these are commercial and require a license.


You want to have lots of software for your computer. Like Linux, Chromium, compilers, JIT, etc. It reduces the number of options.


Most hardware is not field-programmable. You can't update a CPU's DDR3 PHY to DDR4, or switch to DDR3L if your needs change.

The single-purpose nature of ASICs and hardware blocks is what makes them fast and power-efficient.


Yeap! I've worked with analog and digital chip designers, along with some digital designers that now work on fundamental IP for an FPGA company. One thing they all had in common was "we avoid all abstractions". It's all as direct as possible. You don't use extra silicon to appease some tidy block diagram that makes the problem easier to reason about, because the silicon isn't made for humans to reason about, it's made for holes and electrons to do their thing!


Maybe. It's not like sanctioned countries tend to give a shit about copyright law, so how would sanctions prevent the use of something they already know how to make and use? I'm just not convinced this idea holds any water.


Not breaking copyright makes it easier for an eastern bloc to form around RISC-V fabrication.


Can you explain how? I'm not seeing it.


I doubt that countries like Iran, North Korea, and now Russia care much about the legality of implementing an ARM processor without a license. Or did you mean more in terms of a chip that can be sold to those countries without risk? In that case you just sanction the sale of the equipment required to make the chip, or block the manufacturer from your markets.


Western Digital was probably more excited about the no-royalties model.


Very nice to see RISC-V growing like that and attracting interest from big names such as Google and Intel. Open solutions are critical for risk management, but I wonder if desktop/server RISC-V processors are also planned.


There are server specs, but embedded was the initial priority. I don't think a lot of attention has been paid to desktop, which isn't a super interesting area in general, although it could of course piggyback on server work from a spec perspective if anyone were actually interested in giving it a go.


Without a company like Apple to put in the resources and engineering effort for something like Rosetta, there's really going to be next to no demand for a RISC-V based desktop computer outside of specialized development and maybe competing with ARM-based Chromebook-likes. Desktop systems are absolutely beholden to their platform and architecture because that's what determines which apps you can run. Any serious use of a desktop (that is not programming) basically needs to use Windows or Mac OS (think CAD, professional video editing, etc.) so you're not just convincing hardware manufacturers: you're convincing thousands of app vendors, and that's just not going to happen.


It's not the biggest subsegment of the desktop space, but there are a good number of people using Pi-level devices as a second desktop. RiscOS, Linux, NetBSD, and even Windows run on Raspberry Pi. Some of those run on several other similarly powered boards. In the open source space, plenty of apps already support AMD64, ARM32 and ARM64 and the distros distribute for them. If I can get Debian or Ubuntu on a system with even 1/10 the package repo of AMD64, it's worth considering for a cheap laptop or a small low-power desktop.

Now I know that doesn't sound like much. Don't kid yourself into thinking Apple Silicon M1 and M2 came from nowhere, though. If it wasn't for growing capability in the ARM lines in other products Apple would not have been so likely to invest in it for their new technology, Rosetta or no. Exynos Chromebooks and such led the way to ARM Macbooks the same way the IBM PC led to displacing DEC and Sun workstations, then minicomputers, then x86 servers replacing most other servers in the DC.


> Exynos Chromebooks and such led the way to ARM Macbooks the same way the IBM PC led to displacing DEC and Sun workstations

I think Chromebooks had very little to do with it. A lot of the work had already happened with the PowerPC switch. On the processor front, Apple's ARM processors aren't at all like Exynos chips that use standard ARM cores. I would say that the Apple Silicon Macs are more influenced by iPhone and iPad success than anything else, especially since iOS already runs a lot of macOS code.


Apple wouldn't have used ARM for the iPhone and iPad if the cores hadn't been proven in other similar platforms. ARM goes back a long time. My Psion palmtops have ARM cores. Many of the WinCE systems have ARM cores. The ARM processors in fact go back to 1985, with the ARM Development System for the BBC Micro and then the Archimedes in 1987.

There's a whole world of ARM processors out there. The ISA, packaging, software, and expertise around it everywhere in the world help make that ecosystem stronger. Before ARM there was Intel, before Intel there was PowerPC, and even before that there were the 68000-series Macs. And before the Mac, there were the 65816 in the IIgs and the 6502 in the Apple II. Don't be surprised if Apple is an early adopter of RISC-V for support processors. If they decide they've made them performant enough after a few years of that, don't be surprised if they use them as CPUs and stop needing to license cores and ISAs from ARM at all.

But I can promise you one thing. Apple didn't look at the 18 MHz ARM7 cores from Cirrus Logic in the Psion Series 5 and immediately decide they could make a mainstream desktop CPU out of it. The competition among companies like Samsung, Qualcomm, and Broadcom in consumer electronics has a lot to do with how ARM cores became suitable for the MacBook.


The bit you’ve missed out here is that the ARM64 ISA is very different to earlier Arm ISAs and that Apple almost certainly was deeply involved in its development and was first with a production core.

Given that, and in the absence of a clear rationale, I find it hard to see why Apple would want to incur the costs of a move to an ISA it's had no influence over - certainly not to save an immaterial licensing fee.


At the very first moment there was a company called "ARM", in 1991, Apple owned 1/3 of it, the other partners being Acorn (who invented it) and VLSI Technology (who made the chips).

Psion adopted a CPU that Apple was using in, AND A COMPANY THEY OWNED developed for, the Newton and eMate - not the other way around. The ARM710 used in the Series 5 was the same as Apple used in the eMate.


Since that time, there was a point where Apple was nearly bankrupt. Also since then, they partnered on another new processor technology and put that one in the Mac. Then they got rid of that one and went to Intel chips. Only recently did they develop an ARM core they considered superior to the desktop competition. The whole Newton/eMate line was nice, but it was a marketplace flop.

Apple plays a long game, but saying they funded ARM in the early 1990s so they could slowly grow those cores by themselves, for themselves, in 2020 and later ignores a whole lot of history about both Apple and ARM.


The long arc of history is towards performance per watt.

Apple knew that when it invested in Arm and chose Arm for Newton, for the iPod and the iPhone (note Apple has been selling Arm based products continuously since 1993 apart from 1998-2001).

It’s no accident that there is an Arm based CPU in the Mac now. Apple’s most important Mac is the MacBook and performance per watt is key. Until someone can offer an architecture that demonstrably does much better on this metric Apple will stick with Arm.


The performance per watt point is a more than fair argument. It also cuts both ways. If Apple decides ARM is still the performance per watt leader in 20 years, there's a good chance they'll still be using it. If any other processor tech reaches a better spot, they have the expertise and the semi-closed ecosystem to switch processors quicker than pretty much anyone else in most of the spaces into which they sell. It's something they've not only done multiple times, but done well and are known for doing well.


100%. Look what happened with GPUs and Imagination. Personally I think CPUs and ISAs will become less important as more functions are passed to special-purpose engines. Perhaps RISC-V extensions might be a trigger, but I suspect ARM will offer Apple whatever they want to keep them (both on the ability to add new extensions and on price - if they are paying anything at all at the moment!)


> Apple didn't look at the 18 MHz ARM7 cores from Cirrus Logic in the Psion Series 5 and immediately decide they could make a mainstream desktop CPU out of it.

It’s more likely that Psion looked at the 20 MHz ARM cores that Apple shipped in the Newton and decided they could make a Psion with that.


I would imagine that "good number of people" is mostly Linux hobbyists, and from my personal experience most people use them as tinkering or IoT devices rather than full-blown desktops due to the lack of performance. If you're mostly in the terminal that's fine, but for running complex web apps a used x86 would make more sense.

I can definitely see that hobbyist market and future Pi-like devices moving to RISC-V, but I'm less certain about mainstream use unless Windows and Mac (or maybe even Android and ChromeOS) really decide to move over.


"Good number of people" is many thousands, maybe even hundreds of thousands.

And I know of 3 families that have Pi-based desktops at home and use them as desktops. (One of them has a person who works in IT.) I don't know anybody that has "experimental desktops" that they use only to tinker with; AFAIK, when people assemble a desktop, it's because they want to use it as a desktop.


The Archimedes was way slower than today's mainstream systems, too. The more applications a processor family gets, the more attention gets paid to making it performant.


A good example was Windows NT. Over its lifetime, NT has supported Intel i860, x86, x86-64, Itanium, MIPS, Alpha, PowerPC, ARM, and ARM64. But today it only supports x86, x86-64, ARM, and ARM64. Alpha even had a Rosetta-like JIT to run x86 applications, though it was pretty slow.


True.

But I think more importantly, there will be no demand until someone makes a RISC-V CPU that can actually compete with Intel, AMD and Apple on performance.


I don't think that's necessarily a prerequisite. At least you could conceivably see demand for low-power, moderate performance Chromebook(-like) devices. For true desktop computing, yeah, I agree.


A few dollars saved from not paying ARM will guarantee adoption in low-end laptops, tablets, and smartphones.

$100 laptops, $70 phones, and $50 tablets will be a big target for RISC-V.


All of those things you mention take advantage of the economies of scale of ARM production. They exist because a few years ago the basebands they're using were top of the line and were used in upmarket devices. They can buy an old baseband/board design, attach a screen and battery, and have a cheapo device for essentially zero development cost.

If RISC-V doesn't see the development for upmarket products, it's not going to magically take over the downmarket segments. No one footing the development bill is going to be selling $50 tablets.


RV is already taking over the downmarket in embedded, and moving upmarket from there. There's no reason why they couldn't repeat this in other device classes, including mobile.


ARM could just start price-matching the various RISC-V vendors, making themselves the easier choice due to the software ecosystem


They would have to offer the core for free to the SOC designers.


The (good) RISC-V core designs that implement the spec aren't free; it's the ISA spec that is free.

There are some open-source RISC-V cores, but the paid ones make money for a reason.


Over the next five? Probably not.

But over ten or fifteen years? Very possibly. And if not porting apps directly to RISC-V, then porting them to WASM and letting browser vendors optimize WASM performance on RISC-V.


> ... next to no demand for a RISC-V based desktop computer ..

One thing that RISC-V enables is open-source hardware CPUs. There are quite a few people who're upset by stuff like the Intel Management Engine (IME), which makes them distrust their personal computer.

These folks don't really have any options that fit their criteria right now.

Some RISC-V CPU could fit in there.


One datapoint: I don't know how popular OnShape is, but it seems to have a lot of features and I'm happy running CAD in a browser for hobbyist stuff.


What's the difference between desktop and embedded? I reverse-engineered a medical device. It houses a tiny CPU and a 3" touch screen. Inside, it runs Linux with X Window, Chromium, Electron, and software written in JS. It's definitely an embedded device - it's tiny, runs off a battery, and is hand-held. But its software stack is no different from a desktop's.


Traditionally, the main difference was a full-featured MMU which allows virtual address spaces.

But these days, you have advanced 600MHz microcontrollers with simple GPUs, and full-featured CPUs which get used as an embedded platform. You can even build a Linux kernel for no-MMU platforms.

It's a fuzzy line.


Could you please name some of the microcontrollers that you are referring to?


check out NXP i.MX RT1060


For desktop you'd definitely want things like GPU and at least video decode. So that instantly makes it much more involved.


> super interesting area in general

Deterred by the size of the profits for the winners, and the losses for those who do not compete in global markets... yet I'd suggest there are few things more interesting than a personal, general-purpose computer.


The flexibility that makes RISC-V so compelling in embedded roles (what if I want 64 bit address but don't need hardware floating point?) makes it a harder target for the sort of workloads that you'd usually run on a server or desktop. If I were going to create an open high performance core to challenge x86 and ARM's A series cores I'd probably use PowerPC as a base rather than RISC-V. But I do think that RISC-V has a bright future in other segments.


RISC-V is very flexible, yes, but most 'desktop-class'/application processors are expected to implement at least RV64GC. G is short for IMAFD, so mul/div, atomics, and floating point are all in there, as well as compressed instructions (C) to reduce code size.

Other features you're likely to want are also included in the specification, so if you want to write code that uses for example the B bit manipulation extension or the V vector extension (which is scalable with vector width as well, unlike SSE/AVX) you just have to check a standardized 'CPUID' bit and can run your code, and otherwise fall back to other code.

I also believe that the spec may let operating systems hook these instructions and provide fallbacks so application developers don't have to, but I'm not too sure on the specifics of the privileged ISA of RISC-V.
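To make the 'check a bit, otherwise fall back' flow concrete, here is a minimal sketch in C, assuming a Linux system where the kernel reports single-letter extensions through ELF hwcap with bit index = letter - 'a' (how a given kernel/libc exposes extensions can differ, so treat this as illustrative rather than definitive):

    /* Hedged sketch: runtime detection of the RISC-V vector ("V") extension on
     * Linux, assuming single-letter extensions are reported via ELF hwcap with
     * bit index = (letter - 'a'). Falls back to scalar code otherwise. */
    #include <stdio.h>
    #include <sys/auxv.h>

    static int has_rvv(void)
    {
    #if defined(__riscv)
        unsigned long hwcap = getauxval(AT_HWCAP);
        return (hwcap >> ('v' - 'a')) & 1UL;   /* bit for the 'V' extension */
    #else
        return 0;                              /* not a RISC-V build */
    #endif
    }

    int main(void)
    {
        if (has_rvv())
            puts("RVV present: dispatch to vector kernels");
        else
            puts("RVV absent: fall back to scalar code");
        return 0;
    }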


What are the trade-offs of PowerPC vs RISC-V?


In addition to RISC-V being more flexible than ideal here, PowerPC is more complicated as an instruction set, which is a drawback if you're a student learning to implement your first processor or in a deeply embedded role. But given the huge amount of effort that goes into a high-performance core, that complexity gets lost in the noise, and it lets you execute tasks in fewer instructions without crazy, difficult-to-implement levels of instruction fusion.


RISC-V does not in any way depend on instruction fusion. I don't know why this meme persists, especially as no RISC-V cores on the market do instruction fusion.

High-end OoO cores in other ISAs are BREAKING DOWN complex instructions into µops. RISC-V is pre-broken-down. The one exception is that current x86 and ARM cores DO do instruction fusion, to combine a compare with a following conditional branch - which is a single instruction in RISC-V in the first place. So it is actually the other way around.

Low-end cores shouldn't be doing either fusion or breaking down into µops. They are supposed to be as simple as possible, and doing either of those is a complication costing significant silicon area and energy.

There might be a place for instruction fusion in mid-range cores. Things in the ARM Cortex A53 / A55 / A510 range. The most popular RISC-V core in that segment, the SiFive U74 (used in the HiFive Unmatched, BeagleV Starlight, VisionFive v1, VisionFive 2, Pine64 Star64 .. and probably more yet to be announced) doesn't fuse multiple instructions into one instruction, but it does pair a forward conditional branch past a single instruction with that following instruction. Both instructions still exist, they each go down one of the two execution pipelines side by side, the same as if the branch was predicted not-taken. At the final pipe stage if the branch turns out to be taken, instead of taking a mis-predicted branch penalty and flushing the pipeline, the U74 simply does not write the result of that following instruction back to the destination register.

That's the closest any currently shipping RISC-V core I know of comes to instruction fusion.

"crazy difficult to implement levels of instruction fusion" does not exist anywhere, and is not needed. It's just an idea from an academic which doesn't actually exist in the real world, at least at present.


The meme persists because the ISA has a lot of “simple” instructions that don’t actually match what real world software would like to make performant, so a bunch of RISC-V enthusiasts handwave it away as “oh the chip will just fuse all the instructions and make it efficient”.


Using RISC-V doesn't let you bypass chip shortages... Do they actually understand what they're talking about?


If you can't have the existing chips, why not use your time to think about better chips?


Why is it better than say ARM?


No, but the journalist needed to create some content. Here you go, it's written. I really can't connect RISC-V and the fact that I can't buy some digital and analog components. There is no processor in the voltage regulators, PHYs, or ADCs/DACs I use. These dumb parts do not benefit from RISC-V in any way.


Eventually, if you have pin-compatible chips from several manufacturers, they could replace chips from, let's say, Taiwan with ones from Europe, perhaps?


More likely to be the other way around, actually.

Boards are getting designed for a specific chip and you are extremely unlikely to change that - unless the original is no longer available. Re-engineering for a new chip can be a massive pain - even if the new one is pin-compatible. If you are delivering high-quality products, you are not going to switch.

With the exception of some trivial ones, western manufacturers don't really make pin-compatible chips. Creating a chip which is electronically identical is not easy at all - and don't forget you probably have to make it firmware-compatible as well. The end result is that you now have exactly the same product as your competitor, so you are now competing primarily on price and making it easier for your customers to leave you. Oh, and you open yourself up to lawsuits too. Creating unique products is the way to go for those manufacturers.

On the other hand, eastern manufacturers are more than happy to create exact clones. A lot of shitty electronics don't really care too much about things like longevity, warranty, compatibility, or even regulations. Just make sure it functions well enough to make it out of the store. So a manufacturer like GigaDevice creates the GD32F103, which is pretty much a clone of the STM32F103 by STMicroelectronics - down to firmware compatibility. Won't get used in any product whose brand you recognize, but with the ongoing chip shortage they are definitely selling like hot cakes.

But implementing an instruction set is not easy, and ARM might actually try to do something about it if you do it without their permission. With RISC-V, you can just grab any random implementation! Perhaps even an open-source one? GigaDevice has already released their first RISC-V clone: the GD32VF103. Again a clone of the STM32F103, but you now need to recompile for a different ISA.
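To make the 'only a recompile' point concrete, here is a hedged bare-metal sketch, assuming (as claimed above) that the GD32VF103 keeps the STM32F103 peripheral register map. The memory-mapped register accesses stay identical; only the compiler target (Cortex-M3 vs RV32IMAC), startup code, and linker script change. Register addresses are from the STM32F103 reference manual; treat this as an illustration, not production firmware:

    /* Hedged sketch: blink PC13 (the usual "Blue Pill" LED pin).
     * Assumes the GD32VF103 mirrors the STM32F103 peripheral register map,
     * as the comment above claims. */
    #include <stdint.h>

    #define RCC_APB2ENR (*(volatile uint32_t *)0x40021018u) /* peripheral clock enable */
    #define GPIOC_CRH   (*(volatile uint32_t *)0x40011004u) /* config for PC8..PC15    */
    #define GPIOC_ODR   (*(volatile uint32_t *)0x4001100Cu) /* output data register    */

    void blink_pc13(void)
    {
        RCC_APB2ENR |= (1u << 4);                  /* IOPCEN: enable the GPIOC clock */
        GPIOC_CRH = (GPIOC_CRH & ~(0xFu << 20))    /* PC13: 2 MHz push-pull output   */
                  | (0x2u << 20);
        for (;;) {
            GPIOC_ODR ^= (1u << 13);               /* toggle the LED pin             */
            for (volatile uint32_t i = 0; i < 500000u; i++) { /* crude busy-wait */ }
        }
    }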


RISC-V isn't, fundamentally, about pin-level compatibility between manufacturers. It's about the ISA and its design. It's closer to AMD/Intel both providing x86 and x64 chips. Software will (mostly, modulo proprietary extensions) run on both, but they are not physically interchangeable. An EE/CMPE (or, more likely, a team of them) still has to design the actual physical chips, and that's not necessarily going to be given away so freely.


Any fab is allowed to produce a RISC-V CPU. You will also have far more core designs that can work across different fabrication processes.


How many fabs are sitting around doing nothing, saying "Oh gosh, I wish we were allowed to make something!"? Less than zero, because new fabs being built already have orders and plans for what they'll be making on day one.


The limiting factor of a Fab is the process node. RISC-V doesn't magically boost the capability of large process nodes.


It’s not obvious that any RISC-V CPUs that might be able to compete with ARM/x86 will be free in the future. It would take a lot of money to design them, why would the company that does that just give it away for free?


https://github.com/MoonbaseOtago/vroom

One-man project. It works, right now, at A76 levels. The aim is M1 level, and the bottlenecks to that are known and being addressed.

There is NO WAY one person could do this with a complex ISA such as x86, ARM, or POWER.


> It works, right now, at A76 levels

That’s hard to believe. Are there any benchmarks?


Sometimes it's not about an actual shortage, but more of an imposed (political/business motivated) shortage. If RISC-V can alleviate these concerns, the alleged 'shortage' may disappear.


That’s a shortage on the reseller side not the production side.


There's so much more to a chip than its ISA. I'm not sure how the ISA changes anything. If I design a board around an STM32, for instance, even within that family I'd have to find a chip that has the exact same footprint to be able to replace one that I couldn't find. And even then, I'd likely have to reconfigure the software...

So unless there is a pin-for-pin/electrical standard for chips, the ISA is of no effect.


What I don't get is there's millions of "good enough" e-waste motherboards, RAM, CPUs etc. out there. Wouldn't recycling be better than having nothing or waiting indefinitely? Are there that many workloads that can only run on brand new hardware?


It depends.

First of all, we aren't really talking about motherboards or CPUs here. It is embedded electronics, not desktop computing. They are highly specialized application-specific electronics, which require a lot of engineering time to design, validate, and certify. It is nothing at all like the computer ecosystem, where you can just swap in a different motherboard. Boards are designed to use very specific chips, with a chip swap easily costing tens if not hundreds of thousands of dollars.

Second, the chip shortage is mostly affecting "legacy" chips - which have often been available for a decade or more. The applications they are being used in do not really require a lot of processing power, but they do need to be extremely reliable. We are talking about things like Atmel's ATmega32u4, which was initially released in 2008. Can't really do a lot, but plenty of power for some obscure automotive module.

Although recycling is technically sort-of possible, it is extremely labor-intensive. Even with the current shortage and associated price hike, it isn't really economically viable. Even worse, the resulting chips are of unknown quality: you simply don't know what happened to them! And exhaustively testing them isn't really possible either. Are you willing to buy a car with an airbag controller which contains a chip they dug out of a landfill? Newly manufactured hardware has a known quality, which means you can guarantee it works properly.

On the other hand, we are wasting a lot of opportunities on the other side of the usage cycle. Electronics can often be repaired, but we throw them away instead. Look at smartphone and laptop manufacturers, for example: often they just throw out an entire logic board when a single chip is defective. A skilled technician could replace that chip, but smartphone and laptop manufacturers are actively trying to obstruct this. It is "reduce, reuse, recycle" for a reason: recycling should be the last resort - not the first.


> Electronics can often be repaired, but we throw them away instead.

A skilled technician replacing a 2¢ part on a $10 board costs more than a new $10 board. Just disassembling that board to recycle parts off of it will cost more than the board originally cost to manufacture.

You also run into the same argument against landfill airbag controllers. A factory that produces a million boards can have very good reliability metrics. A skilled technician not only has more variable output but less accurate quality metrics unless they put a lot of extra effort into process controls.

A recycled board will cost more and be statistically less reliable than a brand new board. It would be more efficient to just mechanically separate them to extract raw materials.


Oh, absolutely! Which is why I explicitly specified "smartphone and laptop".

Replacing a $0.02 part on a $10 board doesn't make sense, but why aren't we replacing $0.02 parts on $500+ boards?


Refurbishment already happens. If you drop your phone and get a replacement iPhone, there's a decent chance it's a refurbished phone and not just a produced-for-replacement phone. Apple doesn't just toss a broken phone in a wood chipper. Same with most manufacturers. But that's the whole item and not logic boards and such.

A whole board needs to be pretty valuable to replace a 2¢ part because the technician's time is expensive. A 2¢ part that takes an hour to replace and test is $40-50 in skilled technician labor. That assumes you know exactly what 2¢ part needs to be replaced. Every technician hour adds a premium onto a refurbished board. Even a skilled technician can also screw up a repair so the rate of failed repairs also adds a premium. As does shipping and storage on the repaired units.

If you design boards to be more easily repaired by a technician, you're adding test leads and headers that cost money and take up space. In a phone that eats into your envelope budget, meaning a smaller battery, a larger device, or tighter thermals. The same is increasingly true for laptops.

Even stuff like the Framework laptop doesn't envision people replacing surface mount ICs. They design around LRUs and have people send back old parts to refurbish or recycle.

There's not really any 2¢ parts for smartphones and laptops. There's $100.02 parts including all the expenses. So the part needs to be well over $100.02 in value to the manufacturer to make it worthwhile for them to refurbish it rather than just replace it and recycle the broken unit. A consumer isn't going to spend $100 to fix a $200 phone. They're better off spending $100 on a used replacement and sending the broken device to be recycled.


Nice writeup, thanks.

> It depends.

I hardly ever give up a computer; we still have a '98 Windows laptop doing recipe duty in the kitchen. (It's getting harder to find a small 32-bit Linux distro these days, though.) Energy efficiency is another concern: one machine I took to be "recycled" (I know - maybe or maybe not) was a Mac G4, which was good as a space heater in the winter, but that's about it - and I didn't feel like moving it 1500 miles with our latest relocation.

I started with electronics many years ago, and would balk at replacing a surface mount chip, but people could learn basic electronic repair literacy for things that commonly break like cords which would help a lot. I also don't tend to buy products like smart phones which are glued together and difficult to repair.

As for RISC-V, it is hard to find even a dev board with the chip shortages (I bought a HiFive Inventor kit to experiment with as a first project):

https://www.hifiveinventor.com/


I think that's a good idea but then companies doing that wouldn't be able to sell their products as new, and we probably need a cultural shift before widespread adoption of refurbished and used products is doable.

Additionally, and not knowing much about the hardware side of things, if I put myself in the shoes of a manufacturer, it seems challenging to ship a product where the expected lifetime of some of its components is unknown. Support and warranties would be affected too.


Those are fair points. With the amount of e-waste currently in existence I'd think we would want to address those sooner rather than later, but it's easy for everyone to just kick the can down the road.


It may be more prudent for us to perfect our material extraction process to reuse these metals and plastics, as opposed to reusing the same piece of hardware. Once that is done, we could probably have e-cycling be picked up by trash collectors just like regular recycling.


I feel like this could be reasonable for boutique production, but if you're dumpster diving it may be difficult to order millions of exactly the same part this way.


You can't really slap an AT motherboard in an LED light bulb and call it a day. Fabs produce all manner of chips, not just GPUs and CPUs.

Over the last 40 years all manner of circuits composed of discrete components have been replaced with chips. Voltage regulation is a chip, battery protection is a chip, rectification is a chip.


Good point. What about repurposing FPGAs?


Barely anyone uses FPGAs. They are pretty much only in use in highly specialized enterprise-grade hardware. Think a €5000 SSL accelerator.


For many FPGAs, the cost of the additional power supply controllers will be more than the cost of a full microcontroller solution.


Not all Chips are digital logic. In fact most are not. An FPGA is reprogrammable logic. You can not replace a rectifier or regulator with an FPGA.

That's kind of like suggesting someone use a stapler (not a staple) when they need a lag bolt because "well they're both steel".



Not just the chip shortage. The rise of RISC-V coincides with a couple of other events in the industry. The first is the slowing of Moore's Law, meaning that increases in total processing power no longer come along with each new fabrication node. The second is the meteoric rise of machine learning, demanding massive increases in processing power. https://semiengineering.com/why-risc-v-is-succeeding/


Language models and image generation make fun demos, but do we have transformative use cases that'll actually require large ML compute in the future? Voice recognition and translation are the only ones that come to mind, yet they don't require that much power.


Hmm, is there any discussion of how RISC-V designs could be incorporated into a GPU or TPU that could train deep learning systems? Your link doesn't say anything about that, but it's an interesting question.


I am personally very excited about RISC-V. I like the boot process of OpenSBI/U-Boot, which can also directly boot a Linux kernel. So far I have only used the QEMU virt machine with a riscv64 CPU. OpenBSD and Debian/Ubuntu have great support. I have ordered a VisionFive board and am curious to see how well it runs on physical hardware.


Russia's new main arch, I guess.


I find the ISA completely orthogonal to the problem: I don't see how switching to a different ISA solves the lack of underlying physical semiconductor manufacturing capacity.


Access to high-purity quartz is the real bottleneck. You can have your own fabs and designs, but if America and Russia refuse to sell you quartz, you are out of luck!

https://www.persistencemarketresearch.com/market-research/hi....


High purity quartz is definitely not a bottleneck in the electronics industry.


Maybe you are right, but can you add some color and context to your commentary? Like a good article link, etc.

I am more than happy to change my mind if provided evidence.


According to your linked article its main use is silicon crucibles - which are required to make the source material for ICs and solar panels.

But solar panel prices continue to drop rapidly, and quite a few silicon-based semiconductors are still widely available at low prices.

The chip shortage is a direct result of a covid-induced demand shift. Manufacturers of cars and consumer electronics seriously mispredicted consumer demand. The resulting mass-cancellation followed by mass-ordering resulted in a demand shockwave for mature microcontrollers, which manufacturers were unable to absorb with existing stock. Due to plant shutdowns and the general multi-month production time, immediate replenishment is not possible. The resulting shortages in turn made downstream manufacturers switch from JIT to hoarding, which made the problem even worse. Microcontroller manufacturers in turn aren't able to adjust supply because fabs are really difficult, expensive, and time-consuming to build.

Nothing about this has anything to do with quartz.


Considering that quartz has a ton of limitations, MEMS and other digital oscillators are ready to step in the moment that quartz is cost prohibitive.

We're already at a breaking point for low-frequency quartz. You literally can't get it small enough for modern packages until you up the frequency. Find a 2x2mm 8 MHz part - I'll wait while you fail.

We're JUST NOW starting to get standardization on SMD oscillators in common packages, and these packages typically have digital alternatives that are pin-incompatible (an enable pin instead of a two-pin Pierce oscillator setup) but share the same footprint.

Maybe you read an article, but it is incorrect to say we're being held up by quartz availability.


Best I can do right now is the TXC AV08000301, which is 3.2x2.5mm.

I'm surprised you even need one like that. Most modern electronics use clock dividers or multipliers anyway, especially once your chip is small enough that you need a 2x2mm crystal. Or they just have an internal low-accuracy oscillator. In my experience one of the plentiful 4-pin 2520 or 3225 ones usually does the job, and the market has done a decent job standardizing their footprint.


Isn't the clock generator a discrete component? Do modern CPUs and GPUs have integrated clocks?


I do not know - but when I talk about high-purity quartz, I am talking about the raw material that is needed to manufacture silicon wafers.

https://www.waferworld.com/post/9-things-you-might-not-have-...


Ahh... I read another comment about alternative sources for clock signals and assumed the article was referring to the quartz crystals used in oscillators.



