Andy Grove gave a keynote at Hot Chips, I think around 1999. Back then, almost everyone had a fab. Arm wasn't a big thing, the iPhone didn't exist, smartphones generally didn't exist, and many companies had fabs.
He basically outlined how each process shrink would allow more transistors, but also get more expensive. His conclusion was that fewer fabs would use the leading process in each generation and that costs would almost double for each generation.
It wasn't a popular opinion at the time. Generally it was thought that if you were serious about making CPUs, you would have your own fab.
For those curious about the origin, the earliest source I have on hand is https://www.gwern.net/docs/cs/2003-ross.pdf "5 Commandments [technology laws and rules of thumb]", Ross 2003, which attributes it to Moore in the 1990s who attributes it to Arthur Rock at an unspecified time:
> Sometimes Called Moore’s Second Law, because Moore first spoke of it publicly in the mid-1990s, we are calling it Rock’s Law because Moore himself attributes it to Arthur Rock, an early investor in Intel, who noted that the cost of semiconductor tools doubles every four years. By this logic, chip fabrication plants, or fabs, were supposed to cost $5 billion each by the late 1990s and $10 billion by now.
(I shamelessly call it Moore's second law to arrogate the prestige of the first law into its invocation, and because it's hard for me to remember 'Rock's law' - who's that? Plus it puts it in good company by making it an instance of Stigler's Law, which was not first described by Stigler and thus renders Stigler's law autological.)
The difference between tool price doubling every time density doubles, and tool price doubling every time density quadruples, is pretty vast. Even though those two predictions resemble each other, I don't think it's fair to call them the same thing.
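A quick back-of-the-envelope sketch (in Python, with an arbitrary $1B starting cost purely for illustration) shows how far apart those two readings drift after a few generations:

```python
# Compare two readings of the "second law": tool cost doubling every
# time density doubles, vs. doubling every time density quadruples.
# The $1B starting cost is an invented illustrative figure.

def tool_cost(generations, doublings_per_cost_double, start=1.0):
    """Cost (in $B) after `generations` density doublings, if cost
    doubles once per `doublings_per_cost_double` density doublings."""
    return start * 2 ** (generations / doublings_per_cost_double)

# After 8 density doublings (roughly 16 years at ~2 years/doubling):
fast = tool_cost(8, 1)  # cost doubles with every density doubling
slow = tool_cost(8, 2)  # cost doubles with every density quadrupling

print(fast)  # 256.0 -> $256B
print(slow)  # 16.0  -> $16B
```

Same starting point, same "doubling" language, yet a 16x gap after eight density doublings, which is why conflating the two predictions is misleading.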
"I predict that within one-hundred years computers will be twice as powerful, ten-thousand times larger and so expensive that only the five richest kings of Europe will own them."
I'm sure that's true, but I think this decision was ultimately made based on GloFo losing AMD as a big customer. GloFo has relied on AMD's business for too many years.
Intel, Samsung: they design chips as well as fabricate them. Design starts from a very high-level specification and goes down to individual transistors.
Fabs such as TSMC, Global Foundries etc.: They manufacture chips in addition to specifying the rules to be followed when laying out the logical design (circuits) on silicon. Rules differ by the given node (7nm, 10nm, 14nm etc.) and specify things such as the width of a gate, thickness of a metal wire on the chip, spacing and tolerances etc. The actual number just indicates technology advancement. 7nm means the manufacturing process is more advanced than the 10nm process and has little to do with physical reality. Hence Intel's 10nm process might not be the same as TSMC's 10nm. While Intel can in principle adhere to TSMC's 7nm process, they already have their own R&D and develop their process in-house. Switching to a different foundry's node would involve redoing a number of steps before the chip can be manufactured. Besides, fabs guard their process jealously and are very protective of their IP.
Design companies such as AMD, Qualcomm, Broadcom etc. just design the circuits and adhere to the specification of their chosen fab and node to lay out the design on silicon.
The first set is doing chip design and layout (think of this like the circuit diagram for the CPU). The second set is doing the actual manufacturing after being delivered a complete layout.
Intel can buy a "7nm" processor from TSMC, and in fact, for products that aren't their processors, they do occasionally get other companies to manufacture for them. They don't want to release the complete design of their processors to external suppliers for IP reasons, and they have been leveraging their manufacturing advantage to sell more CPUs. Intel could build AMD chips in their fabs, but then Intel CPUs would have to compete with AMD on architecture alone and lose any performance edge attributable to the manufacturing.
Technically this is the gate length of the transistor (https://nptel.ac.in/courses/103106075/Courses/Lecture21.html) but it's becoming mostly a marketing term to refer to the generation of manufacturing tech since the transistor density depends on enough other factors now that gate length alone isn't a good measure. Intel's 14nm ~= TSMC/GF 10nm and Intel's 10nm ~= TSMC/GF 7nm.
It's just incredible how complex and advanced the hardware got, but at the same time, how crappy and inefficient the software on top of it is. I know it's simple economics and the chip foundry has an economy of scale that the Electron developer lacks. But it still feels like we're all eating garbage out of golden plates.
By the end of the next decade we will probably reach the end of what is physically possible in device fabrication, so competition will have to move to design and software efficiency.
It's a good example of the same underlying economic incentives. An EDA title is used by a few tens of thousands of professionals, at best. Large investments into making it more user-friendly and stable are not really worth it in the grand scheme of things. It needs to be just the minimum viable product that can get the chip out the door; it will be complex and feature-packed rather than usable, and the engineers will learn to navigate around its idiosyncrasies and bugs just like they solve other technical problems. It's what engineers do.
The chip itself, though, is a whole other matter, especially the process, which works across multiple designs and becomes an asset for the company. Any nanosecond shaved here improves the foundry's capacity to compete and helps a billion people complete a task 1% faster or have 1% more memory. The customers have no reason to work around a lower-performing product; they will just buy from the competition.
So you get this strange combo of supernatural hardware assembled with atomic precision by garbage software written in Perl by an outsourcing guy with 6 months total programming experience.
There is a little more complexity that bears on why companies like Intel would have their own fab.
In practice, a processor design isn’t just a circuit diagram (or a netlist: https://en.m.wikipedia.org/wiki/Netlist). A circuit diagram is an idealized model that hardware does not totally reflect. For example, wires have capacitance and transistors leak current. In fact the smaller the process node, the less the real circuits reflect idealized circuits. High-performance designs thus are tuned for the specific properties of the process.
Historically, Intel has not only led in process technology, but has reaped benefits from having the process engineers next door to the processor designers. Intel CPUs have been tuned for the exact process that will be used to make them, including by laying out transistors by hand to maximize performance.
Custom design happens with foundries as well, of course, but there have historically been synergies from putting processor design and process design under the same roof.
I have heard the argument (rumor?) that Intel has suffered from the closeness of the chip designers to the process engineers: for a long time, if the process wasn't robust, the designers could work around it, but that made it hard for non-in-house designers to develop. The need for robustness has forced TSMC's process engineers to be more disciplined, and that discipline scales well downward.
Well for decades Intel's competitive advantage was the fabs. They executed well and it was hard to compete. There's been dozens of projects to compete with Intel that failed to deliver, largely because they couldn't compete with Intel's fabs. I always felt bad for AMD, they often ended up competing with Intel using fabs that were a generation behind.
However the latest stumble does show the weakness of that approach. Nothing is stopping Intel from using a 3rd party fab, however every chip they farm out is one less to amortize their fab related R&D over.
Seems pretty much impossible for anyone but AMD and Intel to make an x86 without causing IP related issues, at least in the USA. AMD did recently try to subvert this with a complex setup of partnerships for a CPU that will only be sold inside of China.
Not sure what x86 has to offer Apple though. Their control of their platform would allow them to switch architectures, just like they have twice before. It's widely rumored that Apple will migrate their x86 products to Arm based products. They have invested pretty heavily in Arm.
It's not like AMD and Intel agree on how to do performance monitoring and virtualization. What's a third implementation in the grand scheme of things?
For the roughly 150M new iPhone units sold every year, Apple needs just one SoC. One. And it could be reused next year.
For the roughly 25M Macs sold every year, Apple needs several designs with different TDPs: a fanless sub-10W TDP for the MacBook, 15-25W for MacBook Air-class machines, 35W for the MacBook Pro, up to 100W for the iMac, and up to 200W for the iMac Pro and Mac Pro. That is around five design variations for those 25M units.
I still don't see how Apple would do it. Especially for the Pros: why would you design a CPU aimed at 100W+ when those machines represent less than 2.5M units per year? It would be a lot easier to just switch to AMD Zen 2 when the timing is right.
Apple doesn't have the rights to design x86, and it will never* get them. Only Intel, AMD and Via can do that. Intel and AMD cross-license each other's patents. Not sure how Via got the rights, but I think a successful blind reverse engineering effort and a court was involved at some point. So Apple would either have to get licenses from Intel _and_ AMD or it would have to go the same route Via did. Both sound unlikely. At the same time, Apple has an excellent ARM architecture going.
Patents run out. I should be able to design something using ideas from more than 17 years ago. Companies try to block this by filing continuing new patents but the time limit should allow only using the old ideas.
Not possible, from my understanding. The only reason Via has an x86 license is a settlement (I forget the details of the case) with Intel in 2003[0], after acquiring assets from National Semiconductor; under the terms of the settlement they got a 10-year license. That got extended to 15 years and is due to expire this year, last I heard.
Licenses to the x86 ISA can’t be bought or transferred via acquisition. Re: the other question someone asked about patent expiration, a patent expired version of the ISA is pretty worthless. Each new iteration gets new patents.
As for right now: speed and features. Apple is already pretty invested in ARM and has the in-house experts. Managing and developing two processor architectures in parallel is a massive duplication of effort, just to avoid paying ARM a license fee, a fee that Apple can easily afford.
In the long run, no there's no reason why Apple couldn't use RISC V in place of ARM, but not without a large upfront cost.
AMD's licenses from Intel have some strings attached. I think they basically cannot be bought while keeping the rights to x86. A bought AMD could possibly threaten to no longer grant licenses to Intel, but I'm sure Intel's lawyers have thought of that.
Apple buying Intel could work, I guess (anti-trust might get involved), but I don't think it would be a wise investment. Apple doesn't have the volume to keep Intel busy alone.
I don't think they'd do it either, or that they'd even need to. I just think it's kinda funny that Apple has enough cash to outright buy both Intel and AMD.
They'd surely try an in-house ARM design long before considering an x86 chip of their own design, especially given Apple's enormous success with their own ARM designs in the cellphone space. Rumor mill has been suggesting such a move for a while too.
Apple have successfully managed major instruction set transitions in the past, I think x86 to ARM would probably be the "easiest" one yet.
Intel could bet on technologies that let it leverage the close relationship between its design teams and its fabs.
Like integrating more non-CPU things into the CPU: FPGAs, neural coprocessors, DSPs, and so on. FPGAs are on the way, but simply coupling a CPU and an FPGA over QPI, while great, isn't the kind of integration I'm talking about.
What we could expect from Intel is real-time offloading to coprocessors, done automagically by the (very sophisticated, as on Intel x64) CPU control unit. Tight coupling might allow very fast (partial) reconfiguration of those FPGA-like coprocessors, and that's when you need your own fab, to make the coupling really tight and optimize everything down to the last bit.
I've been wondering what Intel's moves will be now they own Altera (FPGA). Was tinkering with dev boards with Cyclone V FPGA with embedded ARM - but if Intel kills that then they lose me to Xilinx zynq, I'm not interested in the Intel ISA.
Intel recently released the Xeon Gold 6138P processor with a built-in Altera Arria 10 GX 1150 FPGA. Considering all the obstacles on the way to merging two big companies, that was almost fast.
It's good (and expected) that Intel is merging their cores with Altera's FPGAs. My worry is that Intel will drop ARM (given the earlier StrongARM market exit) and abandon the market that Altera created for FPGA-ARM SoCs. Of course, they could assuage that fear by making a concrete public commitment to, say, a 5-year roadmap that includes ARM. But in the absence of that, you have to wonder whether investing in learning their tooling and ARM integration will be a waste of time. Which would be their loss, and Xilinx's gain.
Intel have historically been very good at fabrication, which represented a considerable competitive advantage. Outsourcing fabrication is much less capital-intensive and can be more flexible, but it's potentially less profitable because the foundry needs to make a profit. The relationship between a chip design company and their foundry partner is fairly complex (especially at cutting-edge nodes), with the foundry providing considerable expertise and design support.
Node size names have become fairly arbitrary, with no direct connection to any particular feature size. Intel's 10nm is effectively the same as everyone else's 7nm. In either case, the actual pitch between transistor gates is about 54nm. There are ostensibly standards set through the IRDS, but there's a clear marketing imperative to advertise a smaller node size.
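To make the "node name vs. actual geometry" point concrete, here is a rough sketch: a geometric upper bound on device density is set by the contacted gate pitch and the minimum metal pitch, not by the node name. The 54nm/36nm pitch values below are ballpark published figures for an Intel 10nm-class process; the calculation is a deliberate simplification (real density metrics weight standard-cell types, and utilization is far below 100%).

```python
# Rough upper bound on layout sites per mm^2 from the two pitches that
# actually characterize a process: contacted gate pitch (CPP) and
# minimum metal pitch (MMP). Pitch values are ballpark published
# figures for an Intel 10nm-class process; treat them as illustrative.

NM_PER_MM = 1_000_000

def sites_per_mm2(gate_pitch_nm, metal_pitch_nm):
    """Grid sites per mm^2 if a device sat at every pitch crossing."""
    return (NM_PER_MM / gate_pitch_nm) * (NM_PER_MM / metal_pitch_nm)

upper_bound = sites_per_mm2(54, 36)       # roughly 5.1e8 sites/mm^2
print(f"{upper_bound / 1e6:.0f}M sites/mm^2")

# Note: neither pitch is anywhere near "10nm"; the node name is
# marketing. Actual transistor density (on the order of 100 MTr/mm^2
# for this class of process) sits well below this geometric bound.
```

The takeaway matches the comment above: the numbers that matter are tens of nanometers, and two processes with different marketing names can have near-identical pitches.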
It's not a concrete distance, but more of a range. I think 7nm is the bottom of the range, so the distance would be 7nm-14nm; 10nm chips have distance ranges of 10nm to 20nm.
I think at that size it is very difficult to get exact figures.
A fab refers to a plant that manufactures chips, as opposed to designing them. I think Intel does both?
The design of the processor is specific to the fabrication technology. Notably Intel has been the industry leader in fab technology and capacity for some time. It's simply very difficult to compete with their ability to crank out CPUs at their volume and performance levels. Intel could "simply" crank out CPUs via TSMC but it wouldn't be to their advantage.
The size of a "pixel" in an IC, or at least it used to be. For convenience, just think of it as if it still is.
>Why can't Intel just buy a "7nm" processor from TSMC?
Intel can dual-source fab capacity (they make part of the Atom line at TSMC, part in house) and has done so in the past. Besides the technicalities, a lot has to do with pride.
During my time at Intel, the company was fab-constrained. Intel ran some parts on outside fabs. The decision as to which ones was easily made by sorting projects in order of gross margin per wafer: the highest-margin parts ran on Intel fabs, and when the fabs were full, the remaining projects had to hunt down fab capacity elsewhere.
Each generation of fab is significantly more expensive than the last. Pushing the edge of physics is expensive. Transistors are so small these days that fabs can't use normal light frequencies any more, and things like precisely focusing the light get trickier at extreme frequencies.
The result is that you have to almost double your volume with each generation, so there are fewer and fewer fabs running the current process. It makes AMD's decision to split off GlobalFoundries look pretty good in hindsight.
Even those companies with the leading process make a substantial number of chips on older process. So the bleeding edge CPU gets the latest greatest, but the chipset, flash chips, and memory chips are often a generation or more behind.
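The "double your volume" point can be sketched with a toy amortization model (all figures invented for illustration): if fab capex doubles each generation, amortized cost per wafer stays flat only when twice as many wafers share the bill.

```python
# Toy model: fab capex doubles each generation; amortized cost per
# wafer stays flat only if wafer volume doubles too. All numbers
# ($5B fab, 1M wafers at generation 0) are invented for illustration.

def capex_per_wafer(generation, base_capex=5.0, base_wafers=1.0,
                    volume_growth=2.0):
    """Amortized capex ($B fab / M wafers) at a given generation,
    assuming capex doubles per generation and volume grows by a
    factor of `volume_growth` per generation."""
    capex = base_capex * 2 ** generation
    wafers = base_wafers * volume_growth ** generation
    return capex / wafers

# Doubling volume keeps amortized cost flat across generations:
print(capex_per_wafer(0), capex_per_wafer(3))   # 5.0 5.0
# Flat volume means cost per wafer doubles every generation:
print(capex_per_wafer(3, volume_growth=1.0))    # 40.0
```

This is why players who can't double their leading-edge volume (like GloFo after losing AMD's 7nm orders) rationally drop out and keep older, fully depreciated processes running instead.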
Seems realistic that if you are behind and don't have a huge customer (like Apple or Nvidia) lined up, you just save a few billion dollars and let TSMC have it. TSMC will of course charge more without competition, making chips made at TSMC less competitive. If Samsung can't compete with TSMC (which remains to be seen), TSMC might well delay future shrinks.
The market loves Moore's law, but the stress is really starting to show. Physics is starting to interfere with what the market wants. Things like CPU clock speeds stagnating, power per chip doubling for the first time in the newest generation, and of course the ever lengthening product cycles.
It does make you wonder: when AMD and Intel double the normal CPU socket power from 95 watts to 180 watts or so, what are they going to do for the next generation?
That 95 watt number they provide is a minimum to provide the advertised speeds more than anything else. Current stock chips from both AMD and Intel already use much more than their rated amount if there is headroom thermally and there is power availability. The 8700k for example has no problem drawing >150 watts when turbo'ing all cores and being well cooled. AMD does this as well.
Advertised heat output should also be taken with a grain of salt. Both die size and material between the die and heatspreader can make two 95 watt chips have very different cooling requirements.
> It does make you wonder when AMD and Intel double the normal CPU socket from 95 watts to 180 watts or so.
Could they, and still expect to sell? It seems hard to tell the data center and supercomputing customers that they're going to have to install massively more cooling capacity.
ThreadRipper 2 is 250W and I don't see people complaining, so I wouldn't be at all surprised if K SKUs move up to higher TDP.
In servers it's already rumored that Intel will increase TDP to ~300W per socket. This doesn't require more cooling per se, you just fit fewer sockets into the data center.
These days TDP is a lie and everyone knows it. Both Intel and AMD measure TDP at base clocks, and their processors will happily pull another 75-100% over TDP while boost clocks are active, even without overclocking.
The idea that you could run an octo-core in the same TDP as a quad-core is obviously not correct, and it's largely because Intel and AMD keep pushing base clocks down farther and farther.
Not necessarily. I actually think this is a smart move on their part: TSMC is ahead on 7nm, and FD-SOI, RF, SiGe, etc. are big, profitable and rapidly growing markets in which GF has a differential advantage.
Agree this is a smart move. They should be targeting lower-volume markets where the 30% decrease in per-die cost at 7nm isn't worth the increased upfront costs. If they focus instead on decreasing the fixed costs of lower-volume ASICs, they will have a competitive advantage in long-tail markets.
And maybe, if they push their resources really hard into decreasing the costs of lower-volume ASICs, that would be a big win for GF in the long term. It depends mainly on progress in hardware synthesis tools and software. If tools like auto-generating optimized RTL from Haskell code became mainstream, and the cost of producing custom, reasonably small ASICs on modern fab processes dropped by orders of magnitude, we would get something similar to what happened to PCBs recently: there is no point in making them yourself even if you only need a dozen, just send the Gerber files to OSH Park and get your PCBs for $5 in two days.
GloFo was never long-term sustainable, so they basically had a choice between going bankrupt in 1-2 years trying to do 7 nm or coasting on 14 nm and slowly withering away over 5-10 years.
GloFo is long-term sustainable, just not as a leading-edge foundry. Old process tech doesn't go away; it just moves to producing lower-margin products. 65nm is still in mass production. 14nm and, more importantly, the FDX processes (which have lower chip-development costs and are therefore more attractive for the low-end market) will still be in full production 20 years from now.
Exactly. GF is making lots of chips on 130 and 180 nm process. There’s a lot out there that isn’t bleeding edge CPUs. We’ve got power chips, port controllers, high speed linear re-drivers. Not every ASIC needs tiny transistors, in fact many use comparatively huge ones and it makes no sense to use an expensive process.
I don’t see GF going away any time soon. They’re a huge player with a lot of customers.
I think it is a great short and long term move. The new smaller processes aren't getting us to faster clock rates, they are making things cheaper. GF can get to the same point by optimizing their current processes.
This is the type of news that will be relevant in 20 years.
There are only 3 foundries left: Intel, TSMC and Samsung.
If (when) Intel gives up, none of them will have strong roots in the USA. One of the biggest shifts in technological expertise from West to East in history.
It doesn't seem obvious that Intel will blink first, before Samsung.
Intel might start making noises to try and get government subsidization though, and I would expect Samsung to do the same to the Korean government.
Having an up-to-date fab is pretty clearly a national security asset; otherwise you are at the mercy of whoever you are buying silicon from. Maybe in the 90s or early 00s, when everyone was on the globalization train, we would have just let things go, but it's hard to imagine that happening now that hardball realpolitik is back in fashion.
The only chance of this happening realistically is if Intel decides to open its fabs to let others produce their own CPU designs in Intel factories. For example, instead of Apple going to TSMC for their next-gen ARM design, Intel could do a deal to win that business and produce iPhone CPUs in their fabs. I believe Intel still holds an ARM license from the XScale days, so I suppose an in-house design for Android devices could work too.
Historically Intel has been hugely reluctant to do this, as part of the deal when buying Intel chips was that to get access to their advanced fabrication you had to buy their own high-margin chip designs. Building other people's chips is traditionally a much lower-margin business, and this would be an enormous change to Intel's business model.
I could still see this happening, especially with the current outlook for Intel not looking as rosy as it once was, but it would be a major loss of face for the company that once ruled strong on x86.
The fact that Intel effectively have no market share in the hottest consumer computing market in history (smartphones) is a major failing on Intel's part. Intel selling their ARM business (XScale) to Marvell the year before the first iPhone launched increasingly looks like a pretty terrible decision in hindsight.
> The only chance of this happening realistically is if Intel decide to open their fabs to let others produce their own CPU designs in Intel factories.
Intel Custom Foundry has been a thing since 2010.
That said, it's incredibly low volume compared with the competition.
Intel Custom Foundry historically has not offered the same tech as their internal CPU foundry. Last time we engaged with them their IP offering (memories, serdes etc) was very uncompetitive. It's possible that's now changing but if you are wondering why it's been low volume, that's why.
The more I read about this, the more it seems likely the next big architecture could come from an alliance between Apple and Intel (similar to how PowerPC was owned).
Although Apple are big on ARM, I can’t see them completely abandoning x86 on the desktop. For Apple themselves it’s not that big a deal, but for 3rd party software vendors and users, switching to another architecture barely a decade after the last switch would be painful. On the hardware side Thunderbolt 3 is still Intel only, and clearly Apple are pushing big for that to be the future.
On the other hand mobile has shown the best designs have a mix of processor cores, some optimised for power and others optimised for performance. Intel haven’t shown anything like that for x86 (other than p/c-states), but clearly the same would be beneficial there too.
So why not Intel performance cores, paired with ARM low power cores? Apple have the low power design know how, and Intel have the fab know how. Both companies have a lot they could benefit from this partnership. Maybe more importantly, from a political side it would be an “all American” chip.
There are plenty of fabs left. The problem is that each iteration comes with a staggering cost that takes time to make back. A lot of plants quit chasing the latest and went to niche markets where the money is better. That said, if some of the remaining competitors drop out, and prices rise enough, you'd see a couple reemerge.
The unusual thing happening recently is that legacy processes up to 90nm began "coming back from the dead," because the bleeding edge became so "congested" by very few behemoth consumers with exclusive deals with fabs: GPUs, top-tier phone SoCs, and high-end network switch chips.
From my memory, 65nm was the last process on which a "cookie cutter SoC" was still a good business. But with more opportunities coming up in the "niche microcontroller" market today, thanks to the boom in "smart things," a generic cheap low-power process might become a viable business again.
GloFo might have just noticed that and is trying to capitalise by being first in the new niche: tier 1 fab service on a cost-optimised legacy process.
They were already the biggest fab for companies for whom always getting the best process is not the raison d'être, and who can't afford the gigantic MOQs of last-gen processes.
So they were picking up whoever TSMC was losing due to MOQs and a lack of first-class treatment. TSMC was too greedy, putting so much focus on its tier 1 superplayers.
Question though: what will this mean for their no. 1 customer? Though they have announced a move to TSMC, their 7nm might still not materialise for quite some time, and they still have to do their low-end chip tapeouts somewhere.
There was an interesting slide from a few years back that showed the net profit (less investment) of all the leading edge contract foundries since the founding of TSMC. It was very easy to see that since then, TSMC has earned more than the total net profit of the industry. Everyone else invests a lot of money to maybe stay somewhat competitive and still lose money, while TSMC is swimming in it.
It's actually great news for AMD. Due to the terms of the spinoff of GloFo, AMD had minimum order volumes they HAD to send their way, and that caused issues when they were ramping up Ryzen, causing GloFo to have to license the Samsung process so they could keep their end of the bargain up.
This frees AMD to go exclusively with TSMC, or split their fab needs between Samsung and TSMC.
It's possible that AMD abandoned GloFo first (they've been coyly saying "a variety of fabs" for months) and then without a lead customer GloFo decided to cancel 7 nm.
AFAICT this means TSMC has no competition for 7nm for their fabless customers, which means they have no incentive to invest in getting to the next process node — their customers' BATNA is now "spend several billion dollars building your own <7nm fab." So I think this is the end of Moore's law for everyone except possibly Intel and Samsung.
They need their customers to be competitive with their competitors, right? If Intel and Samsung are still in the game, then AMD and Apple can only survive if TSMC remains competitive.
That's sort of AMD and Apple's problem, not TSMC's. You can't expect TSMC executives to risk TSMC's future by building a multibillion-dollar 5nm fab, which may fail, when there are plenty of other customers out there for their unique (?) 7nm process, on the theory that not moving to 5nm would be bad for the overall health of the semiconductor industry.
What are the political implications of the entire world becoming so reliant on TSMC, with nearly all of its manufacturing concentrated in Taiwan?
It's no secret China is eyeing semiconductor manufacturing as sort of a last frontier they need to cross before they become a vertically integrated powerhouse. If China took over Taiwan, and assumed influence over TSMC, wouldn't this be a major achievement?
1. Why aren't TSMC scattering their fabs across different continents - not only for political, but also to protect against natural disasters etc?
2. How much of the USA protection of Taiwan takes into account semiconductor manufacturing?
Taking over Taiwan would be extremely tough. There are geographical challenges which would be tough to work out, and the population would be extremely hostile. It would be impossible without a beach landing, which China doesn't have the right equipment or training for, and it's unclear how willing the PLA would be to go along with it without a very good reason. Additionally, what's the last war China was involved in? The short-lived 1970s invasion of Vietnam? You can't consider suppressing the Tiananmen protests anything similar to a land invasion of Taiwan, even without the US security guarantee and potential nuclear umbrella coming into play. This scenario increasingly seems to be a non-starter for a variety of reasons.
I've met and worked out with both Chinese and Taiwanese soldiers. This is a single, unprofessional perspective but you might find it interesting.
Both countries do a sort of military training for their youth, but only Taiwan (last I checked) does a year of required service. That said, there are almost no actual combat vets in either country, and Taiwanese military service is basically "sweep gravel, peon."
BUT! The Taiwanese will come with far greater discipline, based on my experience in on-base gyms. Chinese soldiers spend their days derping around, playing ping pong, occasionally running around a track. Some of them get sent out to the fringes of the country to "suppress dissidents" but the majority sit around twiddling their thumbs (information told to me by privates, so, grain of salt).
Taiwanese soldiers, volunteer and conscripts, do seem to get a great deal more actual military training, in terms of equipment and exercises. And, maybe unrelated, but their general strength levels are way higher than their Chinese counterparts (like I said, I hung out in their gyms).
In any case, an invasion of Taiwan would "fail" by most measures of military success. China could probably nuke the island into ash, or rain shells on it until there was nobody left to surrender, but there is a highly volatile generation or two growing into a distinctly "Taiwanese" identity. They were political enough to take over their own parliament buildings, while being supported by local businesses with food and water. My unprofessional opinion is they sure as shit would be an aggressively horrifying guerrilla force, especially because ~80% are already trained.
But a smoking crater the shape of Taiwan is not that useful to China. A big part of why they want Taiwan is for the businesses, which you don't really get if you nuke the island from orbit.
My individual data point: I spoke recently with a Taiwanese who finished his military service in the past couple years. In terms of physical strength his account totally corroborates what you said. He wasn't in shape before conscription but got jacked during it, and has stayed that way afterwards.
That said, what he mainly emphasized is how outdated Taiwan's military equipment is. He said most of it is stuff they got from the USA in the 60s and 70s, and not the USA's newest weapons at the time. Their current vehicles break down a lot, and he didn't train much with artillery during his service.
If China were to invade (again, highly highly unlikely) the Chinese would have a huge technological advantage. The Taiwanese identity of the younger generation is legit, but given the relatively cosmopolitan/capitalist/individualist cultural climate, I'm not sure everyone'd be willing to lay their lives on the line for national identity. A more likely outcome is a mass exodus over many years of anyone who disagrees with Chinese ideology and has the wherewithal to get out.
Sorry, but what good is it to kill the people who actually staff TSMC's foundries? The premise of this entire thread is that the PRC would want their hands on TSMC's foundries; that means an invasion that actually preserves Taiwanese infrastructure and lives, not an impersonal shelling of the whole bloody island, even if they could do it without triggering a war with the US (debatable, but still a genuine consideration).
And I'm sure the US thought the technologically poor North Vietnamese would fall quickly too. You shouldn't underestimate the role of a people's ideological identity when it comes to taking over a country.
Unlike in Vietnam, I suspect that the Chinese leadership will be committed to a long term conflict.
America never had national identity at stake in Vietnam. China has claimed Taiwan since Mao Zedong. Taiwan is a way bigger deal for the Chinese national identity than Vietnam ever was for the US.
More importantly the Chinese leadership doesn’t need to worry about an election cycle. They can easily think in 20-40 year timeframes, while American leaders can go 4 years ahead at most.
China is not going to invade Taiwan militarily and they don’t need to. They can gradually encircle them with alliances with their neighbors and dominate them economically until they’re forced to unify with them.
China is bolstering their marine and amphibious technology. If they do not already possess the ships necessary for large scale sea-to-land assaults, they will in short order. On top of this, China is also militarizing their coast guard assets.
I hope the "peaceful rise" claim holds true into the future, but a lot of the military technology they're developing is not for defensive purposes.
Considering that Taiwanese engineering programs were, back when I was a recruiter, roughly equivalent to USA schooling (depending on the school), they shouldn't have a hard time finding a job here either...
Although China is increasing its amphibious capabilities [0], it's unlikely that the US would allow Taiwan to be taken over militarily. It's much more likely that China would go for a political takeover, and I doubt that'd happen in the next decade or so, by which time China may very well have their own leading edge fabs.
> A physical blockade in some form could be coupled with intense economic pressure, and cyber operations. This conflict escalates, and because of domestic nationalist pressure as well as factional infighting, Xi chooses confrontation instead of a backdown.
> PLA cyber, missile, air and maritime forces are deployed against TW in order to neutralize its ability to defend itself. At this stage, an invasion is still not the preferred plan because that would entail a costly campaign involving amphibious assault and urban operations.
> Instead, precision conventional missile, counter space, maritime/economic blockade, and cyber operations are used to bring TW to its knees. For the #CCP this is not only a campaign to obtain limited concessions, but to achieve its vision of unification.
What would be the point? Taiwan isn't going to roll over without a fight. TSMC's physical and intellectual/human assets would be blown to bits. The world would boycott China. The US would shut off China's shipping lanes. China's economy would nose dive. The number of dead PLA soldiers would be enormous. All for what?
If China is going to take Taiwan and hope to profit from it, they have to do it with a soft approach through engagement, business, and diplomacy -- a scaled up version of Hong Kong and Macau.
TSMC was founded by the Taiwanese government to bring jobs and boost Taiwan's economy back when the economy wasn't that great over there. Why would they want to scatter their fabs? It's a major part of their economy. And it's not just the fabs, but all the ancillary companies that feed into TSMC. I have been there. A lot of people work there or in businesses that support it.
Well, I met a number of TSMC oldtimers in Singapore when I was there on an exchange program.
And I had a talk with them once on this exact topic. Among them, "the black safe" is a euphemism for the company's worst-case scenario. Nobody they asked knew exactly what is inside, but some people were instructed decades ago that in the "worst case scenario," men with military training should open the safes with a code that would be provided, with nothing specific said about what would happen next. They said that judging by their construction, they look exactly like something built for explosives storage.
I once dreamed of working in the semiconductor industry, having been fascinated with electronics from an early age. I studied physics and electronics engineering on my own like hell through my late teens, had a mentor who connected me with companies in Taiwan, and almost got my foot in the door at a Taiwanese fabless, though only in an "office bellboy" position.
Despite all of that, and despite my having a letter of recommendation for NTU and preliminary approval at Nanyang, my parents sent me "to study businessy things rich people do."
No chance of Apple buying all of TSMC: that would make them a huge target for regulators and place them in a precarious position with competitors in regards to IP protection (IP Chinese walls etc.). If Apple buys something, it keeps it to itself, and honestly it really doesn't need all of TSMC (not to mention TSMC is a bit of a big fish at a $216bn market cap).
If I were Apple, I'd start my own joint venture with someone, developing exclusive fabs/nodes for future Apple processors - if only to exert pressure on their fab manufacturing partners like Samsung.
> He basically outlined how each process shrink would allow more transistors, but also get more expensive. His conclusion was that fewer fabs would use the leading process in each generation and that costs would almost double for each generation.

> It wasn't a particular opinion at the time. Generally it was thought if you were serious about making CPUs, that you would have your own fab.
Impressive how true his predictions were.
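The "costs almost double each generation" claim compounds fast. A quick sketch of what that implies (the $1bn starting figure and generation count are illustrative assumptions, not historical data):

```python
# Sketch of Grove's claim: leading-edge fab cost roughly doubles with
# each process generation. Base cost is an assumed illustrative figure.
def fab_cost(generations, base_cost_bn=1.0):
    """Fab cost in $bn after the given number of generational doublings."""
    return base_cost_bn * 2 ** generations

for gen in range(6):
    print(f"generation {gen}: ${fab_cost(gen):.0f}bn")
```

Five doublings already puts an assumed $1bn fab at $32bn, which is why fewer and fewer companies can afford to stay at the leading edge each generation.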