I'm in agreement. nVidia's history of not playing nicely with others does not make this sound like a good deal, especially for entrenched ARM vendors. (Apple won't be fazed; they basically design everything themselves, so they don't really even care about the direction of ARM or the architecture. They could hard fork tomorrow, or could have two years ago, and nobody would notice, since they're entirely vertically integrated.)
But, on the plus side, it's the biggest opportunity RISC-V will ever get. Almost overnight, anyone who was building ARM cores will see investment in RISC-V as a prudent hedge against nVidia's future core designs not being licensable. SiFive will very quickly become a multi-billion dollar organization...
Apple’s ARM chips contain an awful lot of different components, some of which are highly customised. The neural engine stuff they use for image processing, for example. The CPU cores are custom tuned, but I believe they are still 90% based on ARM CPU core designs. For example, the Secure Enclave is actually an ARM-developed CPU extension. Apple focuses their efforts very specifically on areas that give them features that grant a competitive advantage. They don’t reimplement stuff for the sake of it. One of their big performance wins over other ARM licensees isn’t even really in the CPU, it’s the very well designed and generous cache memory systems.
Having said that, looking to the future I think Apple is clearly willing to pour more resources into their CPU programme than any of their competitors. This means the risk to them is probably a lot less than to, say, Qualcomm or Samsung, as they have a much better capability to go their own way. As time goes on their designs will probably diverge more and more from mainline ARM anyway.
Apple has an architecture license from ARM, which allows them to design their own cores, yes, but it also includes all the benefits of the lower tiers, including rights to all the ARM designs.
This would have to be the case, because Apple still uses many of their earlier ARM designs in products, which we know used ARM designed cores and would definitely require core design licenses.
I think nVidia could run into some serious anti-trust issues if they tried to interfere with the ARM ecosystem. There's far too much at stake here. Is there a phone on the planet that doesn't run ARM?
The Asus Zenfone 2, released 5 years ago, ran on x86 (Intel Atom) [1]. I know someone who was still using it this year. They eventually switched, but the phone still works. So, to the question:
> Is there a phone on the planet that doesn't run ARM?
Well, technically, yes ;-)
This does not change anything to your point though.
I bought the Zenfone 2 which had an Intel processor, and had a similar experience to the sister comment. The only super annoying thing that happened was when Pokemon Go came out all my friends had it but it didn't support Intel. By the time they made it available for the intel phones (which was a few weeks) the game wasn't cool anymore. :/
I used the ZenFone 2 for about a year ~3.5 years ago; it worked surprisingly well. It was impressive that it really "just worked" like a normal ARM device.
There were some annoying software aspects (the Play Store only allowed me to download x86 apps even though it could run ARM apps), and the device was 100% a battery hog - but cool tech.
Intel gave up because Atoms for smartphones were being sold at a negative price after marketing incentives. They couldn’t even get volume from manufacturers when it cost less than zero, because no one wanted to invest in the upfront R&D costs.
> MIPS was pretty popular among early Shenzhen tablets.
> Then they just disappeared. Either MIPS screwed up or ARM did something really smart. My money is on the former...
MIPS SoCs simply fell too far behind. They kept a presence in STBs and router boxes for some time, but even there they lost due to the overall "deadness" of the entire ecosystem:
Want latest compiler work, and mainline GCC support? Go for ARM
Want not to have to support your own Linux fork? Go for ARM
Want more IP from a single vendor than just the core, interrupt controller, and DMA? Go for ARM
Want to work with an IP vendor whose sales are not part time lawyers? Go for ARM
When MIPS understood they were losing, they simply gave up on competing and improving.
And in the end, the lunch went to MediaTek, because it was the only SoC vendor ready to ship competitive SoCs with a 4G baseband without rapacious licensing deals like Qualcomm does.
Actually at that time MIPS suddenly came back to life and had a big push with improved designs and a few deals resulting in pretty decent MIPS SoCs. Performance was okay, battery life was good, and they were first to market with the latest Android release (I think it was 4.0 or 4.4).
But then everything suddenly stopped and they left the market altogether.
Edit: Regarding this:
> Want to work with an IP vendor whose sales are not part time lawyers? Go for ARM
One of the founders of OpenCores (RIP) told me that ARM lawyers contact him _within hours_ of someone pushing a design they don't like. While MIPS has a bad rep, ARM is probably more aggressive.
> Want to work with an IP vendor whose sales are not part time lawyers? Go for ARM
Er... not convinced about this one.
ARM has been quite lawyerish sometimes.
For a while, parts of QEMU could not be published, at ARM's request, because they implemented instructions ARM did not permit a software emulator to emulate.
At least one non-commercial design on opencores.org was removed at ARM's request.
By comparison, for a while the MIPS ISA was completely open for anyone to implement, in software or hardware. That's been retracted now, but it was nice while it lasted. ARM has never done that as far as I know.
Even before that, I used devices based around MIPS without a licence a long time ago; so long as they avoided implementing one or two "special sauce" instructions, that was fine. The instructions weren't necessary, just a neat, patented optimisation, and I have a GCC patch somewhere for building the target without those instructions.
So I don't know about today, but until recently I would have consistently regarded MIPS as much less lawyerish than ARM.
I have not seen any activity from MIPS on GCC for years.
Making a Linux port for a given MIPS-based SoC is a known pain. They should've worked towards not requiring any hardcoded configuration for each SoC at all, like Linux on ARM does these days with device trees.
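To make the contrast concrete: on ARM these days the kernel learns about the board from a device tree passed at boot, and drivers just declare which compatible strings they bind to, so there's no per-SoC board file to maintain. A rough sketch of what that looks like on the driver side (the "acme,demo" compatible string, register window and "clock-frequency" property are invented for illustration):

    /*
     * Minimal sketch of a device-tree-probed platform driver, illustrating
     * how ARM Linux avoids hardcoded per-SoC configuration. Everything
     * board-specific lives in the .dts, not in this code.
     */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>
    #include <linux/io.h>
    #include <linux/err.h>

    static int demo_probe(struct platform_device *pdev)
    {
            struct resource *res;
            void __iomem *regs;
            u32 clk_hz = 0;

            /* The register window comes from the DT "reg" property. */
            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            regs = devm_ioremap_resource(&pdev->dev, res);
            if (IS_ERR(regs))
                    return PTR_ERR(regs);

            /* Board-specific tuning also lives in the DT, not in a header. */
            of_property_read_u32(pdev->dev.of_node, "clock-frequency", &clk_hz);
            dev_info(&pdev->dev, "probed, clock-frequency=%u\n", clk_hz);
            return 0;
    }

    static const struct of_device_id demo_of_match[] = {
            { .compatible = "acme,demo" },  /* matched against the board's .dts */
            { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
            .probe = demo_probe,
            .driver = {
                    .name = "acme-demo",
                    .of_match_table = demo_of_match,
            },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");

The same kernel binary then boots on many different boards just by swapping the .dts, which is exactly what the classic per-board MIPS ports lacked.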
>I think nVidia could run into some serious anti-trust issues if they tried to interfere with the ARM ecosystem.
I seriously doubt it. The Broadcom-Qualcomm sale was blocked because there were concerns that the former would inhibit the latter's development of telecom technologies, resulting in a strategic disadvantage to the U.S. But in the case of the supposed Nvidia-ARM sale I see quite the opposite, i.e. a huge strategic advantage to the U.S., so I feel any anti-trust action would at least not be State-initiated.
But whether other U.S. Big Tech companies that have invested heavily in the ARM ecosystem would oppose this remains to be seen. Even there, I wonder whether their fear could be substantiated. What if Nvidia just signs a good agreement with them to alleviate their fears? After all, Apple and Qualcomm were at each other's throats; look where they are now.
In the end, I think the deal will just go through, and it's those of us who are worried about the future of our Raspberry Pis, among other things, who will feel lost (and actually lose).
> In a merger of this scale all the big powers (US, EU, China, and I assume also UK now) have basically veto right.
> There are countless examples where country 1 has stopped a merger between a company from country 2 and one from country 3.
From what I've seen, the only method of enforcement for a third-party country is to ban the sale of goods in their respective markets following the merger (like GE/Honeywell in the early 2000s). Would that even be possible in this case?
Not sure how this is enforced. Generally the merger falls apart if there are any indications of this happening.
Don't forget that these are public companies and a stopped merger can affect their stock. Besides, major investors would eat the board alive if they put the company in such situation.
Antitrust regulators' power is determined by the size of the market, not the location of the company HQ.
Antitrust regulators for big markets like the US and the EU can effectively block mergers involving any internationally operating company. Both ARM and Nvidia need access to the EU - their customers certainly do.
> Antitrust regulators' power is determined by the size of the market, not the location of the company HQ.
> Antitrust regulators for big markets like the US and the EU can effectively block mergers involving any internationally operating company. Both ARM and Nvidia need access to the EU - their customers certainly do.
They could block the merger and if the companies decided to go ahead with it anyway, the EU would ban imports. But, wouldn't that further consolidate the market?
What I mean is, in what practical way could the EU ban ARM/Nvidia from their market, considering these manufacturers' chips are in many devices? Would that mean devices with those components would also be banned?
What I'm trying to get at is that, with a sufficiently large merger, the resulting company may produce products so ubiquitous, and so deeply integrated into other products, that it would be impractical to bar them from your economy.
Not that I disagree with you, but ARM doesn’t sell hardware—they sell licenses to designs of the aforesaid. Nonetheless, don’t forget that nVidia, insofar as it sells hardware in the EU, would definitely be within the scope of local anti-trust authorities.
I don’t see this being much liked by those authorities, either. nVidia is clearly interested in doing this deal because it will give them competitive advantages, and it’s pretty clear from their behaviour towards other firms (including Apple!) that they already have so much advantage that they don’t have to “play nice”. Giving them further purview to do as they please is basically what anti-trust is designed to prevent, whether you focus on industry relations or on their ascendancy over consumers.
I think there are some MIPS and SH-3/SH-4 smartphones from the early 2000s still around.
I had an SH-3 PDA in 2001, but Microsoft dropped support for SH-3 and MIPS in Windows CE by the time “Windows Smartphone 2002” came out (my 2002 HTC Canary runs ARM, IIRC) so it’d have to be a non-CE platform. What did the Palm Handspring use?
What’s the current roadmap for CE? I saw there’s a new CE environment in Win10 IoT, but it feels like too-little-too-late to save the myriad of embedded enterprise CE systems.
I saw the WPF-esque XAML demos for CE7 - I was really impressed and also hopeful that it would lead to smooth fluid UIs in embedded devices like car infotainment systems and home thermostats.
Oh how naive I was!
(I assume the push for WPF-like XAML in CE7 was because at the time (2011-2012?) there was still hope of a CE7-based Windows Mobile competing with iOS and Android, and Microsoft realized the mistake of not modernizing CE's UI system at all between 2002 and 2012. Windows Mobile 6.5 was a hideous eyesore, and all the pretty home-screen functionality was literally just skin-deep.)
Actually, as long as there are a number of ARM design licensees, the situation would be way better than it currently is for x86: beyond Intel and AMD there is basically no other player in that field (VIA? Transmeta? Are they even noticeable?).
The difference is that Nvidia would be a customer and the owner of ARM at the same time. An Nvidia controlled ARM will primarily exist to make Nvidia happy at the expense of all other ARM customers.
nVidia already makes CPUs and GPUs (they make the CPU that goes into the Nintendo Switch, for example). Buying ARM doesn’t change much in that regard, except that they can save on licensing fees and it puts them in a position to disrupt competitors who also produce ARM processors.
I would think not licensing new IP would be the main concern: basically hogging ARM IP completely for themselves and their products. It would take a couple of years for companies and devs to switch to a different architecture, but you can be sure that there are competitors willing to fill that gap.
They need to pay $30bn to buy ARM, which means they want to maximize the return on that purchase, and that means selling more licenses, not disrupting things so that competitors look elsewhere.
It’s not that they will go out of their way to harm it. The ecosystem will change from being ARM's top focus to becoming more of an afterthought. An Nvidia-owned ARM could end up taking more of a take-it-or-leave-it stance, because those customers are no longer the top priority.
>Also, it's odd that they refer to the new stuff as Apple Silicon.
Apple supposedly pays Intel more for their CPUs than other OEMs do, to avoid having the Intel sticker on Macs. So I don't see this as odd; it's the usual Apple branding tactic, and from the looks of it (ARM being under threat of sale) it seems to be a wise decision, as the average Apple customer won't mind if Apple switches the architecture for its silicon.
>Apple supposedly pays Intel more for their CPUs than other OEMs do, to avoid having the Intel sticker on Macs.
It would be more accurate to say Apple does not receive the Marketing Discount from Intel.
In reality I seriously doubt those discounts don't materialise in some form. Intel could have given Apple the TB3 controller for free as part of the deal. Likely worth less than the sticker discount, but still a decent cost saving.
> "Apple ... could hard fork tomorrow, or two years ago, and nobody will notice"
Could they really do this? Start making incompatible designs? I would have thought that Apple's ARM architecture license would include terms to preclude incompatible forks.
> The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple Computer (now Apple Inc.) and VLSI Technology.[17][18][19] The new company intended to further the development of the Acorn RISC Machine processor, which was originally used in the Acorn Archimedes and had been selected by Apple for its Newton project. […] The company was first listed on the London Stock Exchange and NASDAQ in 1998[23] and by February 1999, Apple's shareholding had fallen to 14.8%.[24]
I’m (half-)British and I consider “could care less” a hideous American corruption. Recently I successfully argued my case with an American friend, not coincidentally a physicist, by analogy to zero-point energy.
Both "I could care less" and "I couldn't care less" make sense. If you emphasize the second word ("I could care less") and phrase it in a tentative tone of voice it conveys the meaning - i.e. it's possible that I could somehow care less, but it's a theoretical possibility only.
FWIW Professional Linguists think it's perfectly fine to say "I could care less" and that people who peeve about it are just pointless peevers who don't understand how language works.
That's not the point: "I could care less" is grammatically correct, but the meaning is not what was intended. If I had a cup and wanted to tell you there's no more beer in the cup, I wouldn't say "there's beer in the cup", even though the sentence "there's beer in the cup" would be accepted by professional linguists.
> That's not the point: "I could care less" is grammatically correct, but the meaning is not what was intended.
The intended meaning is obvious. There's plenty of colloquial phrases which say one thing but mean another: "I could murder him", for example, doesn't literally mean that. That is how language works.
And, if you saying "there's beer in the cup" was understood by everyone as meaning "there's no more beer in the cup", professional linguists and dictionaries would be A-OK with that interpretation.
By analogy to vectors though, 'I could murder him' is directionally correct but magnitudinally wrong. Whereas 'I could care less' (as used) is directionally wrong - no you couldn't, your point is how little/not at all you care, so stop talking about the existence of an amount that you do care!
> And, if you saying "there's beer in the cup" was understood by everyone as meaning "there's no more beer in the cup", professional linguists and dictionaries would be A-OK with that interpretation.
The fact that "I could care less" is now clearly understood is not intentional. It's an adaptation to a mistake. It's similar to how people say "irregardless", which is clearly an error. It's unambiguous, and everyone knows what you mean. It's still a mistake, and using "regardless" is strictly better.
> It's similar to how people say "irregardless", which is clearly an error.
It's an error that's been made since 1795, though, and people have been understanding it, which pretty much qualifies it as being incorporated into the language.
(I should note that I am strongly descriptivist and think that prescriptivists are silly people.)
> using "regardless" is strictly better.
I think "regardless" is better, yes, because it's a lot easier to say but that's just my opinion, not a Gilt Edged Rule Of English.
that's kind of a stretch; I would imagine "I couldn't care less" is vastly preferred, and the people that use "I could care less" are most likely doing so out of ignorance and not an intent to convey the subtle "theoretical possibility"
There was the Apple GPU fiasco. Apple laptop GPUs were failing at an extraordinary rate in 2010-2011, and Apple was getting harangued in the media about it (which was justified). Apple eventually announced a program to cover replacement of the GPUs, but it was very delayed. I'm 100% convinced the delay was caused by Nvidia refusing to accept responsibility. When the program was released, in order to have the repair completed at no cost, your computer had to fail a very specific graphics diagnostic, and the failure would be linked to your serial number in Apple's repair CRM. At the time, no other Apple repair program had this requirement — Don't get me wrong, it makes sense to use diagnostics to verify hardware issues, but they aren't the only tool you can use. I'm convinced Nvidia would only reimburse Apple for computers that had a recorded failure of this specific test.
Since 2014, Apple has almost exclusively used AMD graphics cards in the computers that do have discrete GPUs, and I don't think it's unreasonable to suggest this was motivated by their terrible experience with Nvidia.
(I'm aware I'm not really backing this up with evidence, but I think the publicly available facts about the whole situation support this narrative)
Apple also refuses to sign the kernel extension necessary for NVidia GPU drivers to run. NVidia had made a GPU driver for High Sierra, but literally a version later Apple locked down the kernel so that only signed drivers could be loaded. Anyone who was using Nvidia GPUs through, say, an external GPU box got screwed.
I had really hoped that the external GPU thing would take off and give us a good option for gaming on Macs. I bought the dev kit, and it was nothing but a headache, which stopped working at High Sierra too, because they stopped supporting TB2. I'm bitter about it. It really seems like we ought to be "there" by now, but it's starting to look like it will just never be a thing.
It was reported that the problem was NVidia switching to lead-free solder for their chip-to-motherboard connections but not also switching the potting material to one that had the same thermal expansion coefficient as the new solder.
> Don't get me wrong, it makes sense to use diagnostics to verify hardware issues, but they aren't the only tool you can use. I'm convinced Nvidia would only reimburse Apple for computers that had a recorded failure of this specific test.
In 2010, Apple wasn’t exactly a beleaguered company. It wasn’t like NVidia not reimbursing Apple would break the bank. Apple should have repaired it just to protect its reputation.
Jobs himself was famous for saying to high level employees that at a certain level of authority, you don’t get to make excuses.
I do think Apple should have just undertaken the repairs much, much earlier. I also think it was an incredibly complex situation, and my guess is Apple was flabbergasted that Nvidia initially wouldn't reimburse them for the failed parts. So, internally at Apple, my guess is that it was about the 'principle' of the matter. They also probably recognized that if they started replacing the boards of their own volition without an agreement in place, they would never be able to get any compensation from Nvidia at all.
Again, I agree with you, apple should have launched the program much, much earlier.
But, I mean, that's beside the point, because this isn't a post about how Apple handled these failing GPUs, it's about Nvidia's inability to work with others.
100% not true; my Dell Latitude at the time also died. There were major class-action lawsuits at the time, and basically all the manufacturers had to accept liability and fix them.
Discrete GPUs break all the time, customers are just used to it. Maybe it’s improving since the GTX1xxx/RX4xx era, but they were extremely unreliable when Apple was using them and the industry was switching to RoHS-compliant Pb-free solders.
I recall the GeForce Partner Program raised some controversy, although I couldn't say whether it was bad or good, or why.
There's also the issue with them refusing to open source Linux drivers. Supposedly this is because they throttle workstation GPUs to create a market for higher-end versions while reducing the manufacturing diversity, but I haven't heard a definitive source for this.
Not sure about this, but they definitely prevent the loading of drivers for GeForce cards if they detect a hypervisor (Quadro cards work). Nvidia claims that it's a bug, and that they're not fixing it only because GeForce cards are not sold to be run in virtual machines. At the very least it's a dubious claim since for a short time there was an arms race as workarounds were figured out and the next version of the driver would detect them...
Of course it's market segmentation by obscurity, so it only lasted a few weeks and it's trivial to work around, but it's still quite shady since there's absolutely no ill effect from the workaround.
It is absolutely not a bug. Disassembly of the NVIDIA drivers shows that it's looking for specific KVM signatures, and if you modify those signatures, magically it starts working again...
Removing the KVM signature is now the standard solution. But there's still a working driver patch that removes the virtualization check; it's available here:
* sk1080/nvidia-kvm-patcher: Fixes "Bug" in Nvidia Driver preventing "Unsupported Configurations" from being used on KVM
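For the curious, the kind of signature involved is visible from ordinary guest code too: CPUID leaf 1 has a "hypervisor present" bit, and leaf 0x40000000 returns the hypervisor's vendor string ("KVMKVMKVM" for stock KVM). A rough sketch of that check (x86 + GCC/Clang) - not a claim about exactly what the driver does internally, just the sort of signal it keys off and what the signature-hiding workarounds change:

    /* Read the hypervisor CPUID leaves: leaf 1 ECX bit 31 is the
     * "hypervisor present" bit, and leaf 0x40000000 returns the vendor
     * signature in EBX/ECX/EDX ("KVMKVMKVM" for an unmodified KVM).
     * Build with: gcc -o hvsig hvsig.c  (x86/x86-64 only)
     */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char sig[13] = {0};

        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        printf("hypervisor bit: %s\n", (ecx & (1u << 31)) ? "set" : "clear");

        /* Hypervisor-defined leaves start at 0x40000000. */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        printf("hypervisor signature: \"%s\"\n", sig);
        return 0;
    }

Hide or rename that signature, as the workarounds above do, and a check like this sees nothing unusual - which is presumably why the arms race only lasted a few driver releases.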
"Fuck you, pay me". Nvidia explicitly forbids "datacenter usage" of their drivers outside of specific licensed-for-datacenter product stacks. (The only exception is for crypto mining, for some reason.) This isn't the only product segmentation in their lineup, either. CAD software optimizations are only available for Quadro cards and up, virtual GPUs are only available for GRID cards, etc. The same GPU they'll charge $600 to gamers for will often go for $3000 or more for businesses.
> The only exception is for crypto mining, for some reason.
I think this is because they were outcompeted by AMD in the GPU crypto days, so they didn't want to get in the way of that (nor were they likely to succeed in blocking this use).
Why does Intel cripple ECC and I/O virtualization [0] in high-end consumer desktop systems? Force people to buy Xeon, even if the performance is otherwise equivalent or higher for the use case. Why does Nvidia restrict virtualization? Force people to buy Quadro and Tesla.
> nVidia
FYI, a few years ago nVidia officially changed their name to "Nvidia"...
[0] IOMMU is not only a virtualization feature, it's also an important security feature to protect the host from DMA attacks by malicious peripherals (e.g. 1394, ExpressCard, Thunderbolt, USB 4). Fortunately Intel has stopped crippling the IOMMU (VT-d) since Skylake, but ECC is another story.
I stand corrected. I also found a clarification from Wikipedia on the situation.
> "From the mid 90s to early-mid 2000s, stylized as nVIDIA with a large italicized lowercase "n" on products. Now officially written as NVIDIA, and stylized in the logo as nVIDIA with the lowercase "n" the same height as the uppercase "VIDIA".
They don't want people using GeForce cards in compute servers/clusters. It's artificial market segmentation, since the workstation cards are higher margin. They want customers to pay the big bucks, since hypervisor setups are basically only used in business and big-budget settings.
> I recall the GeForce Partner Program raised some controversy, although I couldn't say whether it was bad or good, or why.
It was about NVidia wanting to make it so their 3rd-party GPU partners (EVGA, Asus, etc.) could only sell their popular GPU brands with NVIDIA chips in them, so for AMD they'd have to come up with a brand customers are entirely unfamiliar with.
> There's also the issue with them refusing to open source Linux drivers.
It's not just that. AMD didn't open-source their original Linux driver - presumably there were some 3rd-party licensing issues - but they wrote a new one that is open source.
NVIDIA doesn't even let others write an open-source driver for them: they make it purposely difficult to reverse-engineer, sign their firmware (which they only release with a massive delay), and generally refuse to cooperate.
> It's not just that. AMD didn't open-source their original Linux driver - presumably there were some 3rd-party licensing issues - but they wrote a new one that is open source.
Even further than this, they created such a healthy environment for it that there are now two competing open source Vulkan drivers, and the third party one (RADV) is usually winning by a bit, and is now directly supported (with staff) by Valve.
NVIDIA can compete based on the superior performance of their products, but apparently that's not enough. They attempt lock-in by encouraging software writers to use CUDA and not OpenCL.
In many projects that support both, OpenCL and CUDA often give similar performance, and in some, OpenCL is faster than CUDA. So CUDA isn't about getting more performance - it's about excluding everything non-NVIDIA.
I suspect that NVIDIA's gameplaying is what led to Apple deciding to not use NVIDIA in any Macs for the last several years.
They don't need to do any lock-in when the choice is between plain old C with printf-like debugging, and a set of programming languages able to target PTX, with out-of-the-box support for C, C++, Fortran and several managed languages, plus nice graphical debugging tools.
C++ support only came to OpenCL after it was clear to Khronos that it was time to move beyond a C-only vision of the world, yet almost no one bothered to implement OpenCL 2.x, which is why OpenCL 3.0 is basically OpenCL 1.2 (so that all those OEMs count as compliant), and SYCL is now moving to a backend-agnostic C++ stack.
So the competition did not lift a finger to make the OpenCL ecosystem beat CUDA tooling, and Google did not care one second about making OpenCL available on Android, instead pushing their own RenderScript C99 dialect - but it is easy to blame NVidia when the competition isn't up to scratch.
Just like Apple, after creating OpenCL and offering it to Khronos, moved away from their own creation after disputes over how the OpenCL roadmap should be managed. Ironically, just like CUDA, Metal Compute is aware of multiple languages and has a modern API.
The problem with OpenCL, just like OpenGL, is that it relies on vendor support for essentially the full stack. That means if a single vendor doesn't support the feature, it's basically impossible to ship any code that uses that feature to arbitrary users. Nvidia lagging behind in its support for OpenCL 2.x was probably not explicitly stated as "let's sabotage this competing open standard by dragging our feet" but that was the result. AMD and Intel both put together OpenCL 2.x support, so clearly Nvidia could have finished their incomplete support but didn't.
The fundamental failing Khronos made was repeating the GLSL mistake and not specifying a compiled IR that would decouple OpenCL kernel code from program execution. This was clearly known to be a mistake at the time, given ample evidence from CUDA and HLSL that an IR was possible and could greatly improve the developer experience.
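To make the IR point concrete: classic OpenCL ships the kernel as a source string that each vendor's driver compiles at runtime, so every language feature depends on that vendor's compiler front-end. A rough host-side sketch of that flow (error handling mostly omitted; clCreateProgramWithIL, which consumes SPIR-V, only arrived with OpenCL 2.1):

    /* Classic OpenCL flow: the kernel travels as source text and is
     * compiled by whatever front-end the installed driver provides.
     * Build with something like: gcc ocl.c -lOpenCL
     */
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void scale(__global float *x, float a) { "
        "    size_t i = get_global_id(0); "
        "    x[i] = a * x[i]; "
        "}";

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id dev;
        cl_int err;

        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

        /* The runtime, per-vendor compile step: if this driver's front-end
         * lacks a feature the kernel uses, you find out here, on the
         * user's machine. */
        err = clBuildProgram(prog, 1, &dev, "", NULL, NULL);
        if (err != CL_SUCCESS) {
            char log[4096];
            clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                                  sizeof(log), log, NULL);
            fprintf(stderr, "build failed:\n%s\n", log);
            return 1;
        }
        printf("kernel built by this vendor's runtime compiler\n");
        return 0;
    }

CUDA, by contrast, compiles to PTX ahead of time, so the driver only has to consume an IR rather than implement a full source-language front-end.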
These are both things that are unlikely to ever be able to run on anything other than nvidia hardware. It's hardly a mark in their favour over abusive lock-in practices.
Well, of course. They're written in CUDA. Why would Nvidia try to write it in a language like OpenCL that even AMD is moving away from, and instead make FFT/BLAS library calls? At least the source code is there if you wanted to port it.
I think it's very clear that I wasn't commenting on what framework they choose to write their projects in, and you're rather deliberately missing my point.
> These are both things that are unlikely to ever be able to run on anything other than nvidia hardware.
I think it's very clear that the framework was important here. Because you were pointing out that they don't open source much, I pointed out that they do; then you nitpicked by saying it's CUDA.
The person I replied to said nvidia doesn't contribute much to open source. I pointed at their github repositories. If you care about the context, it is there, and there is no risk of me summarizing it unfairly.
And now, hilariously enough, they have ended up with a lot of their growth coming from Linux-based systems.
To be fair, I recently bought a new Thinkpad, and with Ubuntu 20.04 and Nvidia, CUDA just worked, with absolutely no tuning (I didn't even need to set LD_LIBRARY_PATH, which I found pretty odd). So things seem to be improving, because there's money in it for Nvidia.
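For anyone wanting to run the same sanity check, a minimal probe of the CUDA runtime (built with nvcc; no environment variables needed on a working install) is enough to confirm the driver and toolkit are talking to each other:

    /* Minimal CUDA runtime probe. Build with: nvcc -o cudacheck cudacheck.cu */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);
        if (err != cudaSuccess) {
            fprintf(stderr, "CUDA not usable: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < n; i++) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            printf("device %d: %s, compute capability %d.%d\n",
                   i, p.name, p.major, p.minor);
        }
        return 0;
    }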
The Nouveau drivers have a throttled clock speed due to firmware signatures. And the proprietary drivers are complete mess on wayland, where most compositors and other GPU drivers use GBM, but Nvidia did their own thing and used EGLStreams. Maybe someday in the future the situation will improve, but historically, it hasn't been good.
This sentiment comes entirely from their lack of open source drivers for Linux. It became pretty much a meme after a rant that Linus had about them in 2012.
They have made it even more difficult for the open-source community to reverse engineer their drivers by using signed firmware blobs, pushed their own solution (EGLStreams) on Wayland over the GBM everyone else adopted, etc. - in short, they refuse to cooperate in an official capacity.
Both Intel and AMD do have an official FLOSS Linux driver and regularly work with the wider community.
Google it maybe? Here's one example thread to pull at: why did Apple stop buying nVidia GPUs across their entire hardware line? EGLStreams is a recent pain in the ass. nVidia's infanticide of OpenCL with CUDA being another painful reminder that nVidia doesn't care about your software needs, just their own.
It's really not hard to find a company that nVidia's pissed off with its dealings.
Apple just spent a billion dollars buying Intel's wireless modem business. If there's still a hardware reliance on Qualcomm, it's certainly coming to a quick end.
The Observer proposed an interesting solution last weekend:
"The government could offer a foundational investment of, say, £3bn-£5bn and invite other investors — some industrial, some sovereign wealth funds, some commercial asset managers — to join it in a coalition to buy Arm and run it as an independent quoted company, serving the worldwide tech industry..."
Actually the UK government has started taking stakes in technology firms where it deems it has a strategic interest. [1]
I can easily see the man now running the UK (Dominic Cummings) deciding to take a stake in ARM. Maybe other ARM customers would also be willing to take (non controlling) stakes to keep it away from Nvidia (Apple of course used to be a big shareholder).
No, no, no. Government should not be in the business of picking winners and losers, and this is more about denying someone else than benefiting the whole. Oh, it's wrapped up in pretty paper with a ribbon on it, but the true reason is they don't want "those people" to have it (that being Nvidia, but I am certain there are a few more suitors they don't like either).
I'm not generally in favour of government picking winners, but this is an exceptional case where keeping ARM independent really would 'benefit the whole'. The argument against Nvidia buying is that they are likely to boost their own business at the expense of ARM and the wider ecosystem.
It seems perfectly valid for the UK government to step in to protect its most successful technology firm.
Valid for what reason? There is no evidence ARM would be any more successful under UK control than under nVidia's; in fact, the opposite may be true. The history of government-owned businesses, especially in the UK, is desultory.
According to the article, at least Hermann Hauser believes that ARM will likely do poorly under nVidia's control. I think the idea is that Nvidia is a particularly bad owner considering ARM's business model (for a similar reason that having Nokia as the main Symbian participant was a bad idea), rather than that the UK would exert beneficial influence. Indeed, the suggestion is that the UK government invests as part of a group rather than takes a controlling interest.
How would the UK government's influence be beneficial in any way? The government's influence will be the agenda of individual politicians, who know nothing of chip design and don't care about it.
In the US that’s the type of thinking that’s given us the SLS, a $20B launch system that will cost more than twenty times as much as commercial alternatives, and is required by law to use obsolete 50-year-old components from key congressional districts.
I think you're agreeing with me. The suggestion is not that the UK government will exert any influence over them at all, just as the Norwegian sovereign wealth fund doesn't exert management control over the 1.3% of global equities it owns.
The point made in the article is that the absence of influence will be better than the negative influence that Nvidia would apply.
There is no such thing as absence of influence. Government funding always comes with political influence, and almost always negative.
The Norwegian sovereign fund is an investment fund that holds a tiny fraction of each public company's stock. For the UK to engineer a purchase of ARM would require raising $32B; offering to kick in 1.3% would go nowhere. Anyone with 99% of $32B doesn’t need the UK.
An alternative deal is only possible if the UK offered to kick in a large part, probably at least $10B, and took a large minority stake. At that level they’d have to have significant influence, or the politicians who engineered it would get run right out of office. What if the new owner moves R&D out of the UK?
Because Nvidia will find a way to use ARM to strongly favour its own SoCs at the expense of everyone else’s - that’s why it would pay $32bn. That’s not good for ARM or the UK (or for everyone else for that matter).
NVidia isn’t going to intentionally destroy a $32B acquisition to throw that money down the drain. They are likely to fully support every customer, to protect their investment, and to fund it to do new things strategically important to NVidia.
If the UK owns ARM, over time every investment decision it makes will be required to benefit the party in charge, its supporters and their districts. That will certainly destroy ARM.
So they invest in Tegra, say, and turn Tegra into a hugely profitable success story (which would be great), and then the next moment turn to Tegra's biggest competitor, who is also using ARM. Will an Nvidia-owned ARM provide 'full support' to that customer, or will they tip the playing field to favour their own products? We would just have to trust Jensen and his successors. I'd rather not have to rely on their sense of fairness.
I don't like state ownership either. No one is suggesting that UK govt should buy ARM outright - just that it takes a stake alongside other long term investors so that ARM can remain independent.
ARM isn’t independent if the UK takes a stake. It’s beholden to those politicians, and those politicians are beholden to their key supporters.
NVidia will be contractually obligated to support ARM customers. If over time they do a poor job, they just flushed $32B down the drain. This is an imaginary problem that requires zero preemptive government action. It’s like trying to stop someone from buying an expensive car they can afford on the unsupported theory that they won’t take care of it.
> NVidia isn’t going to intentionally destroy a $32B acquisition to throw that money down the drain.
It's not just an assumption that they will manage their investment poorly (although that wouldn't surprise me); it's also about what the likely behaviour of other market participants would be if a key supplier were controlled by a competitor.
He's absolutely right. Unless there is a 100% firewall between ARM and Nvidia then Nvidia will influence ARM to move in directions that help Nvidia's wider business and not the ARM ecosystem as a whole - and that will adversely affect everyone who uses an ARM CPU - and that's basically everyone.
I don't believe that such a firewall is possible in any event. Who will enforce it?
For those who are saying that it's great as it will give RISC-V a boost. Fine in principle, but how long will it take before the RISC-V ecosystem is anywhere near comparable to ARM in mobile?
One final point: the UK government is now considerably more interventionist than at any time in the recent past. Not completely impossible that they intervene in this in some way.
I agree that this deal is good for Nvidia only and will hurt ARM CPU usage.
Nvidia is right now not in the same league as Intel/Apple, but having the most successful consumer and corporate GPU business along with control of Arm architecture CPUs puts them in the same league in my opinion.
That kind of dominance is never good for consumers.
Having worked for a company that was acquired and became a subsidiary, I can say such a setup is not enough of a firewall.
Slowly, the acquiring company will modify internal systems and internal processes until there is not much left of the original company. And many key employees will leave as they don't identify with the way things are being done.
I would like to think that acquisitions can be a good thing, but that's not the case in my experience.
Seems like alarmist FUD to me. While I agree that having Arm owned by any one of its major customers creates some potential conflicts of interest, I think the notion that NVIDIA or any other company would start to drive licensees away with draconian terms/costs and poor business planning is pretty silly.
The more interesting question to me is what would any single (major customer of Arm) company gain from owning Arm? For instance, if NVIDIA wants to be competitive in data center CPUs (which would put them at equal footing with AMD and Intel, not more monopolistic), why couldn’t they go the Apple approach with a perpetual license and design it entirely custom (or not) as they see fit? (AFAIK they already design custom mobile CPUs.)
What do you gain by buying out an expensive, large licensing company that essentially makes no money that leaves you in a weird position with various rivals? Doesn’t quite click for me.
If they are buying the perpetual license and hiring the ARM expertise anyway, why not just get the company, which also includes giving nVidia majority control of the ARM roadmap as well?
You get the people who literally wrote the book on modern ARM, you get all the IP which builds up the patent chest (improving chances of counter-suit and cross-licensing in the event of patent infringement issues), you don't need to pay for the license, and you know you aren't going to get fucked by roadmap or licensing changes in future. And your competitors literally end up paying you for the privilege of competing against you.
Not saying it's right, but there's certainly multiple arguments for it.
> If they are buying the perpetual license and hiring the ARM expertise anyway, why not just get the company, which also includes giving nVidia majority control of the ARM roadmap as well?
Because the perpetual license is much much cheaper than $32B.
Afaict that seems to be an agreement that a patent holder enters into with a third party, promising to license their patents to implementers of the standard. In this case there is no standard, just the patents and associated technology, so I'm not sure that applies. I would be shocked, though, if big players didn't have similar long-term agreements with Arm.
nVidia already owns Mellanox, which today looks close to a monopoly for HPC interconnects. Them buying ARM means they could just stop supporting everything else someday.
I for one would welcome this. Firstly, as mentioned, it gives people incentive to look into RISC-V.
That aside, for far too long QCOM has had a monopoly on the phone chip market. Patents aside (which increasingly became irrelevant due to the death of Sprint), their major advantage has always been Adreno, which as far as I remember AMD basically did for them. Mali is terrible. If Nvidia could revamp ARM graphics at the spec level, we'd have real competition again from Samsung, MediaTek, and potentially others.
Samsung recently entered into a licensing deal with AMD to use their graphics technology in a mobile design. Early performance leaks/rumors are very promising (if true).
I don't see this as much of a problem, competition is already on the way.
> has reverse engineered a bunch of popular Mali ISAs
As a FOSS user and developer, this is already a big problem for me. The situation with Nvidia's mobile GPUs (e.g. Tegra) is a bit better than with their desktop GPUs, but not much better.
It will be the end of Linux on ARM. They will choke the platform with blobs until its inevitable death. NVIDIA and open source are like matter and antimatter - they simply cannot coexist in the same place and time.
ARM is all about licensing and selling hardware IP; they will use Linux to the extent that it helps sell their IP, just like most companies that actually pay engineers to contribute some parts of their crown jewels to Linux while keeping the rest for their own in-house distributions.
Also in case you haven't noticed, the IoT domain where ARM thrives is getting full of BSD/MIT POSIX clones, guess why.
ARM mbed, Zephyr, RTOS, NuttX, Azure RTOS ThreadX, ...
Having a liberally licensed code-base will make it very easy for entities to publish a benign version on GitHub, and then deploy an "enhanced" version on the real hardware, cryptographically signed and not dumpable.
They'll find something to support the argument that not all functionality can be open-sourced, and that is why the published binary will never have the same checksum as something you have compiled from the public sources.
From my comment history you will see I am pretty much into commercial and dual licensing, so I do agree with your point. However, it kind of feels like the way copyleft licenses have been managed has brought us back to the commercial licenses with source code, just under a different name to make them more appealing to younger generations.
> the way copyleft licenses have been managed has brought us back to the commercial licenses with source code, just under a different name to make them more appealing to younger generations.
I think I'm following along (and agree), but to be sure, could you perhaps elaborate a bit on that? (Also for other readers.)
Basically the Linux kernel, the GNU utilities and GCC are the only projects left with a copyleft license; almost everyone else has migrated to non-copyleft licenses in the context of making money with some form of open source.
And not everything gets upstreamed, in the name of keeping the main business at the heart of the company. For example, those optimizations used by clang on the PS4? Sure, all of those that don't reveal PS4 architecture features that could eventually be "misused".
The large majority of those businesses have moved to clang, thus reducing GCC usage to copyleft hardliners. Linux is visible on ChromeOS and Android, but not in a context where it can fully profit from that, and then there is Fuchsia waiting backstage.
So in a couple of generations, when copyleft software is just a memory in digital books, we will have gone full circle back to the days when buying developer software would entitle you to an extra floppy with a partial copy of the source code.
The only difference is how that partial copy gets delivered, which will be just the upstream of non-copyleft software.
Huh?? ARM is probably the most popular CPU architecture in use today -- unlike x86, which is largely limited to desktop/laptop and server computing, ARM is used in a lot of embedded applications.
You're underestimating the number of ARM microcontrollers in deeply embedded devices. For example, most hard disks and SSDs have at least one ARM core -- and often several -- running the device. They're also quite common in peripherals, like keyboards and mice. The firmware for these devices is often either a realtime embedded OS (like FreeRTOS), or is entirely bespoke. It's virtually never open-source.
Even if you're only looking at mobile phones, the cellular baseband is often implemented on ARM as well, and it definitely isn't a Linux system either. (Nor is it open source.)
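For a flavour of what that non-Linux firmware usually looks like, here's a minimal FreeRTOS-style sketch (it needs a real FreeRTOS port and board support to build; the LED function is an invented placeholder, since that part is entirely device-specific):

    /* Sketch of typical small-ARM firmware: no MMU, no Linux, just a
     * scheduler and a few statically created tasks.
     */
    #include "FreeRTOS.h"
    #include "task.h"

    /* Invented placeholder; a real board would poke a GPIO register here. */
    static void toggle_status_led(void) { }

    static void blink_task(void *params)
    {
        (void)params;
        for (;;) {
            toggle_status_led();
            vTaskDelay(pdMS_TO_TICKS(500));   /* block this task for 500 ms */
        }
    }

    int main(void)
    {
        /* board/clock/pin init would go here, all device-specific */
        xTaskCreate(blink_task, "blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler();                /* never returns on success */
        for (;;) { }
    }

No processes, no filesystem, often barely a heap - which is why Linux never enters the picture on these cores.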
Also, Android, I think, by now is an edge case of open source. The kernel is, but there’s a thick proprietary layer on top of it without which it wouldn’t be Android.
I'll just reply once here. Thanks for the source. Keep in mind each cell phone can (and does) have multiple ARM chips, which I still attribute to 'Linux'; maybe that's not fair.
I tried to get real stats and didn't get far. The most I could find was that ARM was actually declining somewhat in embedded, and that embedded was less than half of revenues. Also, that ARM had 90 percent of the 'phone/tablet/laptop' market, but nothing close to a majority anywhere else.
I think I was incorrect saying that it isn't close, it likely is. I still think more go to supporting a Linux device than not, but am happy to be proven wrong if such data exists.
I don’t have those numbers, but would guess that, the smaller the ARM CPU, the more of them are sold.
Also, the more are sold, the more hardware costs matter relative to development costs, making Linux a poorer choice than smaller OSes or even bare metal.
You seem ignorant of exactly how many embedded systems out there are not running Linux but a smaller OS or just bare-metal custom kernels. You should research it. Looks like someone down below is less lazy than me and gave some sources, so do some reading. ARM cores are everywhere, buried deep where you can't see them, waiting for Skynet to come along.
You nailed it. Nvidia's refusal to release sufficient technical information and its noncooperation with the FOSS community are already self-evident on multiple platforms.
> Blob was popular at school he was helpful too / He could get your motor runnin' / with a drop of goo / He was givin' it away never charged a dime / But by the time he graduated / Blob was business slime!
> He was a blah blah blah blah blah blah / blah blah blah blah blah blah blah blah / blah blah
> He's givin' you the Evil Eye!
> Now everybody had it / they was drivin' around / They was givin' up their freedoms / for convenience now / Blobbin' up the freeway, water black as pitch And somehow little Blobby was a growin' rich!
> Now it was out of control / n' fishy's came to depend / on Blobby's Blob Blah, seemed to be no end / Then his empire spread and to their surprise / Blobby been a growin' to incredible size!
> Then along came a genius Doctor Puffystein / And he battled the Blob / who had crossed the line / He was 50 feet tall — Doctor said "No fear" / I got a sample of Blob I can reverse engineer!
> But it was too late! / Blob was takin' over the world! / He wants your video! / Ya he wants your net! / He wants your drive! / He wants it all!!
> since they only licence but don't actually fab the chips
From the article: "It's one of the fundamental assumptions of the ARM business model that it can sell to everybody. The one saving grace about Softbank was that it wasn't a chip company, and retained ARM's neutrality. If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM."
Implying NVidia's competitors who use ARM would fear losing their license.
> Also what’s the difference between ARM and RISC-V. Why choose one over the other?
They have technical differences, but TBH I don't know them and I'm not sure they matter. However, conceptually they have two big differences IMO: RISC-V is open (you don't need to buy a license to use it), but ARM is way more mature and better supported (RISC-V-based computers are still very rare).
RISC-V sounds like a good idea, but (imo) could lead to a situation where all/many device manufacturers make custom processors based on the liberally licensed code-base.
At first, this sounds great: "Yah, no more reliance on Intel/ARM, no more ME/TrustZone".
But then you realize that although you, the user/buyer, can build/run some custom code on there, you'll never be able to tell for sure whether you got a version/series with a gimped HWRNG, or other (security-reducing) goodies.
Targeted attacks will become very easy for anybody who can pressure companies/engineers into adding something for (a subset of) the customers, or who can reroute/intercept device shipments to you.
ARM is a "known" situation. They have TrustZone, we know it exists, but buyers/end-users do not get full control of it (outside of some niche products or dev-boards). That situation will not change. With ARM, we've already got used to the idea that you'll never "own" the machine; you are merely allowed to execute some code in a walled-off area of your Telescreen.
So, with RISC-V, you have even less assurance that the processor IC really contains what you expect, and nothing more.
> But then you realize that although you, the user/buyer, can build/run some custom code on there, you'll never be able to tell for sure whether you got a version/series with a gimped HWRNG, or other (security-reducing) goodies. Targeted attacks will become very easy for anybody who can pressure companies/engineers into adding something for (a subset of) the customers, or who can reroute/intercept device shipments to you.
I fail to see how this is different from today's situation. What's stopping targeted attacks via custom modifications of ARM or x86 chips?
> What's stopping targeted attacks via custom modifications of ARM or x86 chips?
Nothing, if you have power over the manufacturers. (Usually governments)
But now, with RISC-V, anybody with deep enough pockets to spare a few million can create fake silicon, where you'd need an electron microscope to prove that you got an "enhanced by TAO" chip.
RISC-V allows companies to make closed systems, but it also allows them to make open systems, which wasn't viable before.
Most people will keep using closed systems because they don't care about computer architecture, but us hackers will finally have a chance at having a (small) portion of the market. A small industry of open hardware.
> RISC-V allows companies to make closed systems, but it also allows them to make open systems
True, but can you recall any moment in history where market forces did not actually make producing "open" devices unprofitable? So (imo) in theory you are right, but in practice (successful) companies will use the technology to create "even more closed" systems, and worse: make systems where the user can't even tell if he got an "open" system.
> A small industry of open hardware
That you will only be able to use in a lab/isolated_env, or illegally, because mandatory signing of FWs/BLs/OSs/APPs is just around the corner for consumer devices capable of speaking to the internet.
> True, but can you recall any moment in history where market forces did not actually make producing "open" devices unprofitable?
I'd say that the early era of personal computers proved it. The Apple II and IBM PC were very open platforms-- thoroughly documented, developer friendly and made with very few parts you couldn't buy out of your favourite electronics distributor catalogue.
They were riotous successes in large part because the open platform meant they didn't have to guess or corral the entire market. Remember that most of the killer apps of the era (VisiCalc, WordPerfect, dBase, 1-2-3, etc.) were third party, and it was largely aftermarket firms selling "six-in-one" cards, Sound Blasters, and Hercules graphics cards that patched the gaping holes in the PC's hardware design.
Conversely, look at a machine like the TI-99/4 series: while on paper a decent machine for the time, it was as "closed" as a machine of the era could be: an unusually limited built-in development platform, and many of the desirable upgrades involved buying a very expensive expansion chassis.
It's interesting to consider what the business tradeoffs of today's walled garden model are. Apple could choose to do a "pure hardware" play, or emulate the early 1980s IBM, where they offered some popular software via their own product line as a trusted source. The result would probably be a marginally larger total addressable market, as it would bring some customers and developers to the platform who needed something that could never clear the App Store before. But would the growth in market size compensate for the reduction in the average cut they can command from the market?
I think they made a commented disassembly available officially, and there were definitely aftermarket books with blow-by-blow analysis.
You could argue it was imperfect (not exposing every feature it could, and having performance costs, sent a lot of software to bypass it and pound hardware registers directly, creating that weird world of early clones that would run MS-DOS but not 1-2-3 or Flight Simulator) but it doesn't seem like a hostile gesture to external developers.
Now it was definitely a legal hurdle if you wanted to make an outright clone, but by 1984 or so, Compaq and Phoenix had solved that problem.
I don't know where you live, but in most of the world, certification/signing/encryption of RF-capable devices/firmware has been the established rule for decades, with a few exceptions for dev-boards.
TPM, ME, SB, the recent GRUB fixes, etc. are all steps putting in place a pervasive infrastructure that will enable truly draconian control over our computing devices for the key-holders.
And I have no reason not to believe that somebody somewhere is waiting for a pretext to implement full vertical-stack control in law for his region/country/domain. Perhaps "so that we can ensure the contact-tracing app gets installed everywhere, otherwise we'll all die!!11!". Or protecting us from terrorism; that usually does the trick. This argument also works surprisingly well against backdoor-less encryption protocols. See e.g. Australia (iirc).
Do you not perceive a similar evolution?
Note: TPM, SB and ME could in theory be used to build "secure" systems, but currently they are (also) used to secure the PC against you, the buyer/user.
If your device isn't hackable then it's not because of ARM imposed limitations - it's because of decisions made by the device maufacturer and there is no reason why that would change if they were using RISC-V instead.
If you re-read my post, you'll see that we are not contradicting each other.
If you have the time/money/infrastructure/knowledge/trusted personnel to design/make/test entire processors, then with RISC-V you can now in theory make secure systems.
Also remember that your effort needs to start already at the level of the tool-chain and other tooling, all the way down to the transistor. Otherwise you'll have a trust problem with regard to your compilers/synthesizers/etc.
But that is quite a high barrier for entry for anybody smaller than a decent sized country.
I enjoy learning every day. And -generally- I'd like to know, where my knowledge has limits.
You seem to care, and claim to be wise, so please, do enlighten me, where I'm wrong. We may be misinterpreting, what the other one is trying to say.
I'll try to re-word what I mean: if your "system" uses standard off-the-shelf parts (CPU/SoC/memory/...), then the fact that you can go out to a shop and buy standard replacements means you can be reasonably sure of thwarting or detecting a targeted attack on your supply chain more easily than if your device contains, say, a custom CPU specific to that device, which could contain god knows what extensions to the instruction set and which you can only source from that one specific vendor.
I find it baffling that SoftBank isn’t simply taking ARM public. If ever there was a company that should be independent, it’s ARM. Is there a good explanation out there for why this isn’t the obvious choice?
There would be a certain symmetry if Apple took a (non-controlling) stake in ARM to keep it independent, given that sales of ARM shares were a significant factor in giving Apple financial stability in the 1990s.
NVIDIA's stock is so massively pumped up. They will simply make an all-stock offer that's pretty much impossible to refuse and that will be the end of it (most likely).
NVIDIA is the only hardware company that takes software seriously.
In every hardware thread, someone complains about hardware people being behind on software technology. And then there is NVIDIA, the company that provides the best development environment for GPU development, the company that used chip simulation for their first processor.
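To make the "development environment" point concrete, here is roughly what a minimal CUDA program looks like (my own illustrative sketch, using only the standard CUDA runtime API; none of the names below come from the comment above):

    // vector_add.cu - minimal CUDA sketch: add two vectors on the GPU
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles one element of the output.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified ("managed") memory is visible to both CPU and GPU.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        // Launch enough 256-thread blocks to cover all n elements.
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();           // wait for the GPU to finish
        printf("c[0] = %f\n", c[0]);       // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc, the same code runs on anything from a Jetson module to a datacenter GPU, which is, I'd argue, a big part of why the tooling gets the praise it does.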
AMD was kept alive by oil money and console contracts. NVIDIA made it on their own. They were innovative in the past, and they can be innovative in the future. To me, that's an opportunity for much more future growth.
How much is too much if NVIDIA can take Intel's market? And we are only at the start of machine learning and product simulations.
Sounds like a protectionist who doesn't want ARM to be owned by any foreign entity.
"most of the licensees are competitors of Nvidia"
Is this true? It seems like quite a stretch. Nvidia doesn't offer a chipset for the mobile, battery-powered market (it tried, and gave up because of competition from Qualcomm).
>Sounds like a protectionist who doesn't want ARM to be owned by any foreign entity.
Yeah, so? If the 21st century taught us anything, it's that protectionism makes sense for many products/services (not that big countries ever really forgot it - they only "forgot" it during the period when their products were the only game in town).
Not really. Except if one confuses free trade vs. protectionism with planned vs. market economy (which is a different distinction).
The big powers that profited most in the 20th century weren't following free trade - they were colonial and post-colonial overlords, enforcing their trade deals and terms on lesser countries.
Free trade was OK for those big countries like the US, UK, France, Germany, etc.: they were calling the shots, and were net top exporters themselves...
And even then, they themselves couldn't care less about "free trade" doctrines. Sponsoring your own country's industries, using your political clout to get global deals, subsidies, etc. were all fine for the mid-20th/early-21st century, when they were the ones doing it.
When that changed and their trade partners got increasingly bigger, "free trade" became too much for them, and they pressured their partner countries (as with the Plaza Accord).
Now that the US can't do the same to China (due to China's sheer size and power), the old globalization and free trade flag-bearers have suddenly discovered that free trade might not be the "be-all and end-all".
>Now that the US can't do the same to China (due to China's sheer size and power), the old globalization and free trade flag-bearers have suddenly discovered that free trade might not be the "be-all and end-all".
This is one of the main reasons why the US is fighting China.
Tegra started out as their mobile chip, but has evolved into their robotics/automotive/industrial-SoC platform through the Jetson series of modules which are alive and well.
[Sidenote: this has resulted in the hilariousness of taking an Android kernel and development infrastructure and then slowly trying to turn it back into something that looks like a normal Linux system. Their board support package is a hot mess, but it gets used in real products because the hardware actually works well. A software company NVidia is not.]
It was only the older chip series that they couldn't sell to tablet makers. The newer chips are too power hungry for mobile and are targeted at automotive, where Nvidia has gotten traction. Nintendo had better hope Nvidia has something new coming in time for a next generation.
Competition breeds innovation; they never should have sold in the first place. If an investment firm won't hold them, they should start an IPO instead of seeking value validation from contending clients and players in the hardware industry.
I'm tired of the tech industry's standard duopolies. ARM breaks that trend by licensing to Qualcomm, Samsung, Apple, MediaTek, & others; surrendering independence today could hurt the future of tomorrow.
I would dearly love the industry sh1tstorm that would happen if Huawei bought ARM (please!)
I would like to see Donny try to ban a Huawei-owned ARM, or order Google to supply only hot, slow Intel Atom mobiles, with every one of the Asian makers walking away from the US market while the world switches to an OSS Android fork.
Although I still, for the life of me, can't understand why Masayoshi Son is holding on to WeWork and selling ARM.
Yep - I was always told, keep your dividend-paying winners, sell your losers.
WeWork is definitely a loser, ARM is a winner - it's completely upside down. If he wants cash, he should sell his trash. And avoid future losses by not buying trash in the first place.
Rather, Alibaba and Yahoo Japan are SoftBank's dividend-payers. ARM has proven to be nothing more than a bargaining chip to kickstart the Vision Fund, and is now a crumple zone to keep the group liquid. ARM's valuation has barely moved despite the continued dominance of the platform.
Selling ARM lets SoftBank keep WeWork alive for a couple more years - enough time to bring the non-Japan markets to cash-flow-positive status. Japan, by some miracle, was cash flow positive from the start; he must think the model can work overseas.
That would never happen. Huawei has made their own bed by consorting with and being inextricably tied to the Chinese Communist Party, and they deserve everything they're getting. While on a personal level I despise Trump, I don't mind that he's sticking it to China on privacy and on the industrial imbalances between the USA and China, while also stating the obvious: it's bad to use Chinese state-controlled hardware at the heart of your communications systems.
Surely a sale would be allowed if regulatory compliance is met? I was part of a private semiconductor company bought by a public American company, and the buying company had to convince the US government they would not make the product line captive. I expect it's the same here?
They had to get clearance and satisfy the USG the products would still be available to third parties. Had to do this in Korea and China too. Once the governments were satisfied, the acquisition was completed.
I think he's right. A sale to Nvidia would be detrimental to ARM's current business model - now Nvidia may have other plans but the thought of that would certainly scare existing licensees even more.
Should the UK government intervene? Yes. Will they? Probably not.
The detail here is more nuanced than protectionism vs. free market - but I don't think most people realise that, and they see it as just protectionism. I don't think it is.
I'm aware of them, but do they really develop any mainstream devices? I might be wrong, but aren't those companies working in their niche, e.g. cell towers etc.? Isn't that way too narrow an application of such a great tech?
I'm in agreement too. NVIDIA got interested in ARM after the launch news of Japan's ARM-based supercomputer (twice as fast as the previous number one). They want to suppress it, I think, as it's bad for their business.
The British government should not bother to clean up the mess capitalists make. They will have a hard enough time living with the consequences of Brexit and corona without buying up companies.
So you're arguing about the level of concern? I can think of lots of reasons to be more concerned about a chip manufacturer buying the design/licensing business that is literally at the core of the majority of mobile phones than about a technology investment firm owning it.
Or maybe you should just admit that your original comment was not well founded?