I have been a big fan of the RISC-V efforts. Arm's attack is actually heartening, in the sense that it confirms the perceived threat from RISC-V is real. It's one of those moments on the path that make me realize real progress is being made for RISC-V.
The development of new technology seems to go so slow while it's happening; it's only in hindsight that it seems it's always and inevitably been this way. Still a long way from running OpenBSD on a 17" laptop with nothing but open source hardware inside, but it's a nice moment.
Sadly, it is still a very long way to an open source laptop. Remember, a processor or SoC contains not only a RISC-V core but also many peripherals. Some peripherals are easy to design (like UARTs and I2Cs), but GPUs, PCIe, DDR memory and networking aren't. The question is who will deliver these as open source implementations. IMHO these are not attractive problems to solve in academia, because they are already available as commercial closed source products and lack novelty.
Designing a decent CPU without violating any patents is easy. Almost all CPUs today follow design patterns that were established back in the mid '80s and early '90s. All those original patents have now expired, and you can sidestep the newer ones.
Things like GPUs have evolved massively since the late 90s, gaining support for pixel shaders, vertex shaders, moving to unified core designs, gpu compute and establishing new threading models.
And all these things have bled back into the Graphics standards. It would be really hard to design a gpu implementing even a 10 year old standard like OpenGL ES 2.0 without violating any patents.
And it will be even worse with things like PCIe, DDR and networking controllers. You can't design around the patents without breaking compatibility, because the standards effectively force you to implement the patented techniques.
An extremely basic (but power efficient) framebuffer would probably be enough. At least with that and some less encumbered output stack (maybe DisplayPort?), real and secure foundations could be laid for truly verifiable voting assistance systems. Such a system should produce a voter-readable paper ballot that is the /actual/ ballot, and include a 'quick count estimate' for the next morning's reports.
We do not have to follow the standards that prevent open source solutions. If the price for having an open source computer is old technology or "nonstandard" solutions, then I'm more than happy to pay it.
Okay, so let's just get on with starting at an incredibly basic level then!
Consider the Ericsson MC218 / Psion Series 5 (https://www.google.com/search?q=psion+series+5). The software (EPOC, forerunner to Symbian) could drag windows (whole windows) around the little screen faster than the LCD crystals could keep up/redraw.
It's really reassuring to me that such a slow CPU gave the LCD a run for its money, because let's face it, doing $tons_of_pixel_manipulations on a CPU always chews power, so if you want battery life you're going to need something that works well at the low-power end of the spectrum. (The Series 5's monochrome LCD was 640x240.)
So, well-designed software would meet the power-consumption-versus-features challenge halfway. Such software doesn't exist yet, though, and it appears this status quo isn't going to change anytime soon; the (financial) investment required just doesn't seem to be materializing.
I'd say the main problem is market effects, and consumer interest. "Yeah 70% of CPU time goes to drawing the screen" is NOT particularly attractive.
usually in these cases where the CPU is so slow, it simply doesn't have the bus bandwidth even to feed an LCD. so the LCD is an SPI device or 8080 MCU compatible memory-mapped peripheral, that has its own internal SRAM as a framebuffer. a good example is the HX8357D (now obsolete sadly) https://github.com/torvalds/linux/blob/master/drivers/video/...
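to make the division of labour concrete, here's a minimal sketch of driving such a panel, assuming an HX8357D-class controller speaking the common MIPI-DCS style command set (0x2A/0x2B/0x2C); spi_write_cmd()/spi_write_data() are hypothetical HAL hooks standing in for whatever SPI driver the MCU actually has:

    /* The panel keeps its own framebuffer SRAM, so the host CPU needs
     * neither a framebuffer of its own nor the bandwidth to refresh one;
     * it only pushes deltas. */
    #include <stdint.h>
    #include <stddef.h>

    void spi_write_cmd(uint8_t cmd);                   /* assumed HAL hook */
    void spi_write_data(const uint8_t *buf, size_t n); /* assumed HAL hook */

    static void set_window(uint16_t x0, uint16_t y0, uint16_t x1, uint16_t y1)
    {
        uint8_t col[] = { (uint8_t)(x0 >> 8), (uint8_t)x0,
                          (uint8_t)(x1 >> 8), (uint8_t)x1 };
        uint8_t row[] = { (uint8_t)(y0 >> 8), (uint8_t)y0,
                          (uint8_t)(y1 >> 8), (uint8_t)y1 };
        spi_write_cmd(0x2A); spi_write_data(col, 4);   /* CASET: column range */
        spi_write_cmd(0x2B); spi_write_data(row, 4);   /* PASET: row range */
    }

    /* Push one RGB565 rectangle; the controller latches it into its own
     * SRAM and refreshes the glass from there, unattended. */
    void blit(uint16_t x, uint16_t y, uint16_t w, uint16_t h,
              const uint16_t *pixels)
    {
        set_window(x, y, x + w - 1, y + h - 1);
        spi_write_cmd(0x2C);                           /* RAMWR: pixel stream */
        spi_write_data((const uint8_t *)pixels, (size_t)w * h * 2);
    }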
technically speaking it's perfectly possible to use an 8mhz 8-bit arduino-style processor to make a phone, if all you want is calls and SMS! the processor inside the actual GSM/3G modem is far, far more powerful than that.
i added up the number of actual processors in a smartphone once: it was insane. DSP in the Audio IC. ARM core in the GPS. ARM core in the 3G Modem. ARM core in the main processor. 8051 core in the capacitive touchpanel controller. the list just goes on and on!
FWIW, I did see a weird black blob type thing on the LCD ribbon cable, but I don't think I got any photos of that. That is very probably the LCD controller you speak of.
I wouldn't be entirely surprised if the main CPU (and specifically EPOC) had a framebuffer of what was on the screen. (I'd be surprised if it didn't have any such (double-)buffering.)
Yeah, baseband processors have a fair bit of oomph to them. I've long been curious what sorts of specs they have.
We could just license the patents though, right? They have to license them for a fair price by law, so I don't see how this stops an open source computer. It's not like we're getting the gold wafers for free.
> They have to license them for a fair price by law
What law would that be? A patent holder can set whatever terms they want. Standards orgs try to impose FRAND for their contributors but it is not legally binding.
It's what my patent lawyer explained to me, but I may have misunderstood him. The impression he gave me was that if someone patented something super useful they couldn't simply ask for a trillion dollars to stop their competitors from using it.
ARM are not worried that RISC-V will replace Cortex-A72. They license far more (by quantity and value) of the tiny Cortex-M embedded cores, and RISC-V can already replace just about every use of Cortex-M. The foolishness of this website was ARM telling everyone that RISC-V is a viable alternative. It could only have been worse if they'd provided a link to https://github.com/freechipsproject/rocket-chip
Of course, there is probably a large difference in licensing and royalty cost between the A series and the M series. Based on the numbers above, it wouldn’t surprise me if the A series results in higher revenue than the M.
Oh sure, but I still wouldn't feel happy telling my shareholders that circa 75% of my product volume could disappear. And if that happened it would necessarily leave Cortex-A vulnerable (the Innovator's Dilemma in action).
SiFive's cores are Rocket, which is open source. lowRISC is based on Rocket too. SiFive include proprietary IP for DDR4, ethernet PHY and other things, although I'm pretty sure they'd jump at the chance to replace those bits with open source equivalents if they existed.
As for your claim "I don't think most RISC-V cores are open source" it depends how you define "most", but it's a complicated picture. There are competitive open source cores (Rocket for in-order, BOOM for OoO, Pulpino, PicoRV32).
There are also several completely proprietary designs, as there should be, such as Western Digital's two different 32 bit RV32 core designs which we expect will ship a billion units next year.
As far as I can tell, SiFive refuse to say what open source code (if any) their commercial cores such as the E31 are based on, or what changes they've made to it if it is based on open source code. See https://forums.sifive.com/t/more-details-for-e31-core/1165 Basically, it's a proprietary blob which you don't know the internals of just like ARM. They've confirmed that U54 and some of their FPGA-only cores are based on Rocket, but those aren't the cores they're offering commercially.
RISC-V is an open instruction set, not an open microarchitecture.
You won't see high performance open source microarchitectures that implement a RISC-V core.
Nobody is going to invest $300-$500M to develop a new competitive open source microarchitecture (patent free?) every 5-10 years for RISC-V that can compete with Intel, AMD and others.
I imagine China and India will eventually go with RISC-V, too. Maybe Nvidia will have "another go" at designing its own CPU microarchitecture, too, if they get tired of Arm's shenanigans. Nvidia has already chosen RISC-V for its GPU microcontroller.
R&D from the EU, China, Japan and India is not going to result in open source microarchitectures. Governments support basic R&D, but the end product will be commercial closed source IP, and patented.
>While we do plan to tape out a few variants, given the foundry NDA requirements, we will not be able to publish any layout/backend data. But the ASIC synthesis and P&R data, to the extent possible, will be published to allow others to replicate our ASIC flow.
That's like saying you won't see a general purpose, high performance, highly reliable open source operating system for servers and consumers, whose total development cost, in person-hours, exceeds a billion dollars.
> Once you have a decent base and sufficient interest, people's incremental changes will get there eventually.
The size of a viable increment in a software project is much smaller than it is in a hardware project. Developing a functional, non-trivial microarchitecture requires significant capital expenditure and coordination of efforts. This requires at least a patron of some sorts to manage the process, where the requirements for that patron are much more significant than "somebody with a listserv".
Nvidia is not ignorant of the realities of hardware design. Google spends capital to get what it wants. Western Digital is not just "somebody with a listserv".
Once those players front the money for the ecosystem the incentive to coordinate is there on the fab side. With that in place and good simulation possible, the size of an incremental change is an exercise in release management.
You're repeating a very common misconception and a false analogy.
You don't get a high-volume processor core into silicon by just hacking Verilog/VHDL on your computer and then sending it to the foundry for tape-out and manufacturing, the way you might with some custom ASIC chip.
Intel and AMD start new microarchitecture design work every 2-3 years. The work must progress in tight time schedule or it becomes outdated. It's highly coordinated effort that requires significant capital investment.
You have to lock in goals, select implementation technology, methodology, have expensive tools etc. Behavioral design, physical design and silicon ramp interact constantly. Problems are solved and there are many optimization and validation phases. You need expensive hardware and access to engineers in the foundry and process you are targeting.
I'm sure there will be many completely open RISC-V cores, but not for the high volume uses or latest processes.
You yourself repeat a very common misconception and false analogy.
Intel and AMD start new microarchitecture design work every 2-3 years because (Intel, at least) base their sales on offering the latest and greatest. They seek peak performance, which is exactly what gamers/scientists/prosumers want.
But for large swaths of the market, you don't need that. Think of your average consumer/small business owner. By and large, most of them are using 3-5 year old PCs, and even then the processors within those 3-5 year old PCs were probably last-gen when they were purchased.
Or think military -- the ability to have a home-grown, openly-vetted processor that you can tape out with a trusted domestic supplier would eliminate a number of security concerns.
Don't let great be the enemy of good here. If you can build a decent RISC-V core that can compete with an Intel Core 2 Duo, you've got all the horsepower you need for the average consumer.
I did not deny that there will be completely free designs for small-volume applications and specific niches. They will be very low performance and are not going to be competitive in mass markets, at either the low end or the high end of the price range.
> Think of your average consumer/small business owner.
Optimizing a processor for low cost is also very demanding.
nonsense. rocket-chip has been taped out at 45nm and achieved 1.5ghz, years ago. india's shakti core is going into 20nm and as a 6-stage pipeline will run at 2.5ghz. as that one uses only 120mW, even 16 2.5ghz cores would only consume around 3 watts (!!!).
caveat of course: the L1/L2 cache power consumption isn't included in that figure, but, crucially, with the Compressed Instructions reducing cache misses by 20-25% that's equivalent to having approximately double the I-cache size.
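as a back-of-envelope check on those core-only numbers (cache power excluded, per the caveat above):

    16 cores x 120 mW/core = 1.92 W

so "around 3 watts" already includes roughly a watt of headroom for caches and interconnect.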
basically by a fresh start they're on to a winner.
Maybe a Pentium IV then? I don't think RISC-V -- or any processor really -- gets taken seriously in the desktop/laptop space unless it can compete with one of Intel's earlier chips.
Unlikely that anyone will even try to target the desktop/laptop market with a RISC-V CPU; it is a very established market without much future growth. The plays will be in embedded devices, especially areas that require peripherals with tight integration with the CPU; this is where an open and extensible ISA can be a significant benefit.
Examples today are controllers inside SSDs/SDcards. In future maybe also microcontrollers with integrated machine/deep learning co-processors.
Writing software doesn’t require massive investments of capital like making chips does.
And if open source processors feature the same attention to detail and high engineering standards that most open source software projects evidence, they’ll frequently catch on fire or stop working without multiple bug fix patches.
Low level changes in the kernel will probably take a long time and effort before they are deployed. Many kernel devs probably don't do their work because they need some particular fix for themselves, but rather to advance the overall product.
I'm not trying to downplay the most amazing community effort humanity has ever seen. I'm just saying: if you want even a basic CPU, it will take far more generosity than that of a few humans to get it off the ground. Insanely more to fix one, if that's even possible, let alone practical.
The initial versions of Linux weren't useful except for people playing around with it -- kind of like RISC-V cpus aren't useful today, except people playing around with fpgas. But it does allow you to relatively easily deploy your CPU using this route.
I'm saying that much linux development nowadays is done far away from actual deployment, just like open source cpu design would be.
Also, it's often paid work, driven by companies who have an interest in the existence of these open platforms, so it's not all just based on generosity (and we do see companies are interested in RISC-V cpus).
> You won't see high performance open source microarchitectures that implement a RISC-V core.
You don't have to spend $300-$500M to get to reasonable performance. There are multiple parties who are interested in this.
The EU and India have processor initiatives (including high-performance ones) that will use RISC-V. Many universities, non-profits and companies will continue to work on the current open-source chips and combine their efforts.
Of course it will take time, but a blanket 'You won't see high performance open source microarchitectures' is just an assumption.
Yes, you do. Designing a small-volume ASIC chip containing a RISC-V processor and designing a cost-effective, high-volume microprocessor core are completely different things.
The processor initiatives in the EU and India are providing R&D money for basic research and competence building for local companies. They will be customers, not manufacturers.
The EU and India will both build actual chips, and those can be taped out. They will also, of course, consume their own stuff.
There are startups like Esperanto, and many university groups and open source projects, who are all interested.
Of course you still need to find customers for a mass tapeout, but once the whole Linux toolchain is running there are lots of potential applications and customers.
> You won't see high performance open source microarchitectures that implement a RISC-V core.
India's Shakti core is a 6-stage pipeline that runs at 2.5ghz in 20nm and uses only 120mW. Their roadmap includes VLSI variants which will piss all over ARM and Intel, which is funny because the Indian Government has given them unlimited resources and backing to do exactly that.
> who will deliver these as open source implementations.
And who's going to pay for this, given that (until production runs are huge) every unit is going to be more expensive, with fewer features, than a comparable "closed source" unit?
If there really was a deep market for this that was also prepared to accept compromises along the way, it could be kickstartered into action. I don't believe there is, just a small but vocal minority on message boards.
I'll be presenting a talk at the Chennai 9th RISC-V workshop about doing precisely this: a Libre RISC-V SoC. http://libre-riscv.org/shakti/m_class/ and yes, it is indeed as complicated as you describe.
Fortunately there are people out there who are not intimidated by the prospects of tackling such large projects, projects which in their own right are licensed by proprietary companies for enormous sums of money. DDR3 costs USD $1m minimum to license a SINGLE-USE controller and PHY. So I tracked down libre-licensed controllers and PHYs libre-riscv.org/shakti/m_class/DDR/
It turns out that there's around two or three libre-licensed controllers, and Symbiotic EDA are happy to do a libre-licensed DDR3 PHY for USD $300k, and to convert it to DDR4 for another $300k. It'll take them a year but that's fine.
Richard Herveille has RGB/TTL (VGA) and that can be connected externally to an SSD2828, SN75LVDS83b or its china clone NT7181, or a TFP410a, or a Chrontel CH7036 or equivalent. https://github.com/RoaLogic/vga_lcd
Interestingly whilst SD/MMC is available as libre-licensed, eMMC is not. So that has to be dealt with.
Video processing blocks are available for constructing all sorts of algorithms (without running into hardware-level patent licensing issues, because they're blocks not full algorithms) https://opencores.org/project/video_systems
3D is however really tricky, you are absolutely right, so I have a place-holder page here http://libre-riscv.org/shakti/m_class/libre_3d_gpu/ where I'm inviting anyone with expertise to step forward, to collaborate to get something done.
And yes, your comment that this is not particularly exciting in academia, and that these problems totally lack novelty, is spot on! SoCs with 3D and video, on which you and your kids can watch Monster School and FGeeTV and play Minecraft, RoBlox and Neko Atsume, are indeed neither academically exciting nor novel.
With the exception of the 3D part, which Jeff Bush kindly researched extremely well, in the form of Nyuzi, by replicating the Intel Larrabee team's paper in which they developed a recursive rasterisation algorithm. https://github.com/jbush001/NyuziProcessor/wiki
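For anyone curious, here is a heavily compressed sketch of that recursive-descent idea as I understand it from the paper: evaluate the triangle's three edge functions at a square tile's corners, trivially reject or accept, otherwise recurse into four sub-tiles. Edge-function setup is elided and shade_pixel() is a placeholder output hook:

    typedef struct { float a, b, c; } Edge;   /* E(x,y) = a*x + b*y + c;
                                                 inside the triangle means
                                                 E >= 0 for all three edges */
    static float eval(const Edge *e, float x, float y)
    {
        return e->a * x + e->b * y + e->c;
    }

    void shade_pixel(int x, int y);            /* placeholder output hook */

    /* Rasterise a size x size tile at (x0, y0); size is a power of two.
     * Because edge functions are linear, testing a tile's four corners
     * is enough to prove the whole tile inside or outside an edge. */
    void raster_tile(const Edge e[3], int x0, int y0, int size)
    {
        int all_in = 1;
        for (int i = 0; i < 3; i++) {
            int in = 0;
            for (int cy = 0; cy <= 1; cy++)
                for (int cx = 0; cx <= 1; cx++)
                    if (eval(&e[i], x0 + cx * size, y0 + cy * size) >= 0)
                        in++;
            if (in == 0) return;     /* all corners outside one edge: reject */
            if (in < 4) all_in = 0;  /* this edge cuts through the tile */
        }
        if (size == 1) { shade_pixel(x0, y0); return; }   /* pixel-sized */
        if (all_in) { /* trivially accepted: a real rasteriser fills the
                         whole tile here without further edge tests */ }
        int h = size / 2;            /* partial coverage: recurse 4 ways */
        raster_tile(e, x0,     y0,     h);
        raster_tile(e, x0 + h, y0,     h);
        raster_tile(e, x0,     y0 + h, h);
        raster_tile(e, x0 + h, y0 + h, h);
    }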
Who the heck are you, anyway, you seem to be quite insightfully well-informed? :)
I think you're more enamored of the idea of open source hardware than positioned to actually benefit from it directly. And having worked in semiconductors long enough, I can guarantee that an open ISA without licensing restrictions does not a fully open source chip, or even core, make.
Wanted to say just about the same thing. RISC-V presents a lot of possibilities. Whether they turn into reality is a whole different matter.
Same goes for the following statement:
"Proprietary products can be severely insecure, and because they can’t benefit from years of scrutiny from open source developers and industry experts"
Just because you can benefit from something doesn't mean you will. Heartbleed taught me this lesson again to make sure. So while I'm excited about the new possibilities, I'm managing my expectations at the moment.
Pretty sure not even RISC-V is "open source hardware", in a meaningful way, unfortunately. There's a lot more to building a platform than the instruction set of the CPU. I've yet to see anyone building an open source hardware platform using RISC-V.. I would be thrilled to be proven wrong, so please do if you can!
lowRISC and PULPino are just that (an SoC and a microcontroller, respectively)
There are also a few open CPU core efforts; most relevant are perhaps Rocket, the reference implementation (single-issue, in-order), and BOOM, a performance-oriented out-of-order implementation.
they ignore you
they laugh at you
they fight you <=======
you win
Now, for something substantial to say... I was looking into computers that can run heavily parallelized grep, and I looked at Adapteva's 16-core Parallella and their 1024-core Epiphany thing, which I think are based on RISC-V (possibly an extension thereto) (EDIT: I'm wrong, see below). It seems that the sole engineer of Adapteva had "left Adapteva in January [2017] to take a position as a program manager at DARPA", and that's the last blog post from the company's website. So... Is anyone working on anything like that, that is currently available or close to available?
I don't know of anyone doing something similar. GreenArrays is the closest I'm aware of in terms of number of cores, but their cores are far more minimalistic. XMOS Xcore is another one that is similar-ish.
One of the problems Adapteva kept facing, and that I think XMOS did too (it's what caused their change in focus to marketing voice processing solutions specifically, rather than their chips), is that they're suited for a very specific niche:
Larger CPUs are way better for things that need high single-core performance and are hard to decompose.
GPUs are way better for anything that's mostly single-instruction multiple-data. That is, anywhere you have a small number of instruction streams doing the same thing to large arrays.
Things like Epiphany and XMOS Xcore do have some appeal in really low power applications, but they'll do best in cases where you have multiple fundamentally divergent instruction streams (lots of branching). Even then, you need enough cores that you can't just pick a bigger CPU and timeslice.
Conceptually it really appeals to me, but ever since the days of the Transputer, we've struggled to decompose problems well enough to make many-core designs like Epiphany compete with ever-faster single-core performance, now coupled with GPUs to slice away the SIMD-type problems.
I'm still a fan, and have two Parallellas, and find it really sad that he wasn't able to get more traction. I think you need to get it to an inflection point, with more RAM per core and more cores per chip (the Epiphany V might have done that if it had become commercially available, e.g. on a PCIe card), to allow people to more realistically find the right problems to solve on them.
Part of the challenge is that unlike with GPUs, there are no problems really screaming out for this architecture, especially not as single-core CPU performance has skyrocketed and you can compensate for the lower core count via multithreading. For Epiphany-type architectures to make sense, you need problems where slow-ish single-thread performance can be compensated for by the ability to run far more in parallel, and/or where deterministic, easy-to-reason-about communication latency helps (e.g. on Epiphany you can relatively easily count "hops" to know how many cycles accessing the memory of any given other core will take, so if you're careful you can use that to do lock-free memory accesses across cores).
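To illustrate that "count hops" point, here's a toy sketch. It assumes X-Y routing on the 2D mesh, so hop count is just Manhattan distance, and the cycles-per-hop constant is made up for illustration rather than taken from the datasheet:

    #include <stdlib.h>

    #define CYCLES_PER_HOP 1   /* illustrative only; check the eMesh docs */

    /* On a 2D mesh with X-Y routing, a cross-core access traverses one
     * router per row/column step, so latency is a simple, deterministic
     * function of grid position -- no cache hierarchy in the way. */
    static int hops(int src_row, int src_col, int dst_row, int dst_col)
    {
        return abs(dst_row - src_row) + abs(dst_col - src_col);
    }

    int read_latency_cycles(int src_row, int src_col,
                            int dst_row, int dst_col)
    {
        /* Knowing this number ahead of time is what lets you schedule
         * lock-free cross-core accesses with confidence. */
        return hops(src_row, src_col, dst_row, dst_col) * CYCLES_PER_HOP;
    }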
I remember the arguments that "free" software must not be any good. That nobody stood behind it. That it was only free if your time was worth nothing. And on and on.
Now Microsoft's best days are behind it. Microsoft is openly embracing Linux. Who would have ever believed SQL Server would run on Linux. Or that Microsoft would create a Linux personality on Windows called "Windows Subsystem for Linux". Microsoft even admitted (sorry I don't have a link) that the reason for this embrace was to bring developers back.
Linux is now in everything that is not a desktop PC. From wristwatches to mainframes and everything in between.
To the point: This now seems to be happening with open source RISC hardware.
What I don't get is why didn't Microsoft open source at least some of their toolset even if they didn't make it free? I remember coding in the 90s. It was a nightmare. You were never sure if you hit a bug in the programming language, the operating system, or the database. All three were closed off. It was maddening.
Pretty much anything from the days of yore that was done stupidly by Microsoft can be laid at Ballmer's feet, either directly, or indirectly via the company culture he fostered.
He was the reason they missed out on open source, missed out on mobile, and missed out on a ton of other stuff as he counted his money while the market slipped ahead without them.
They're better now but I don't think they'll ever be the player they were in the 2000s (and that's probably a good thing for everyone).
I've seen suggestions from ex-Microsoft employees that the only reason they were even charging money for Visual C++ and the rest of their toolchain was to avoid killing off the third-party market. They wanted to be as developer friendly as possible and that meant having a variety of choices so everyone could use what they liked most. Now that MinGW and LLVM have made it easy to get free and open source toolchains for Windows (and even MSVC compatible ones from LLVM) they do just give away most of their developer tools. They can't open source the compiler though because they don't write the C++ frontend parser/lexer, that comes from a third party.
>because they don't write the C++ frontend parser/lexer
This is a common story with a lot of MS stuff, especially in the more legacy products. There are so many open tickets about rewriting 3rd party licensed components but they usually get pushed down for business needs.
Exactly this. There are only two ways I can find to read that:
1. You think that Microsoft is dying, and embraced open source as a last ditch effort to right the ship. If this is your opinion (OP) - you must just be an MS hater. They are CRUSHING it, and I haven't seen this much goodwill from their enterprise customers since the dot-bomb of the early 2000s. People are actually proud and excited to use their products again.
2. You think that embracing open source is going to kill them. In which case you just argued against your own premise.
Either way, I don't see how their "best days are behind them".
The idea isn't that Microsoft is dead as a company that makes wheelbarrows full of money. Microsoft is dead as a company that controls the direction of the tech industry. It's dead as a company that everyone is afraid of. Now they're just a company that competes, does it very well, and makes a lot of money.
Yeah, exactly. They must be young. MS isn't going away any time soon, and I like what they've been doing recently, but uh...
There was a time not so long ago when Microsoft meant computer and the Macintosh was only used by architects and wealthy artists.
A time when Bill Gates was the only computer nerd most people could name, and [everyone knew] he was the richest man in the world. Even though he couldn't get a good haircut.
Google? A silly name for 10^100. Amazon? A river in South America. Facebook? What the hell are you talking about? Here, have another AOL coaster, I just got five more in the mail...
In the '90s and early 2000s, 9 out of 10 new computer users were paying customers and users of many Microsoft products, the desktop or laptop being their primary compute device.
With the shift to mobile (phones, tablets), many new users will spend only a small amount of their time using Microsoft products. Within the next 10 years another 1 billion people will come online; lots of lost opportunity.
"To hear why ARM is great, you need to pay a fee first."
I can't help but laugh at how ARM is responding to RISC-V. They're giving the architecture great publicity. Has Softbank forgotten how to run its subsidiaries?
To be fair, the decision makers, i.e. ASIC architects and designers at top chip firms, most likely aren't hanging out at HN or banking on public opinion here. ARM can bloody well do as it pleases with little impact on business.
Where RISC-V does have a chance is in getting adopted by a company with a long-term outlook and huge shipments. The ideal customer is Samsung, who use ARM in everything from DRAM/NAND to smartphones. If they can prove it commercially on one product line, maybe SD cards, they can extend it to others over the next decade.
Heh heh. I paused partway through the article to look at web.archive.org to see exactly what ARM's website had said, and was going to post it here, and then I checked and saw that the article links to the archived version too. Props to the Internet Archive and to Chris Williams.
It's too late. I call Streisand effect. I, at least, didn't know about RISC-V before. Yesterday I binged on the specification. Thank you, Arm, for enlightening me.
RISC-V is substantially different in scope than things like OpenRISC and SPARC. They are not designing a single core, they are designing specifications and allowing multiple suppliers (both open and proprietary) to create implementations. Also the standards cover much more than just the core. And they're working with Linux distros to make sure the software ecosystem is there too.
I have a Bluetooth module sitting on my desk that happens to use a CPU descended from OpenRISC - except it's 100% proprietary in both implementation and instruction set. I don't think there's even any public documentation of the instruction set. Basically, some of the OpenRISC folks decided there was more money in proprietary chips; they also blocked the merging of the gcc code upstream, which helped kill it off as a viable open architecture.
That's somewhat of a strange argument. The predecessors were highly successful and influential academic projects. They simply never had the idea/motivation to try to move that stuff into the real world.
All of this stuff also now profits from the change/slowdown in Moore's law.
ARM Holdings is one of those stories that somebody in the history of science, economics, or telecommunications should write up. Amazing outcomes. I feel sad that the industry went to an IPR model which says 'nobody except geniuses works in the UK, we send this to fab lines elsewhere', but as a model, it totally worked.
The physical fabs are separate for good reasons. ARM's problem is that they capture such a small portion of the value. Companies like Nvidia, Qualcomm, AMD and Apple are also fabless, but they get a big chunk of the profits.
This also seems like it'd be RISC-V's problem too. ARM is, from what I understand, pretty cost-competitive - they're not a big fat high-profit behemoth ripe for disruption like say Intel. As it stands, I think the only companies that can save money with RISC-V are the ones with enough volume, staff, and specialised needs to take their chip design entirely inhouse without using the services of companies like SiFive and who also don't need compatibility with the existing software ecosystem.
The current debt-based economy encourages this kind of behavior. I am more familiar with the US, but I think UK is not much better at this point. The worst was probably things like Rambus (not long after US DRAM manufacturing declined).
The first casualty of RISC-V will not be ARM; it will be MIPS. MIPS CPUs are the current choice for low-cost embedded systems that run a Linux kernel, from home routers (where MIPS has nearly 100% penetration) to low-end set-top boxes. I expect MIPS will be dropping their licensing costs even more to compete as RISC-V matures and becomes a better option.
> ... from home routers (where MIPS has nearly 100% penetration) to low-end set-top-boxes.
I don't think that's true anymore for new installations. A lot of those home routers and set-top boxes are using Broadcom chips, and they switched from MIPS to ARM years ago.
The mnemonics of RISC-V are terribly thought out when compared to OpenSPARC or the legendary MC68000. Particularly galling is the choice of dst, src rather than src, dst (although that could be solved by using the UNIX® version of as(1)). Terribly thought out design internally. Not that ARM is much better, mind you.
If Intel has taught us anything about ISAs, it's that having a shitty one doesn't much hurt.
If ARM has taught us anything about ISAs, it's that starting with a nice one doesn't mean it won't end up a mess. (There exist 32-bit ARM implementations with precisely zero opcodes in common.)
Oddly enough, these are among the least concerning, least permanent aspects of the specs; not sure why they bother you so much either. It's frankly not a whole lot different from MIPS, except lacking exposed delay slots, and a number of instructions that turn out not to have been used much. My biggest issue is with how many instruction bytes it can take to load an arbitrary 64-bit immediate without an IC, but that can be gotten around by using an IC.
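For scale, here's a hedged sketch of the worst case when a 64-bit constant has to be built inline from instructions alone. Real compilers pattern-match cheaper forms, so treat this naive lui/addiw/slli/addi expansion as an upper bound, not what gcc/llvm necessarily emit:

    #include <stdint.h>

    /* Count instructions for the naive inline materialisation of a
     * 64-bit constant on RV64. addi takes a 12-bit signed immediate,
     * so naive splitters feed in 11 bits per slli+addi round. */
    int rv64_naive_imm_insns(uint64_t imm)
    {
        /* Values that sign-extend from 32 bits: lui + addiw suffice. */
        if ((int64_t)(int32_t)imm == (int64_t)imm)
            return 2;

        /* General case: lui + addiw build the top 32 bits, then three
         * rounds of slli + addi shift in the low 32 bits. */
        int insns = 2;
        for (int remaining = 32; remaining > 0; remaining -= 11)
            insns += 2;
        return insns;   /* 2 + 3*2 = 8 instructions = 32 bytes */
    }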
I don't know much about AMP, so could you tell me why or when a non-AMP page is preferable. I read the wiki on it, but it didn't clarify my question. Thank you.
Both versions render okay for me (Firefox), although the non-AMP version is scrunched into a newspaper column that only uses half the window width; do you have noscript disabled?
I'm not. I have noscript installed. I whitelist certain things, plus anything run by the site itself (which is automatic, I don't have to configure it per-site).
the main difference is that AMP pages are served directly from Google's servers, so Google gets your info directly. non-AMP pages are served from random companies' servers, and they have to inject Google Analytics via a script tag.
To me, it looks like that site was not created by ARM, no one was able to prove ARM created it, and it could be just a scam to attack ARM's public image. That's bad journalism right there.
Good find. Also, the article does say it talked to at least one person at ARM. It would require that either Chris Williams made stuff up and wasn't caught by anyone else at The Register (he is Editor-In-Chief, but I hope someone still checks his work), or the ARM spokesperson and anyone else Williams talked to was lying.
"Arm told us it had hoped its anti-RISC-V site would kickstart a discussion around architectures, rather than come off as a smear attack. In any case, on Tuesday, it took the site offline by killing its DNS.
“Our intention in creating a webpage to offer key considerations around commercial RISC-V based products was to inform a lively industry debate," an Arm spokesperson told The Register."
Honestly, for someone who doesn't know TheRegister, even that article looks like a scam to me.
The linked image stored on the ARM server from the comment above is a way stronger proof imho.
Edit: The picture could still have been hijacked from its initial purpose by a malicious actor. Maybe ARM didn't plan to use that picture the way it was presented on the RISC-V-related domain.
This is the tech equivalent of “CNN is making things up wholesale” - The Register is a well-known, mostly reliable source of news. It might occasionally get things wrong/have biases/etc but they’re not about to start making up sources.
Well, I don't trust CNN either so it's actually a good comparison.
It seems "well-known" sources of news are using their comfortable position to accept a lower quality of their journalism.
I'm not saying they're ill-intentioned, but they've chosen to post often (with possibly unverified information) rather than post rigorously checked facts, cross-checked with different sources. I feel like I have to do their work of cross-verifying everything they post myself...
CNN doesn’t make up sources. The Register doesn’t either. The Register has a source at ARM who states that the website was theirs. What, exactly, are you suggesting happened?
That statement could be made about literally any article linked from Hacker News, in addition to anything on the rest of the internet - why did you decide to make it on this one?
Had it been a "false flag" operation to smear ARM itself most likely the company would have made public statements to state that they don't own the website.
> The website had an ARM logo and a copyright note:
I can copy paste that in 5 minutes to any website I own.
> Had it been a "false flag" operation to smear ARM itself most likely the company would have made public statements to state that they don't own the website.
Maybe it isn't worth worrying about a scam that only 10 nerds on the internet care about.
We have a website that has the ARM logo and professional graphics. We have the main graphic of the site that was hosted on arm.com. We have paid ads. We have an article claiming that ARM apologized for the website, which coincides with the website being taken down.