Hacker News
Android and RISC-V: What you need to know to be ready (googleblog.com)
343 points by tormeh on Oct 31, 2023 | 144 comments



They expect developers to get ready?

They're thinking about this wrong. They need to make it zero work, followed by optional optimizations, like Apple did with their M1.

Remember when Google told everyone to test their Android apps on x86? And then again on MIPS? And then nothing happened. Developers will not waste their time.


Zero work? I guess memory fades quickly sometimes.

Apple rented out DTK systems for $500 for developers to get their hands on ARM hardware and port their applications. Hardly zero work. You couldn't even keep the hardware, you had to send it back (and originally you wouldn't even get $500 in credit to buy a production system with, luckily they fixed that after the easily predictable uproar).

Google is providing software emulators instead, which are much easier to work with and don't require complicated device logistics. Not only that, but the only people who really need to do work to get ready are those shipping NDK applications, or those working on hardware support for RISC-V.

Java/Kotlin applications would not need any porting. Sounds like zero work to me!


If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK.


> If you were initially fine with software emulation (i.e. Rosetta 2)

You'd need a DTK to know if you're fine with emulation or not.

If you cared at all about your app running, you wouldn't just assume that it magically runs fine on an emulation layer you never touched, and that at speeds that are reasonable.

For a reference point, the first Rosetta was far from great: some apps could run against it, but it was often a painful experience. That pain pushed devs to put the effort into making native versions, but it also means that, going by Apple's track record, you couldn't simply expect Rosetta 2 to deliver acceptable performance.


Are you talking about Rosetta 1 or 2? Because Rosetta 2 was not an "ultra painful experience" in any sense of the phrase.

Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed.


> Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed.

Yes, but if you were a developer who had to make sure your application worked on the upcoming M1 models, there was no way to know that without getting the DTK.

At the time when Apple announced their transition to their own silicon and that there would be a Rosetta 2, the only thing you had to go on other than the DTK was the precedent of the first Rosetta. It was a major feat back in its day (god I feel old just writing that...) but it was nowhere near Rosetta 2's compatibility and speed. Armed with nothing but the first Rosetta's precedent, it was a somewhat risky outlook.


They are saying that given Apple's track record with Rosetta 1, developers had no reason to trust Rosetta 2 would be sufficient to test their applications with, and would thus need to buy the expensive DTK.


On Rosetta 2, from the horse's mouth:

Rosetta translates all x86_64 instructions, but it doesn't support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions.

I can imagine quite a number of users running into the above situation in multimedia related code.


That’s still a small minority of all the apps out there, though.


But we're talking about Macs, and that's a huge chunk of their userbase. The people who use Macs for actual work can be broadly classified into two groups: devs who need Xcode, and media folks. Yes, there are exceptions, but that's the majority. For one of those groups AVX is pretty important.


And those apps had to be ready too. You don't build a reliable platform by randomly breaking "a small minority of apps". You yourself are certainly in a "small minority" for at least a few features you rely on.


This is just an endless logic circle.

“Not all apps needed to worry about the CPU change”

“But some did!”

“And Apple made test hardware available for those people”

“But not enough for all apps to be tested”

“Not all apps needed to worry about the CPU change”


To clarify:

The initial claim was, lxgr: "If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK."

The subsequent claim was, xvector: "Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed."

Posters, myself included, are reacting against these claims, as they both put the cart before the horse, and the second gives only an end user perspective.

Devs had verified with a DTK that Rosetta 2 ran their programs acceptably. Keep in mind patches had to be issued for programs which did not check for the presence of AVX, AVX2, or AVX512, else they would crash. This invalidates the first claim, and it shows why the second claim is only half the story.

So the logic follows a line rather than a circle.

Also nobody made the claim that: “And Apple made test hardware available for those people, But not enough for all apps to be tested”


The "ultra painful experience" part is about Rosetta 1.

For Rosetta 2, I agree Apple made a much better translation layer. There were still swaths of software that couldn't run on Rosetta 2, but the previous generation of Intel machine was still there kicking and alive, so people who didn't feel like taking the risk didn't need to.


Java/JVM on ARM/M1 had an enormous amount of bugs for the first several years, fyi. Source: I encountered many of them, which eventually got fixed.


luckily many of the things which caused bugs on ARM were related to the weaker memory ordering, and to code having (invalid) implicit assumptions about things like memory barriers

luckily because this has mostly been fixed, and it's also the biggest stumbling block when going from x86 => RISC-V

Though there was something about the specific LR/SC definition which I found quite problematic when it comes to implementing certain atomic patterns. But that was 2?? years or so ago and I don't remember the details at all.

Either way, theoretically, if you have a correct, standard-conforming C++ program without UB it should "just work" on RISC-V by now, but then I don't think such a thing exists (/s/j).


> if you have a correct, standard-conforming C++ program without UB it should "just work" on RISC-V by now, but then I don't think such a thing exists

Fortunately, what does exist is programs that have been ported to work on arm64 (or are native there), which will Just Work on RISC-V, which has a slightly stronger memory model than Arm's.


Related, Debian initial RISC-V official port sid build status[0] and overall[1].

0. https://buildd.debian.org/status/architecture.php?a=riscv64&...

1. https://buildd.debian.org/stats/


Why is this downvoted? Is it technically wrong?


"Java/JVM" in general or on Android? Seems to be the second b/c I heavily used Java in the 90s and didn't encounter JVM bugs.


Android doesn't use the JVM.


It actually does: the devices might run ART, but the whole developer toolchain depends heavily on the JVM: Gradle, Android Studio/IntelliJ, the plethora of little CLI tools that transform .class files into .dex, desugar modern JVM bytecodes into ones that can actually be mapped to existing .dex opcodes (for pre-Android 12 devices), library calls that need to be polyfilled into something else, ...


Stupid question: why does android use ART (and Dalvik before that)? I guess it's more performant for mobile phones, but then, is there a reason why other java apps don't use it outside of android? Has anyone tried using ART/Dalvik outside of the android ecosystem?


Easy, Google screwing up Sun licensing for embedded devices and creating their own Java based ecosystem from scratch.

Note that ART as it is now is the third implementation, first it was Dalvik, then ART pure AOT, and now ART interpreter/JIT/AOT.

Nokia and Sony-Ericsson already had relatively good J2ME implementations for Symbian, and Blackberry OS was powered by Java, when Android came out.

Unfortunately it didn't go the same way for Google as it did for Microsoft.

Gosling interview on the matter,

https://www.youtube.com/watch?v=ZYw3X4RZv6Y&feature=youtu.be...


Definitely agree about the toolchain, I was referring more to code execution on the OS itself.


Thanks, didn't know.


It's a bit of a question of how you define the term "JVM". For many people it just means the thing which runs the bytecode Java was compiled to, in which case the Android Runtime (ART) is a JVM (it has AOT, JIT, and can run bytecode produced from Java if done so in the right way).

But a more nit-picky/correct definition would expect a JVM to follow various specs (e.g. expect the bytecode to have exactly the same format, features, etc. as in a classical Java application) and to have various features which do not apply to ART at all.

Or in other words, people who have nothing to do with Java might call ART a JVM in a generic way, but people who do might, for good reason, be very insistent that this isn't the case. (Also, Google's lawyers will be VERY insistent it's not a JVM.)


Which is a kind of nit-picking for Google's lawyers, as the approach taken by Dalvik/ART has been quite common in the embedded space; see PTC, Aicas, Aonix, microEJ, WebSphere Real Time, ...

Not all of them are still available, however all of them do support AOT compilation and their own bytecode format, more optimized for their use cases than regular .class files.

The big difference between them and Google is that they always played by the Sun/Oracle rules regarding Java licensing.


they probably mean the Sun (Oracle) JVM on ARM; as far as I remember there were tons of issues with that one

and in general there were tons of issues with multi-threaded code during the time Java moved to the "current" memory model (was that in Java 1.5 or Java 1.6? I don't remember). The Sun (Oracle) JVM having issues on ARM was still a thing after a huge part of the ecosystem moved to Java 1.6+, but I think it wasn't much of an issue anymore by the time Java 1.8 was approaching. But that was many years ago and my memory is a bit vague.


"and in general there were tons of issues with multi-threaded code during the time Java moved to the "current" memory model"

I wrote code or managed Java development teams from around 1996 to ~2010 and I don't remember any problems with multi threaded code or memory models. But of course you might be right if you did run Java on an ARM mobile like the Nokia.

The only challenge I remember with two high load (thousands of write transactions/sec - back then) websites were GC pauses, where we had to hire consultants to help with tuning the GCs on different machines for the high load.


GNU/Linux applications are typically quite portable across CPUs. Is Android really that different? I would expect that once you have ported an NDK application to at least two architectures, the third one should be really easy. And with Android, you already get two easily testable architectures (the ubiquitous aarch64, and x86-64 under virtualization).

(It's not that many Debian or Fedora community contributors have a mainframe in their basement, yet the software they package tends to build and run on s390x just fine. Okay, maybe there are some endianness issues, but you wouldn't hit those with RISC-V.)


Assuming you have the source code for all your native code, then yes it should mostly work. 3rd party libraries can contain native code which means that you may be dependent on someone else porting their code.
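And once the native code itself is portable, adding an architecture is mostly a build-config change. A hypothetical `build.gradle` sketch: the `riscv64` ABI name matches what Google has announced, but Gradle/NDK support for it may still be incomplete, so treat this as illustrative rather than something guaranteed to build today.

```groovy
android {
    defaultConfig {
        ndk {
            // existing ABIs plus the new RISC-V target;
            // everything else (Java/Kotlin code, resources) is unchanged
            abiFilters "arm64-v8a", "x86_64", "riscv64"
        }
    }
}
```

The APK then just carries one more `lib/<abi>/` directory; devices pick the matching one at install time.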


> Is Android really that different?

no, this has little to do with android

but "GNU/Linux applications are typically quite portable across CPUs" is only the case because a lot of people have a lot of interest and time invested into making that work

but for many commercial apps there is little incentive to spend any time on making them work with anything but ARM (even older armv7 might not be supported, because depending on what your app does the market share makes it not worth it)

still, with both ARM and RISC-V having somewhat similar memory models, and with people today writing much more standard-conforming C/C++ code instead of "it's UB but will work on this hardware" nonsense, I don't see too many technical issues

one issue could be that a non-negligible number of Android apps which use the NDK use it only to "link" code from other toolchains into Android apps, e.g. Rust binaries. And while I'm not worried about Rust, I wouldn't be surprised if some toolchains in use don't support RISC-V when it starts to become relevant. Especially when it comes to the (mostly but not fully) snake-oil nonsense stuff banking apps tend to use, I would be surprised if there weren't some issues.

Though in the end the biggest issue is a fully non-technical one:

- if you release an app with the NDK you want to test it on real hardware for all archs you support (no matter if there is some automatic cross-compilation or translation, like Rosetta)

- but to do so you need access to such hardware; emulators might be okay but don't always cut it

- but there aren't any RISC-V Android phones yet

- but to release such phones you want to have apps, especially stuff like PayPal or banking apps, available

- but to have such things available the providers must judge it to be worth the money it costs to test, which is based on market share, which starts at 0 and grows slowly due to missing support for some apps and missing killer features

So I think RISC-V Android will be a thing, but as long as multiple big companies are not pushing it strongly it will take a very long time until it becomes relevant in any way.


At least for my C/C++/ObjC code it was literally zero work, just recompile and done.

In reality I have a lot more trouble with differences between macOS versions than differences between CPU architectures.


> Remember when Google told everyone to test their Android apps on x86. And then again on MIPS? And then nothing happened. Developers will not waste their time.

x86 builds of Android apps & libraries are extremely common to have since it's the most common target for emulators.

As for MIPS, I don't think Google ever pushed it. It was just kinda there.

Also also, this blog post is aimed at device manufacturers more than app developers which is why it talks about building AOSP & pending patches. It's not asking app devs to do anything (in fact it even says at the bottom "Make sure to stay tuned as we look into ways to make it as easy for Android developers writing native to target new platforms as it is for our Java and Kotlin developers!")


MIPS died for Android after Imagination Technologies acquired it in 2013. There was nothing for Google to do when there were no new SoCs or new phones. As for x86, it is still widely used thanks to Chromebooks. Unless you use the NDK there is nothing on your end to do for new targets.


MIPS died for android well before that - they only had microcontroller-level cores in development, which were rather unsuited to the performance requirements of android.


> They expect developers to get ready?

No, I don't think they do in a general sense. It's in the title: "What you need to know to be ready" --> if you would like to be ready, we have some information/advice for you. You can take it or leave it.

% Note: I'm using the word "you" here as it is used in the title, not you specifically.



"The pronoun one often has connotations of formality, and is often avoided in favour of more colloquial alternatives such as generic you."


That's because nobody shipped devices that mattered with that architecture.


Could the causal chain go the other way? (Nobody shipped any devices with that architecture, because app availability was too low)


It's possible, but I think the more likely issue is nobody shipped very many devices with x86, because Intel cancelled development of x86 for phones[1] before they had many design wins.

Also, and I missed this at the time, it seems like Qualcomm was restricting manufacturers from also using competitors' chips[2], and although it seems the ruling was reversed on appeal[3], there's still evidence of restrictions. It would be unappealing to market an x86 device if it means losing access to Qualcomm CPUs, especially in an era where Qualcomm modems were practically mandatory, so it makes sense that the only x86 phone I can think of (the Asus ZenFone) was made by a company that didn't make a lot of phones.

Personally, I don't think the lack of app support would have been that much of a problem for sales of phones in the 2012 to 2016 time frame. If you got a couple hundred thousand phones out there, app developers would add support, because it wasn't too hard, a lot of developers were running x86 Android emulators, and they may have already had the x86 native code libraries and just didn't ship them because why bother when there's no phones.

Some of the first-generation Asus ZenFones ran on x86, and it might have been rough for a while, but I think they sold reasonable numbers, and app developers started shipping x86 libraries. I don't know when Google Play added support for ABI-specific APKs; maybe that was a factor too (if you have to put everything you support in one APK, then you'll be careful about what you support).

[1] https://www.pcworld.com/article/414673/intel-is-on-the-verge...

[2] https://www.cnn.com/2019/05/22/tech/qualcomm-antitrust/index...

[3] https://www.reuters.com/article/us-qualcomm-antitrust-idUSKC...


No. For x86, they were fine when all mobile CPUs were single- and dual-core; Intel actually had the upper hand for a short time thanks to hyperthreading. But once Qualcomm and other ARM SoC makers jumped to quad-core, Intel's CPUs were power-inefficient and slow compared to the ARM SoCs. So they gave up on smartphones and concentrated on tablet chips for Android and Win8 devices. Later they gave up on Android tablets too, mainly due to cheap ARM chips from Mediatek and others. As for MIPS, they gave up on making SoCs for Android after Imagination Technologies acquired it in 2013. Also keep in mind smartphones were growing double-digit market share in the early 2010s, and developers were definitely eager to support whatever new architecture to gain market share. Even though there are no x86 smartphones or tablets anymore, the app support is quite good thanks to Chromebooks, Google Play Games for Windows, and Android app support in Windows.


Intel had (or would have had) few significant issues at the time competing on the CPU side of things. What killed them in phones was the radios. They sucked at them, and buying a discrete chip for it hurt margins and efficiency too much.


There are quite a few x86 Chromebooks, millions. If you have one it is kinda surprising just how many Android apps run on them. (Or is it all emulated?)


The pure-JVM apps should just work. Apps using binary libraries may be AOT-translated by Google or be JITted on device. The tech was there about a decade ago specifically for Android (look up Houdini), but in general it's as old as the hills.


Most Java-only apps will Just Work, as you just add another compiler option.


What compiler option would a pure Java app even need?

Android apps aren't AOT-compiled to any specific architecture at build time; that happens after installation on the device itself.


Small correction: on Android 5 and 6 it happened during installation, hence why installs took ages, and a hybrid interpreter/JIT/AOT toolchain was introduced in 7.

Nowadays AOT compilation happens when the device is idle, taking into account either the generated PGO metadata gathered by the JIT, or the one downloaded alongside the APK at installation time.


Yes, but the point is that all of these are implementation details of the runtime. From an application developer's point of view, Android executes architecture-independent DEX files.

All versions of Android keep the original DEX bytecode around to allow for later recompilation (JIT or AOT) and "later" can mean anything from "seconds after the APK was downloaded" to "during/after the next Android update" or "at any convenient time" (e.g. when the phone is charging and has nothing else to do).


if you don't use the ndk, isn't it, in fact, zero work?

how prevalent is ndk use? games and multimedia apps maybe?


Not every developer is an app developer. Some develop platforms, devices, physical products.


It's a whole new architecture. Developers have to put code to support it into the AOSP. It's actually pretty similar to Apple and the M1 (although IIRC all of that development was done behind closed doors). Apple had to do a ton of work to smooth out the M1 transition, and it'll take a lot of work to make the process equally smooth on Android


I just expect better compilers. Having a diverse platform is very healthy, but it is completely unreasonable to expect people to keep switching technology if it isn't easy to adopt.


I don't think that makes sense. RISC-V is by definition a new ISA, M1 is based on the existing ARM ISA.


That transition was from x86 to ARM.

This is ARM to RISC V.


From a mature architecture to another, versus going to a "pretty much only used on embedded devices" arch?

Yeah, they really have got their work cut out for themselves!


Pretty bad take given that Android recently transitioned from 32 bit ARM to 64 bit ARM, which are essentially totally different ISAs and 64 bit ARM was used in absolutely nothing at all before Phones -- iPhone 5s first, then Android a couple of years later with e.g. Galaxy S6. The one advantage that transition had was that the same CPU could run both ISAs -- something ARM has dropped in their latest (and all future) 64 bit-only CPU cores. And for some time now apps could still be 32 bit but the OS had to be 64 bit.

A transition to RISC-V will be cold turkey. It's technically possible of course to build one CPU that can run both A64 and RV64, but only ARM can legally do it, and they probably won't unless they're really losing, sometime 5 or 10 years from now.


Android phones are pretty much embedded devices though (unfortunately), so I suspect we'll see a load of phones that have super-custom kernels with tons of hard-coded driver configs etc. Similar to SBCs other than the RPi.

It's kind of an issue for ARM too but they're a bit further towards solving it than RISC-V.

In any case it's mainly a problem for people that want OS updates or the ability to install other OSes, so it's not going to affect app writers or normal users much.


>It's kind of an issue for ARM too but they're a bit further towards solving it than RISC-V.

Can you elaborate on why you think so?

I understand it is the opposite: RISC-V is much further towards solving it.

This is because RISC-V has put effort from the start into standardizing boot process (spl -> sbi -> uboot|uefi), ISA (profiles), the firmware-kernel interface (sbi, uefi protocol, acpi) as well as the platform itself (platform spec, standards for interrupts, timers, hypervisor support, the uart requirement, the watchdog and further platform standardization efforts).

This has happened (and is happening still) before significant deployment of SoCs using RISC-V microarchitectures as the application processor.

Meanwhile, ARM realized, quite late, that this is important, and is lagging behind, with plenty of non-fixable, bespoke platform hardware deployed, and low vendor uptake as they are asking their clients to change what they already have in place.


I haven't been keeping up to date with RISC-V happenings lately but does this imply that it'll be easier for distros to have live images that can install to arbitrary RISC-V machines like we're used to on x86? If so, that's exciting!


>it'll be easier for distros to have live images that can install to arbitrary RISC-V machines like we're used to on x86?

Yes. The experience will be equal or better over the IBM PC-derived platform, by design.

PC was an accidental platform, RISC-V platform spec is an intentional (and very informed) one.


So will this require the C extension that Qualcomm seems to be so opposed to?

See the subthread at https://news.ycombinator.com/item?id=37996820

Qualcomm's latest: https://lists.riscv.org/g/tech-profiles/attachment/400/0/AOS... where they claim that their proposed "make RISC-V a bit like aarch64" Zics extension, plus a 32-bit long jump, achieves better code density than RV64GC, and without the downsides of the C extension they've been harping on about.


I don't know why you're being downvoted, because this question is very much in play. If Qualcomm decide they will produce RISC-V snapdragons without the C extension then it's hard to see how Android NDK apps using the extension would work (except incredibly slowly with some sort of trap and emulate).


Google can always require it to be CDD compliant, and then those RISC-V Snapdragons simply won't run Android. Just like NEON is required for all ARM CPUs since API 21.


And Vulkan, optional in Android 7, became required in Android 10.


Qualcomm did almost exactly that with parts of the floating point on ARM before.


Hm, Qualcomm seems to have some point there, but also seems to be driven by wanting to reuse more ARM tech.

I guess Google forcing RV64GC or RV64G_Zics could be the tie-breaker here.

But while removing or changing the C instructions once they're rolled out isn't really an option, as long as it's mostly about emulator testing they probably don't care too much.


Nah, Qualcomm just wants to reuse Nuvia core with RISC-V frontend making minimal changes.

Their proposal is no good[0].

>No clearly discernible technical benefit for native RISC-V cores, and some clear technical disadvantages for both low-end and highend implementations

0. https://lists.riscv.org/g/tech-profiles/attachment/353/0/RIS...


Alternatively, you could say that SiFive just wants to retain their investment in cores with the C extension. Or that Krste is so wedded to the idea of "RISC purity + C + fusion" as an alternative to more complex instructions that he invents justifications for it.

In reality, it seems logical to me that all the players here have an interest in the long-term health of RISC-V, including applicability to higher-end cores. Qualcomm is showing data supporting their position. And SiFive is showing data supporting theirs. It will be interesting to see which wins, and whichever it is, it'll hopefully be the best for the long-term health of RISC-V, even if it means one of the parties needs to partially redesign their cores.


It's entirely possible to have both in the same CPU. At the same time, with a program using any mixture of C extension and Qualcomm's Znew extension that it wants to. The opcodes don't clash.

This is an application profile question -- what extensions are required in the CPU in order to run shrink-wrapped software -- not an ISA or hardware question.


This looks a lot like the thumb ARM thingy?

The real question is why?

nm: https://www.quora.com/Why-was-the-ARM-Thumb-instruction-set-...


> This looks a lot like the thumb ARM thingy?

Unsure what you're asking. The RISC-V C extension has some vague similarities with Thumb, but the Qualcomm-proposed Zics extension absolutely hasn't: Zics adds things like more advanced addressing modes, load/store pair instructions, etc. that are found in the aarch64 ISA, all as 32-bit instructions. Notably, ARM didn't create any 16-bit Thumb variant of aarch64.


Sidenote, but when I learned of the new AOSP emulator, for some reason I thought that to use the Cuttlefish emulator (as opposed to Goldfish) with AOSP, you needed to run `acloud`, which refused to run unless I was on Ubuntu. I tried the LD_PRELOAD hack where you can replace the `uname` function output, but then it started trying to run dpkg (I use NixOS).

The old emulator is a real pain to get working, and the new one comes with many benefits.

Somehow I never knew you could just run `launch_cvd` manually.


In case I forget, please file a feature request on https://github.com/tadfisher/android-nixpkgs so we can get this packaged, at least in my SDK repo.


Oh, hey! Love your project for running Android Studio on NixOS, will do.

Would also be cool if we could get Android Studio for Platform running as well, I'm guessing it shouldn't deviate too much from regular Android Studio.


it's new to me too, I don't even know cuttlefish, it has been a while


I am hoping that "Android for RISC-V" is more successful than "Android for x86", which was a completely unmitigated disaster (yes, I still have a Lenovo x86 Android tablet :-)). Reading this doesn't really give me a lot of confidence though. High hopes, low expectations.


The market will unavoidably be flooded with cheap consumer devices carrying RISC-V implementations as their main processors.

I'd be more concerned about ARM support stagnating.


just because the ISA is free doesn't mean the chips will become super cheap

RISC-V is currently not really that competitive in the lower-end market, as there are just too many cheap ARM chips

but it is e.g. very competitive for companies needing large quantities of simple but slightly customized chips, where existing cheap ARM chips aren't an option and customized ARM chips are too expensive

and there are currently no competitive RISC-V chips in the high-end market

it can somewhat compete in the mid market, if it weren't for low quantities driving the price up

it also could be interesting if e.g. Intel or AMD decides to enter the non-x86 market, as they could cut out license costs to ARM

but all in all I wouldn't be worried about ARM support being reduced in the ecosystem for many, many years to come


Chromebooks are mostly x86 now. They run Android apps. Also the Android emulator (not everyone is a rich American with an ARM MacBook).


A constant problem on the Lenovo machines was that byte order wasn't preserved when it came to data, and as a result stored "state" would often blow up apps when it came back in the wrong byte order.


a few years back, Android was used "widely" in the embedded space wherever a GUI touchscreen was needed beyond phones and tablets; now it's pretty much just for phones, and to some extent tablets.

the reason is, unlike phone and tablet vendors, small players just can't keep up with the new changes from Android, which is very complex, so gradually they moved away.


What are they using instead?


Linux with compositors (sometimes custom ones) for graphical applications. Weston is fairly popular, for example.

Or even nothing at all, you can run Qt6 standalone and have a very nice touch interface.


Raspberry Pi using Chromium in kiosk mode and a small touchscreen is my go-to for stuff like this.


Out of curiosity, how performant is Chromium on the RPi in your experience?


Some years ago I worked on software for the Google Home Hub (v1), which according to this teardown¹ has a quad Cortex-A53, same as a Raspberry Pi 3. Before Flutter and Fuchsia, they ran Chromium² on Linux, and they were fine. There was some tuning, but Chrome's performance tools³ are not bad and work decently on remote targets.

¹ https://www.edn.com/teardown-google-home-hub-smart-speaker-a...

² Chromecast builds, which are slightly less than full Chrome: https://source.chromium.org/chromium/chromium/src/+/main:chr...

³ https://www.chromium.org/developers/how-tos/trace-event-prof...


I tried a Pi 3B for stuff like this and it is fine at 720p if you don't need many/complex animations. Fine, but not good.


~~Moblin~~ ~~Maemo~~ ~~MeeGo~~ Tizen and other Linux+compositor OS, with custom app stacks


All of them are discontinued. postmarketOS seems to be the only well-maintained mobile Linux distro


Tizen is still in development, a new version was released just yesterday[0]. Samsung still uses it in their smart TVs and smart fridges.

[0] https://docs.tizen.org/platform/release-notes/tizen-8-0-m2/


At my company we develop SDK based on a JVM for small devices (https://developer.microej.com/supported-hardware/). We also have a compatibility layer with Android applications.


Android requires really large memory and storage; I can hardly believe it can run on lightweight CPU systems, unless it's running elsewhere and just talking to the MCU via some layer or API.


Yes, that's how some of our customers use it: have the heavy computation done on one CPU and use another lightweight MCU when possible to save battery (for example in smartwatches).

We have our own JVM tailored for MCUs.


I think DJI still uses Android for their controllers


no way; for the GUI, yes, but Android is not good for real-time control at all. It can work alongside an RTOS though.


OOTL here. Are there any major phones which implement RISC-V? Also, does this require changes across the stack? i.e. not just OS ones.


Not yet, but they're coming down the pipe.

> Also, does this require across the stack changes? i.e. not just OS ones.

I'm not sure where you draw the boundary between 'stack' and 'os'. Can you expand a bit?


> but they're coming down the pipe.

Do you have any links? I'm curious, and not a little bit skeptical about the near-term competitiveness of RISC-V in the mobile space.


There were some HN frontpage submissions on Qualcomm's RISC-V SOC a couple of weeks ago:

https://news.ycombinator.com/item?id=37924092

https://news.ycombinator.com/item?id=37919907

It doesn't exactly sound imminent.


> It doesn't exactly sound imminent.

Mostly it's Chinese companies that have been very busy designing & manufacturing actual silicon. And prices for such products have come down a lot, too.

It wouldn't surprise me if at some point there are RISC-V based phones, tablets & other devices on the Chinese market that most people outside China simply don't know about.

Or that a teardown of a cheap phone sold in, say, India turns out to reveal a RISC-V SoC inside, without much public attention beforehand.

Probably sooner rather than later. RISC-V is bound to become the lowest common denominator for computing devices, especially at the low end: picked by default unless <insert specific requirement here>.

Android being ported is a logical consequence of this.


It's gonna be cheap RISC-V Chinese phones for a while. And they'll get faster and faster every six months. But they'll stay cheap. Until even Samsung can't compete and stops making phones, like LG did.


A couple of weeks ago SiFive announced a new core, the P870, which is in the ARM Cortex-X3 class. That's the biggest core in the latest Snapdragon 8 Gen chips.

It typically takes 2-3 years from core announcement to being in an SoC for a high end phone.

They are coming.

Arm will have something newer by then, of course, but a 2023 flagship Galaxy S23 is not going to look stupid in 2026.


Also, who would potentially be building it? Unless you tell me Samsung has a five-year roadmap, I am skeptical.


Chinese companies? They are working hard to get away from US proprietary tech.


Probably the same companies that contributed the initial Android RISC-V port.


Every hardware company makes some plans that far ahead because the pipeline is so long. Details of course change a lot up until about 18-24 months before release, but the general direction must be known way earlier.


Nope. Qualcomm does have smartwatch RISC-V processors in the pipeline. It's not nothing?

Haven't tried the AOSP port for the LicheePi 4A yet. Probably should give it a whirl just to see if things just work.

It's easier to just try downloading Minecraft through the Play Store than to bootstrap the entire toolchain needed to get PolyMC working.


The last thing I will install on a RISC-V is Google Android.


I am wondering: if RISC-V becomes widely used in mobile CPUs, will this adoption have a positive effect on the ecosystem, or will the big companies steer RISC-V towards more corporate use cases and make it less friendly to enthusiasts?


but maybe you will buy a RISC-V device with it preinstalled, before removing it ;=)


Only a fool would run Android if they can avoid it, even more so when you get the opportunity to switch architectures and go vanilla Linux.

The one problem is banking; at least in Sweden we are still dependent on mobile BankID (for online purchases above $20, for example), which only runs on iOS and Android.


Is there any feasible way to run a non-Android OS on a phone and still have power efficiency and support for Android apps with multitouch, GPU acceleration, etc.? (i.e. some ready-made solution that runs Android in KVM efficiently, or provides the Android kernel features and userspace RPC servers)

Without that, one must run Android to get access to mobile apps, which is essential.


We have MitID for banks and government sites in Denmark. Sometimes you also use it to authenticate purchases. You can get a small device that you use for authentication if you don't own or want to use a smartphone. It seems odd that Sweden forces everyone with a bank account to also own a smartphone. Especially when many elderly people don't have one.


it's not just Sweden

e.g. in Germany you have similar situations when it comes to online banking at some banks (though not when it's about card-to-terminal payment)

for example, credit card apps in the EU have 2FA (which doesn't always trigger, and there are still legacy protocols for payments from outside the EU, so a bit, uh, maybe pointless)

this 2FA is handled for most banks by an app (different apps though)

and many of these apps are bathed in "security" snake oil, making them impossible to run even on a non-rooted but de-googled Android phone

same for paying with your phone over NFC: e.g. it currently doesn't work for my bank with my phone, and I'm speculating that it's because the secure module's attestation is not whitelisted (the phone uses an "industry" ARM chip instead of a typical consumer one, so attestation-derived ids/keys can be "unexpected")

(Though where I live there is also a law mandating some kind of B2B online banking API, so theoretically a company could create an app which does work on unusual phones, or create a device like you mentioned.)


The "small devices" (Gemalto NL, Vasco US) are/were made in China. Those are probably going "away", at least as we know them.

To boil down the problem: to access western systems you need to keep feeding Apple, Google and Microsoft (AGM).

The alternatives are still non-existent, 15 years after the release of the smartphone (which in retrospect was probably the dumbest thing humans have made).

The dream device, a dual-boot tablet running Android/iOS and vanilla Linux, will never exist?

So you have to shoot for vanilla Linux (maybe ARM, probably RISC-V) and tell the banks that you are dropping support for AGM.

It's a "I'm not in here with you, you are in here with me" sort of moment.

This goes hand-in-hand with root certificates, taxes and the health/legal system at large; everything made by humans without consulting nature first.

I got popcorn, but I'm kinda worried too.


I use an offline chip+PIN reader for online banking and card purchases in Sweden ... but the bank could choose to stop supporting it whenever they want.

BankID is owned by the Swedish banks. It is required for some government- and healthcare sites as well, despite it being a commercial monopoly.


What bank is that?


BankID also supports smartcard authentication, and this was provided by my bank before I could even set up mobile BankID


We've come a long way when Google is peddling non-free software running on freedom hardware.


Commercially available RISC-V hardware is not more "free" than other types of CPUs, in any meaningful way.

The openness of the RISC-V specification is great for chip designers, manufacturers, and researchers. But it has basically no effect on end users, except indirectly (i.e. hopefully the chips are cheaper, because there's more competition and no license fees).


Not into open hardware myself, but I recently bumped into this FPGA-based approach, the MNT RKX7 Open Hardware FPGA CPU Module [0], which is a good example of how to experiment with a RISC-V design before spending the money to produce an actual chip.

[0] https://mntre.com/media/reform_md/2022-09-29-rkx7-showcase.h...


You could also say that open source has basically no (direct) effect on end users, because the majority of end users are not coders, and even those that are rarely look at the code of the apps they use.

The indirect effects are the point.


What are the indirect effects? You realize the least troubling part of making Android run on some proprietary hardware was the ARM processor? That part is perfectly fine!

It's the graphics card, the display hardware, the camera, the video encoding/decoding: these are all proprietary, undocumented blobs of IP with no upstream support. Your RISC-V Android phone is still going to have all of those, so it makes exactly zero difference.


Wouldn't the main indirect effect be that it's easier for new companies to enter the smartphone space due to reduced costs from the ISA being open? I'd expect that to take a while, but wouldn't that be the main benefit?


The cost of developing a competitive CPU is high, and then you have to package the whole thing into a SoC and optimize for power usage, support it for years, etc. The ISA is not the limiting factor there.


As another commenter already pointed out, open source tends to build on open source, so I'd expect more and more complex chips to be built on the platform over time, in the open. Maybe there's even a future world where a viable open source smartphone can exist.


Actual designs of chips you'll be able to buy will not be open source, with a couple of irrelevant exceptions from a foundation or an educational institution or something. It wouldn't do you any good anyway: the feedback loop needed to get open source's gradual-but-exponential improvement engine working is too long and too expensive for hobbyists.

It could possibly work with a community of companies, but my imagination stops there; I can't see how MBAs could ever agree to give away differentiators when fixed costs are so high, since it'd destroy margins.


>The cost of developing a competitive CPU is high

It can be dodged by either licensing one of the hundreds of commercial core designs available, or by using one of the several available open source RISC-V implementations.

Other competitive ISAs also have a few open source implementations (albeit nowhere near as many), but there's no dodging the ISA royalty fee with those.


How many of those open source implementations have competitive performance?


For the majority of open source projects, the indirect effects are the potential for more eyes, more bug fixes, and more knowledge spreading.

RISC-V CPUs don't have any of that, as there's no requirement that the actual implementation be open source, only the ISA. And it's not like you can change it: you can't buy a RISC-V CPU and fiddle with the design, and you don't get the source nor the toolchain. All you get are the man pages, and guess what? You get those with x86 and ARM, too!


> RISC-V CPUs don't have any of that as there's no requirement that actual implementation be open source, only the ISA.

There's a growing list of RISC-V cores that are themselves open source (that's how the RISC-V ecosystem got started, btw). And unlike in the x86 or ARM world, no one would hassle you over licensing or royalties if you took those designs to a foundry & mass-produced them.

Now as for say, GPUs to stick onto those, that is another matter.


The indirect effects of open source software are really strong though. It only takes one coder to remove an anti-feature that the devs decided to include, and publish the result.


The indirect effects of proprietary extensions, which means your software isn't portable anymore?


The blog post is by the open source office at Google, not specifically the Android team.

And so far it's entirely about AOSP and getting that running.

GApps (the non-free part) isn't mentioned at all. And like someone else said, there are use cases besides phones that use Android. And some people run de-googled Android on phones; it's just degraded :/

- googler, but far from anything Android


Freedom hardware?


Assuming it means US-market-first hardware (as opposed to EU or China)? This is my first time seeing such a term too


More likely, it alludes to RISC-V being a royalty-free open standard, unlike ARM. As teraflop said, though, in practice this doesn't mean much: chip designers will throw in proprietary IP just for patentability and differentiation.


RISC-V isn't freedom hardware, there's no equivalent to copyleft.


Uncopylefted software can still be free in every sense.


Free to proprietarize it and free to lock it down and free to spy on users. So much freedom!


They are peddling closed software that spies on you on closed hardware.



