Hacker News
Android 13 virtualization lets Pixel 6 run Windows 11, Linux distributions (cnx-software.com)
471 points by todsacerdoti on Feb 14, 2022 | 259 comments



I've also uploaded videos of Windows 11 in action:

Booting, logging in, simple usage: https://twitter.com/kdrag0n/status/1493088558676017152

Playing Doom (via x86 emulation): https://twitter.com/kdrag0n/status/1493089098944237568

And Linux:

Booting various distros: https://twitter.com/kdrag0n/status/1492832966640222210

Compiling Linux 5.17-rc3 allnoconfig for arm64, on Arch: https://twitter.com/kdrag0n/status/1492833078410047488


I was wondering why Windows is not on more ARM devices, since Windows on ARM exists and Windows has been manually ported to so many ARM devices now (the Raspberry Pi, for example). Turns out Qualcomm has an exclusivity deal for Windows on ARM licenses (which might soon expire).

https://www.theverge.com/2021/11/23/22798231/microsoft-qualc...


You might want to check out the Renegade Project: https://github.com/edk2-porting


Does Doom have its own ISO Committee yet?

Because I swear "Does it run Doom?" is becoming a requirement checkbox for any new project.


Be careful. Down this path lies "the ISO is asking us to formally specify the mechanics of DOOM" and then the only resulting end game of that is "we got DOOM running on the ISO committee!"

Naturally I agree with you though. (Because we need DOOM running on more platforms~)

And I'd suggest Bad Apple also needs similar treatment.


If it's capable of representing a two-dimensional array where each element can have at least 2 values, we will get Bad Apple playing on it.


Sound output would be a bonus.


I've seen plenty of videos where people just dubbed the music over it, but yeah, bonus points if you can get it running on whatever beeper you've got. The best arrangement I've heard was for the Sound Blaster's OPL3 chip (though that's cheating a little bit, since you get 18-note poly and stereo, which is enough to create a decent soundscape): https://www.youtube.com/watch?v=2lEPH6Y3Luo


Then you can't implement Doom without paying a few grand for a spec.


There is a popular subreddit for it: https://www.reddit.com/r/itrunsdoom/


Sure, the standard is called ISO 666.


Thanks. Have you written up a long-form post on how you went about doing this?

I am also excited to see what booting multiple OSes means for the ecosystem around phh's (treble) builds, too.


Possibly, I might write a post and/or release tools to do it in the future.


Partly joke, partly true... the question that really matters for every AOSP developer out there: how much time did it take to "make clean/make dist"?

I'm still working with Android 11 and compile times are driving me insane. The ritual to compile, pack and flash super.img into the device is absurd.

Do you know if there is any improvement on that side?


> The ritual to compile, pack and flash super.img into the device is absurd.

I typically only do a full flash for the first build after a sync. Afterwards I just build the pieces I'm changing and use `adb sync` to push them to the device, skipping both the step that packs the image files and the flash. The `sync` target will build just the pieces needed for an `adb sync` if you don't know exactly what you need to build; I typically use it so I don't have to even think about which pieces I'm changing when rebuilding.

So typical flow goes something like:

```
// Rebase is only needed if I have existing local changes
> repo sync -j12 && repo rebase

// I don't actually use flashall, we have a tool internally that also handles bootloader upgrades, etc.
> m -j && fastboot flashall

// Hack hack hack... then:
> m -j sync && syncrestart
```

Where `syncrestart` is an alias for:

```
syncrestart () {
    adb remount && adb shell stop && sleep 3 && adb sync && adb shell start
}
```

Incremental compiles while working on native code are ~10 seconds with this method. Working on framework Java code can be a couple minutes still because of the need to run metalava.


This is a really nice tip.

AFAIK, sync works on Linux only since it needs $ANDROID_PRODUCT_OUT. The problem is that I develop on a Windows machine (vscode with the ssh extension) and my source code is on remote Linux machines (on premises) dedicated to building AOSP. Since I build for at least another 5 platforms, my work PC cannot cope with the current (and future) workload/space, so I asked to move all the source to dedicated hardware for building. Perhaps I can do it with ADB over Wi-Fi...

I always thought sync worked with frameworks or packages, but since you mentioned "native" I guess it will also sync vendor stuff?


New versions of Android aren't open-source until their stable release, so I don't know. I've been running these VMs on the stock ROM.

I don't feel like incremental AOSP builds are that slow, and I don't think it's changed much from Android 11 to 12. It's highly dependent on good hardware though, and it probably also helps that I flash individual partition images instead of building dist or target-files.


Dude, this is awesome! Huge kudos to you!


Now that Windows 11 has 32- and 64-bit x86 emulation this has the potential to do some interesting things in the long tail of the market.

I honestly wonder if there's a monetary opportunity here?

(This used to read a little differently (https://i.imgur.com/yp95XxR.png - thought it would be funny) but it quickly became apparent some editing was in order. This comment will likely remain stuck at the bottom of this subthread... woops.)


Unexpected update: the parent comment has been downvoted further since being edited. I am officially lost.


One of the primary use cases for VMs in Android seems to be a desire to replace the Trusted Execution Environments (TEEs) permitted by things like TrustZone. There have been several presentations about this by Google:

- LPC 2020: https://www.youtube.com/watch?v=54q6RzS9BpQ&t=10862s and https://lpc.events/event/7/contributions/780/

- KVM Forum 2020: https://mirrors.edge.kernel.org/pub/linux/kernel/people/will...

From the KVM Forum presentation: "We need a way to de-privilege third party code and provide a portable environment in which to isolate services from each other and also from the rest of Android."

From the LPC presentation:

"What do we need? We need a hypervisor that is:

1. open source

2. easy to ship and update

3. supports guest memory protection

4. trustworthy

KVM as part of GKI is a very good fit with the right extensions."


I think the development of mobile OSes is heading in a completely wrong direction. The "curation" effect of proprietary stores is only good for censorship, and mobile OSes have the most aggressive applications that border on malware. At least you don't have the browser toolbars classical systems were infected with?

For productivity I don't want iron curtains between my applications or between application and system. Hell, it would actually be nice to be able to directly interface with the hardware without jumping through all the hoops.

We have a deeply flawed security model for mobile OSes that relies on dogma that won't lead to additional safety. A key safety issue is data exfiltration, and by that standard mobile devices are vastly more dangerous than the average workstation. That comes with the nature of the device to a significant degree, but I think mobile OSes have ridden off a cliff somewhere.

And now they want to push apps into a VM? Why not just buy a device per app at this point...


Anybody making money off of phone apps should not be trusted; they can and will take every piece of data they can to figure out how to monetize it. I want and need phone app security to protect me from the companies that want me as a product to be sold.


I think where they are going with that is that this sounds suspiciously like a "people problem" and not a technical one, though the attempt here is to solve it via technology. For some reason desktops let software do whatever and they don't seem to have near as much an issue as mobile OSes.

IMHO, it's the fact that there's (basically, by default) a single market driven by (basically, by default) a single search engine to locate software. The desktop market is largely NOT driven by a single market or search engine. With the "single market", if a malicious actor can game the search engine for even a day, they can install on (potentially) millions of devices. However, with the ol' distribution model, it's much harder to game.


> For some reason desktops let software do whatever and they don’t seem to have near as much an issue as mobile OSes.

I think this may be a case of rose-colored glasses. How many PCs were and are a complete mess because Windows both is a free-for-all and is (or was) the largest attack target? Sure, we know how to keep our desktop systems sane, but most people don't.


Then you also need to look at the average person's phone. If you activate any network connection, it supplies the contents of your latest stool sample and sends it to at least 5 undisclosed apps. The data exfiltration is not comparable, and the exposure of personal information also extends to the contacts this person has in their phone. It is a much more hostile environment and not really clean at all.


There are reasons to be suspicious, but it's also important to remember that mobile phones see much deeper penetration. About 84% of the world's population has a smartphone. Annually, ~1.6 billion phones are shipped vs 275 million PCs.

With scale comes new challenges and attack vectors.


> Annually, ~1.6 billion phones are shipped vs 275 million PCs.

This is most likely because people do not have to buy a PC every 3 years. Phones are treated as disposable items, whereas with a desktop, you can open it, clean it, change parts etc. I can even change the CPU in my laptop if I wanted to.


That’s one theory but one that doesn’t hold up by virtue of the fact that 83% of all humanity has a phone. The penetration for laptops/desktops never reached that high.

We can assume both have reached steady state, so those shipments are mostly replacement rate (not quite, but almost).

Cell phones are more popular for obvious reasons: full access to internet at all times, TCO is drastically smaller across the board (upfront cheaper, cheaper internet, etc), and entire structures are built around them (many critical services go app first if not even app only).

I think you’re applying a very western bias in your analysis.


Most people also don't buy a smartphone every three years, they use them until they either die, or get stolen.


Both of which happen much more often than with laptops, at least in part because they are treated as being more disposable, but maybe more directly because they are smaller.


Which is why the desktop market has tons of garbage anti-virus and PC cleaning software, trying to fix the cowboy land it turned into.


This sounds awfully similar to the Microsoft anti-open-source textbook: that no one can be trusted besides them, and you always need their platform and services in between. The only difference is that now people are shilling Apple's fake corpo security propaganda talking points, and sadly they are often developers themselves.


I've definitely been surprised by the sentiment here lately that iOS devices are more secure. I don't know enough about iOS to evaluate that claim, but my impression of phone security is to basically assume it's been unlocked by someone, even if most apps are successfully sandboxed.


Google sells advertisements, Apple sells... access to its store involving a hefty 30% tax.

Google's interests are aligned with the surveillance capitalists; Apple is either competitive (see Facebook) or ambivalent.

Apple can differentiate itself by trying to sell the security angle and hurting the surveillance of people in their app ecosystem doesn't hurt them. Comparatively, I would say Apple has a nontrivial advantage in security, though I wouldn't make a broad statement that iOS is "secure".


This has nothing to do with apps. It's instead about things like the biometrics or DRM handling code. The stuff that regularly runs on its own dedicated CPU or in specialized sandboxes like ARM's TrustZone. This is about moving that to instead be in a VM on the "primary" CPU, implemented in open source Linux kernel code instead of who-knows-what ARM microcode.

Or to compare against an ecosystem that has more love around here, imagine if everything that Apple runs on the T2 was instead run in a VM with an equivalent level of security. That's the goal.


Walled gardens have many problems, but one of the reasons they exist is that they offer some security benefits. Virtualization allows for stronger containment of apps and makes walled gardens less necessary.


Indeed, that's one of Google's official use cases: https://twitter.com/salt___doll/status/1492865396692586497

Recently, they've also focused on using VMs to isolate parts of the system and apps: https://twitter.com/salt___doll/status/1492872311652765700

Mishaal Rahman has written a detailed post about virtualization on Android: https://blog.esper.io/android-dessert-bites-5-virtualization...


> 4. trustworthy

What does this actually mean? Trustworthy under which circumstances, in what security model?

I'm terrified it means "unbreakable DRM in a way that is harder still for consumers to use or bypass" rather than "the code is not vulnerable to side-channel malware attacks". A lot of this could either be brilliant -- applications can't look at each other at all, unless one of a few trusted utilities such as screen-readers or keyboards, for example -- or utterly horrific from a user point of view. "Attestation" is mentioned in one of the end goals of that KVM Forum pdf, but it's unclear whether or not they mean a Qubes-like OS with the user as the hypervisor, or the user completely and utterly locked out of being the hypervisor, ever. One of these I think could be quite interesting -- especially without hardware attestation. One I think would be awful.

Part of the reason I (thoroughly break) Android's security model and run a rooted image is to have knowledge that I have control over everything in my device. It's important to me and I worry that these efforts will ultimately take that away from me.


Modern phones already have DRM running in TrustZone, even if you flash a custom Android build. pKVM is actually better in that regard as the DRM will be moved to a protected VM under Android's control, so you could choose to disable it on a custom OS.


Many of the comments are from the point of view of treating the phone as a desktop to use directly, which is not a very nice experience.

I want to treat the "old" phone as a server: 8 cores, 6 GB RAM, UPS, plenty of (expandable) storage, low power consumption, small form factor, redundant network (4G/5G + Wi-Fi), plus a built-in screen and "keyboard" for times when you need to do debugging.

Do others have experience with this kind of setup? What services are you running?


I did this a while back with a Pixel 2. I decided it wasn't worth the trouble.

Phones are not designed for continuous power draw (and consequent heat dissipation) - constant-use power dissipation limits are very low. The performance dips dramatically, and the constant high temperature kills the onboard flash prematurely. The same thing applies to the radios - wifi and cellular. Sustained data transfer on either of those interfaces causes issues - dropouts/disconnects/thermal reboots.

This is in addition to the fact that phones just don't have great I/O.


The heat isn't too hard to deal with, there are solutions (including the weird ones like water cooling cases https://www.ebay.com/itm/175003613243 ). Simply adding airflow tends to get you pretty far by itself, even. Device test labs are a common-enough thing and handle running phones like servers at scale without too much difficulty.

But there are more nasty lurking problems in this area, like the fact that phones aren't designed to be continuously powered. They will naively try to keep the battery at or near 100% charge, which will destroy it relatively quickly, and not uncommonly in the "it's bulging and increasingly likely to burst into flames" variety. 2 years is considered a "decent run" for things like device test labs as a result (see eg the FAQ on https://github.com/DeviceFarmer/stf )


It seems that putting a timer on the power source (set to say 30 minutes on, 30 minutes off if the load justified it) would be a fairly simple way to significantly extend the battery life. I've been thinking of putting together something like that for myself to charge a bank of test phones for 30 minutes twice a day.


I'm not sure it would. What would extend it is if the phone could just pull from the wall directly & not the battery, so the only charging / discharging the battery gets is from being unused & slowly re-topped back to 80-90%.

If you do your timer idea it seems like either you're going to be hitting it in the 80%+ recharge over & over & over again, which doesn't seem meaningfully different from leaving it plugged in? Or you'll be in the sub-80% quick-charge zone, which will destroy the battery even quicker.


Maybe a solution to the problem would be a case that would provide cooling and would hide cables and adapters. The result could still be somewhat smaller than a NUC. Then you would only replace the phone once in a while and would have to cut a new screen cover. For better cooling depending on the phone you could remove the back cover.


Yes, and that can improve the situation a bit, but it quickly becomes a tail-wagging-the-dog situation. A heatsink requires very good thermal contact with the main heat-dissipating elements - the SoC, the DRAM, the flash, and the radios - to be effective. None of those are easily accessible on a phone unless you are willing to disassemble it entirely. And even then the contact with the heat sink will be poor due to the package and the other components.

An external case can help to a very limited extent. I tried attaching a large PC heatsink, and it did help, but not to the extent that made the "server" any good. I just switched it out for a $50 RPi 4 and it's vastly better in pretty much every way.


Which doesn't need any cooling. But to be fair I expect the screen to be worse than any component. Maybe not as much as the CPU at times which has a higher frequency than it should.


"Gaming" phones do this, actually. Most of them though have poor software support compared to "thin" flagship models with far more problematic thermals.


The Pixel 2 era 820s-830s especially seemed fairly throttle heavy. There's better APIs for sustained power modes now, although I'm not sure VM would use them.


> Phones are not designed for continuous power draw (and consequent heat dissipation) - constant-use power dissipation limits are very low.

Maybe someone here can help me understand how power draw works on Android.

I have a ZTE Z959, a Cricket device. I use the phone to take photos with Open Camera every few seconds and stitch them together to make a video with ffmpeg. (Someone told me that YouTube has no practical limit on storage and I wanted to prove that they will cap me at some point but that is a topic for another day. Basically, the tl;dr here is for my casual use, YouTube has unlimited storage).

But I digress. The point is at some point the phone's battery started swelling up, which became a fire hazard. I wanted to power the phone without the battery. I have a Thinkpad 65W USB type-C charger. The first challenge was easy to work around. The phone just goes into a boot loop, but if I add the battery and plug in the charger, the phone boots up ok. After the phone boots, I can remove the battery and the phone stays on (provided I don't do things like use the flash; my guess is the flash needs more power than my charger can provide).

Can someone shed more light into this process? How does all of this work? Does all of this mean my phone is technically running from battery even when it is connected to the wall?


Open phone case and put phone board into oil tank :)


I set up an old phone to run a torrent client (qbittorrent). It was a Debian 8 chroot [1] that I could ssh into.

In the end, it really wasn't worth the hassle. Networking ports would go unresponsive when the mobile CPU on the phone would go into "deep sleep". The battery began expanding after a few months.

CPU was more than capable to do the actual processing work. I/O with the SD card was passable.

I really wish someone would invent some sort of generic battery adapter that could transform any device requiring a battery into something that can run on direct power. I really adore that old Sony Ericsson Xperia android device.

[1] https://wiki.debian.org/ChrootOnAndroid


To this day, I don't understand why phones don't work on direct power without a battery. There must be some engineering problem...


It's possible to replace the battery with a DC/DC adapter; I did it with a Samsung tablet. It does require a "beefy" power supply at an odd voltage (4.2V or so) since batteries can provide more current than most power supplies. It also required a resistor to fool the tablet into thinking the battery was not drained.


Please manufacture at commercial scale. Consumers like me would be greatly indebted : )


Please sandos


It might be because of brief spikes in power draw that a battery usually "smoothes out" but USB power can't handle. For the same reason, some laptops cannot run without a battery (most can).


You're likely right. Some stable power feeder does need to exist, but could it skip the chemical energy storage parts?

The charge-discharge cycles needlessly put a lifespan on that component when I'd like to have it left on 24x7.

Same for always-powered laptops. With WFH, for 1+ year I've used my Thinkpad as a desktop, not bothering to unplug when fully charged, and now the battery doesn't hold charge for 10+ mins.


The answer is that phones really need a user setting to enable a charge limit.

About 4 months after COVID hit and I started to WFH, the battery in my Pixel 3 started to swell because I had allowed it to basically live on the charger all day, constantly at 100% battery. Also, by that point, a 100% battery would only give me about 2 hours of screen time.

A few months ago, I got a Pixel 6 Pro, and I'm basically just charging it from a weak 500 mA USB port a couple hours each day, keeping the battery between 50-80%. I'd really like to set an 80% charge limit and just forget about it.


AccuBattery gives an audible alert when a configurable charge level is reached (default is 80%). Not as nice as having the phone stop charging by itself, but at least I'm not frying my battery.


I don't think that's normal. I have an old cheap Lenovo from 2012 that sees 4-12 hours of active use daily. It's almost always plugged in without any limits on the battery charge (which as a result is always at 100%). The battery still has around ¾ of the original capacity (it ran for ~4 hrs of light use back in 2012, and gives around 3 hours now).


I don't think it can. UPS's store energy too, and stop holding charge after a while. It would be nice to bring back easily replaceable batteries, since they're basically a consumable these days.


> I don't think it can. UPS's store energy too, and stop holding charge after a while. It would be nice to bring back easily replaceable batteries, since they're basically a consumable these days.

I don't know anything about how much "oomph" a dashcam needs as opposed to a smartphone or an uninterruptible power supply, but I know one of the bullet points in the marketing of the dashcam on my car was that it uses a small capacitor as opposed to a battery, and therefore it is safer to use in a hot car.


Yeah, but how much can that capacitor do?

I had a dash cam years ago that had a capacitor, and all it really was for was to make sure the camera could finish a write operation and close the video file gracefully to avoid corrupting the last segment. Cameras with batteries often can support recording even while the car is off without draining the car's battery.

IMO, the solution to the battery-in-a-hot-car problem is to have the battery be an in-line part of the power cable that can be placed in the glove box, outside the direct sunlight, rather than building it into the camera itself.


I think some Thinkpads (P51 at least) can be set to not exceed 80ish%, but I don't know if that can be done from Linux.
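
(For what it's worth, reasonably recent kernels expose ThinkPad charge thresholds through sysfs via the thinkpad_acpi driver, so a sketch like the following should work from Linux; the battery name BAT0 and the availability of these attributes depend on the model and kernel version:)

```
# Stop charging once the battery reaches 80%
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold

# Optionally, don't resume charging until it drops below 75%
echo 75 | sudo tee /sys/class/power_supply/BAT0/charge_control_start_threshold
```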


Yup, this is it, IMO.

On my OG Motorola Droid, I once tethered my laptop to it and downloaded a torrent.

In 15 minutes, I drained 25% of the battery. IIRC, it had a ~2,000 mAh battery. That meant I was pulling 2 amps from it. This was back in 2010, when most phone chargers and USB ports were still only 500 mA.

If I had relied on USB power, it wouldn't have been able to power it.

Oh, and yes, the phone got incredibly hot during this time. I thought I was going to burn my hand when I picked it up.


Probably some phones do work on direct power without battery.

But anyways, in the general case the user wants the battery to charge when they connect their phone. Not allowing the phone to run without a battery will show more clearly that there is an issue with battery connection or the battery itself than if the phone ran when connected to power without a battery.


> Not allowing the phone to run without a battery will show more clearly that there is an issue with battery connection

This is a non-issue since battery status (or lack thereof) is clearly shown in the UI. A more likely issue is that the battery is commonly relied on to deal with peaks in power draw, beyond what can be supplied via the USB port. This can even be an issue in many laptops.


I recall some older android devices with replaceable batteries did. If your battery was almost dead, you could plug it into a charger and swap out the battery for a fresh one.


> If your battery was almost dead, you could plug it into a charger and swap out the battery for a fresh one.

Yes, it does the same thing on my old ZTE z959. However, the phone boot loops if I try to boot it without a battery.


The correct answer is simple: there is no profit in it and therefore no reason to do it. Any work or addition to the BOM of a phone for this purpose is therefore a waste.


I had a small Ruby (Sinatra) website running on a Linux VM on a spare Android phone (Sony Xperia X Compact). It turns out the CPU is quite capable. It actually compiles Nginx faster than a low-end Google Cloud VM.

It sounds like nothing, but looking at Nginx logs scrolling on a tiny phone screen is so unbelievable.


Symbian did it first, there was an Apache version with mod_python available from Nokia themselves.


PostmarketOS gets you pretty close to this: boot Alpine Linux, with more or less everything working. Even if the screen and/or GPU acceleration do not work on that phone, it would be plenty to replace a Raspberry Pi.

I've been thinking about this quite seriously, and for now I see a few pain points:

- Some phones (looking at my daily driver, a Galaxy S4) do seem to allow disconnecting the battery from the charging circuit, which could lead to issues in the long run

- Ideally, you'd want to connect a USB hub over USB OTG (wired Ethernet, USB storage, real keyboard). I have yet to try charging the phone at the same time, although I think some cables enable this.


This is basically my vision of how to bring self-hosting to the masses.

Upcycle an old Android phone. Install apps for Nextcloud, Jellyfin, etc. Do a quick OAuth2 flow with your domain name provider to tunnel a subdomain directly to the app, and you have an end-to-end encrypted private cloud.

For this to work we need:

* Simpler domain name providers targeted at consumers instead of IT professionals. You shouldn't need to understand DNS records to use a domain.

* An open protocol for setting up tunnels[0].

* Nextcloud et al need to implement the protocol on their end. For open source projects 3rd parties can make wrappers.

[0]: https://forum.indiebits.io/t/toward-an-open-tunneling-protoc...


I ran an ODROID ( https://www.hardkernel.com/ ) as my home server some years back. It worked nicely for what it was, but in the end the lack of good storage options (I ended up attaching USB hard drives) was what doomed it. For what it's worth, a RasPi is exactly what you're describing, minus the screen/keyboard (but with much better cooling, which is what would be the bottleneck otherwise).


ODROID N2s can use eMMC storage, which has worked out pretty well in my experience. For larger data sets I use a network filesystem anyway.


For that you shouldn't need virtualization though. With Termux you can run a lot of programs. There are Termux packages for a few pieces of server software, and you should be able to compile some more yourself.

I remember being able to run a rootless Debian chroot (using a utility called proot) on an Android 9 device, but things might have changed in the meantime and I don't know if it's still straightforward.
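
(For reference, a minimal sketch of that approach using Termux's proot-distro package, assuming it is still packaged and behaves the same way on current Android versions:)

```
# Inside the Termux app, no root required
pkg update && pkg install proot-distro

# Download and unpack a Debian rootfs
proot-distro install debian

# Enter the Debian environment; apt, gcc, sshd, etc. work from here
proot-distro login debian
```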

If you're looking for serious performance, though, it may not be practical.


The permission policy changes introduced by Android Q broke Termux's ability to peacefully coexist with the Play Store. [0]

[0] https://github.com/termux/termux-app/issues/2155


UserLAnd is still getting updates on the Play Store. It can run multiple distros without requiring a rooted device.

note: I am the developer of UserLAnd.


It would be best to create a Kubernetes cluster of phones (reliability, scalability, ...). To run containers on Android you currently need to build a custom kernel (1); I hope this feature removes that need.

(1) https://gist.github.com/FreddieOliveira/efe850df7ff3951cb62d...
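
(As a rough first check of whether a given phone kernel could host containers at all, something like the following works from Termux or a root shell, assuming the kernel was built with CONFIG_IKCONFIG_PROC so it exposes its own config:)

```
# List the container-related options the running kernel was built with
zcat /proc/config.gz | grep -E 'CONFIG_(NAMESPACES|CGROUPS|OVERLAY_FS|VETH|BRIDGE)='
```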


In terms of performance, my expectation is that it will be comparable to the VMs/VPSes from cloud providers, where I/O is also limited; with plenty of RAM, for many workloads this will not be a big issue.


I/O is too bottlenecked to "serve" anything serious. It's the problem with most SoCs, including the Raspberry Pi.


What's "serious" here?

Even ignoring USB 3, most servers should work fine when capped to 100 or 200Mbps.


USB 3.1 over a USB port shouldn't be a bottleneck. You just need an adapter that suits the need.


What phones have USB 3? Almost all phones still do USB 2. It's not about the port (USB-C), it's what the host I/O is capable of.


The initial crop of USB-C adoption was mostly USB 2.0, but almost everyone has been USB 3+ for a long, long time now. All the Pixel phones except the 3a are USB 3 or higher, for example. And for comparison Samsung has been using 3.1 since the Galaxy S10e.

I just checked: my Pixel 5, for example, reports USB 3.2 (as in, that's what it's actually connecting as to a Linux host per bcdUSB).


What about actual transfer speeds? The signaling protocol gives the max speed. If something is I/O bottlenecked inside the phone it's still slow.


I use a phone as a wifi hotspot and file server with Termux (sshd, ftpd). Works pretty well.
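
(For anyone wanting to replicate this, a minimal sketch with Termux's OpenSSH package, assuming a reasonably recent Termux build; note that Termux's sshd listens on port 8022 rather than 22:)

```
# Inside Termux
pkg install openssh

# Set a password for the Termux user (used for SSH logins)
passwd

# Start the SSH daemon (listens on port 8022 by default)
sshd

# Then, from another machine on the same Wi-Fi network:
#   ssh -p 8022 <phone-ip-address>
```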


Plus built-in UPS


And a bunch of sensors. Things like GPS, accelerometer, microphone, and camera are pretty much standard even on inexpensive smartphones going back half a decade.


I'm looking forward to the future we have envisioned for a long time, where you carry your main computer around with you and simply drop it into a standardized i/o dock when you sit at a desk and it runs the big high res displays and accepts keyboard/mouse input, and you use it like a phone the rest of the time. (Basically, a phone that is also a laptop replacement - I know there are dockable phones now that can run a monitor and kb/mouse, but they're not really "laptop replacement" level.)

I could even imagine such a system having two different CPUs (or, more likely, different cpu performance cores in a single package) that power up/down based on wired power availability, basically just automatically getting many times faster when connected to power and not having to conserve juice on battery. Storage and memory are already fairly low power these days, and tiny. Mobile (i.e. handheld) GPUs are now powerful/efficient enough to run high-res handheld displays with all day battery life, which while probably not 4k gaming level, are more than enough to run a multimonitor desktop setup when not gaming, especially if you make the quite safe assumption that they'll have wall power to crank up the GPU whenever asked to run external displays.

I'm really excited about mobile computing over the next ten years. The Nintendo Switch and M1 iPad Pro are little glimpses into this future. I look forward to replacing the dozen computers in my lab with a single handheld device that can simultaneously virtualize many of them and conveniently multiplex several big displays between them, and come with me in my pocket when I leave.


My experience with a Surface Book (which I use in at least 3 different ways - as a tablet, as a laptop, and connected to a monitor/keyboard/etc. on my desk via Surface Dock) makes me think that's coming eventually, but it'll take longer than you think:

- External GPUs are still pretty bad

- Tablets with cellular connectivity are if anything less well supported than before. I think this is mainly because carriers aren't really supporting the idea of one person/account having multiple "phones". Smartwatches have the same problem - I remember going to a Samsung store and they were showing off a smartwatch that had its own SIM card and cellular modem, but the staff couldn't tell me what kind of phone contract you had to have to make it work

- Small devices still mean a noticeable compromise in power. I've tried using Samsung Dex as well and it's... ok, but appreciably worse than even a netbook, even if on paper the processor/memory/etc. ought to be catching up. Laptop as primary computer only really took off once you genuinely couldn't notice any performance disadvantage compared to a desktop, at least in my friend group; I think it'll be the same with tablet and (eventually) phone/watch/ring form factors


> Smartwatches have the same problem - I remember going to a Samsung store and they were showing off a smartwatch that had its own SIM card and cellular modem, but the staff couldn't tell me what kind of phone contract you had to have to make it work

I might be missing something about cultural differences but... why would you expect an electronics store to tell you what kind of plans your telco has?

Here in Switzerland pretty much every telco offers an additional SIM for tablets and watches, but you need to talk to the mobile operator not the electronics store ^^


I'd expect the person selling me a product to be able to tell me what I need to use the product. Like if I was buying a car that runs off LPG or something, I'd expect the salesperson to be able to tell me where I can buy that fuel and roughly how much it costs, even if the car company doesn't actually sell it themselves.


Many of those telco plans in Switzerland that just add a data eSIM don’t actually work with something like an Apple Watch, which is why I assume the GP asked at the store. At least the Apple Watch requires a mobile network to offer a special shared-number add-on and support Apple’s GUI for provisioning this plan with an eSIM on the watch. Unfortunately, this usually means sticking with more expensive first-party networks rather than MVNOs and paying whatever fees they require.

It’s quite possible the Samsung watches don’t rely on this system, but I’m not sure and may have also assumed it was worth asking like the GP.


I guess the OP was from the US. Most US providers still seem to live in the mental world of selling SIM-locked phones; they offer no smooth experience for customers bringing their own device.

Prices for just a SIM without buying a phone seem to be pretty high.

Here selling SIM-locked phones is not even legal. In most European countries it is legal, but still less common than in the US. So operators have to serve customers not buying a device from them on competitive terms.


> Most US providers still seem to live in the mental world of selling SIM-locked phones; they offer no smooth experience for customers bringing their own device.

This was true 5-10 years ago. Nowadays, nearly every mobile network regularly has offers to entice users to bring their device from a different network in exchange for a cheaper monthly plan or similar.


Most smart watches don’t have a physical SIM card slot since that would be way too big. The software used to load an eSIM onto the watch often has a whitelist for carriers.

Another issue is that both the watch and the phone need to share a phone number if you want to leave your phone at home and answer calls and receive SMS on the watch. This isn’t standardized and only works with each watch’s supported carriers.


The solution to the SMS problem is to use VOIP. I personally very much like JMP.chat.


That would be ideal, but VOIP numbers are blocked in many situations. Most 2fa providers and bank transaction verification comes to mind.


> Most 2fa

IME: banks and such are fine with it. Coinbase definitely works.


> Tablets with cellular connectivity are if anything less well supported than before.

Not my experience. I'm sure it depends a bit on your carrier, but on T-Mobile it was easy to order a data sim for a tablet from their website. Setting up a smart watch was even easier with eSIM, it just sets up everything for you when you pair with your phone.


You seem to be talking about the US. T-Mobile seems to be the only operator not having custom firmware for modems. Maybe a small remainder of the European influence in the company. In Europe national and international interoperability between any SIM and device (1) has been a thing since 1992 when GSM came. In the US operator business models were based on technical incompatibility and complete lock-in for decades.

(1) SIM locks exist in some countries. But they are commercial practices, not technical incompatibility.


I don’t see how the history of GSM is very pertinent to the present scenario. At least in the last several years I’ve used phones from Europe or Asia or other networks in North America and they all just work with first-party or MVNO SIM cards in The US, from T-Mobile, AT&T, Tracfone, Mint, and others, assuming the device supports the appropriate bands (which varies a bit by region, especially for LTE).


What's the problem with eGPUs? The most problematic thing I see is that they only work with intel CPUs for now and that all enclosures are tb3.

Also Samsung is really good at adding features and then locking them down in weird ways, breaking basic features or just providing horrible user experience.


> What's the problem with eGPUs? The most problematic thing I see is that they only work with intel CPUs for now and that all enclosures are tb3.

Maybe they've gotten better, but the last time I looked at reviews they were generally flaky (driver issues or incompatibilities with particular games) and had performance issues, to the point that a lower-model built-in card would often outperform a higher-model in an external enclosure. I'm sure they're coming eventually, but I haven't heard of anyone having an actually polished experience with one in day-to-day use yet.


I’ve had this for a year and a half with a PinePhone. I just don’t use the docking ability that much because:

1) Forwarding X11 to a desktop is easier (and all the apps run on X anyway)

2) Since it’s just Linux all the apps can run on a desktop just like the phone, all my stuff is synced with git or rsync, and if there’s something I need outside the usual folders it’s just an scp away. There’s absolutely no reason to use one particular machine over the other other than form factor and computing resources.

It just makes a lot of sense to have a decent desktop permanently plugged in that you can use without any hassle. Devices like that will kill laptops though, I know it did for me. Of course I don’t think this will happen with Android. Google is way too greedy and they’ll find a way to make it unusable.


My laptop has more than 20x the memory of the PinePhone's maximum, and can run 3x external 4K displays, so the PinePhone would not even remotely approximate laptop-replacement level for me.

The hardware in mobile phones needs to improve substantially for the scenario I described to be practical; there's no hardware that supports it today, but the software in TFA is a step in that direction.


This use case would be possible if Google didn't explicitly disable USB-C video output: https://www.androidpolice.com/2019/11/03/pixel-4-has-usb-vid...

(As far as I'm aware, they still do this on Pixel 6.)

I already use my laptop as a desktop when I'm at home by connecting it to a USB-C hub, which in turn connects it to my monitor, keyboard, mouse, etc. I think the smartphone as a "single device which can be used for everything" is a cool concept and definitely possible considering how powerful modern smartphones are. The limitation is software.


This idea is cool, but I already have a seamless experience of moving to different devices through my day and having access to everything I need.

It is stuff like O365, Github, OneDrive, AWS etc that enable that. No plugging in, no reconfiguring devices. Moving between windows 11, android and debian everything I want is right there.

I can't see the advantage, yet, of trying to consolidate down to one device.


You wouldn't even need centralized services to do this properly. Just bring back the old "My Briefcase" UI for device syncing, now additionally backed by a git repository for history, "fork" and "merge" support.


Windows Phone used to have this and it was called Continuum. Although it couldn't actually run proper desktop apps, only a few Office apps made for it. The promo page for it is still up despite Windows Phone being dead for a few years now.

https://www.microsoft.com/en-us/windows/continuum


Except that you will not be able to own a "main computer". It will be rented. And it also will be nothing more than a slightly less stupid terminal connecting into a cloud which offers you to rent the software you want to use, because you can't actually install your own thanks to everything being locked down behind VMs preventing the execution of unsigned code.

Want to execute your own code? Fine! Buy a license!

That's what you're looking forward to.


There are companies which provide that freedom, e.g. the Purism Librem 5.


Too bad that it is almost impossible to actually purchase one.


Hopefully the CPUs will be available soon.


We already had that, e.g. Maru OS, Ubuntu Touch, Linux on DeX, whatever Windows called their version... It pretty much flopped.

So it is "the future" in the same way 3D TVs were: Much hype and then kinda neat, but not great.


In a perfect world, it would work great. A world where hotels and workplaces have available phone docking stations that allow travelling business people to plug their phones in, and their phones boot up an OS giving access to the full power of their personal computers on the provided monitors/keyboard/mouse.

That world doesn't exist. And it's not that much more expensive for the workplace to add the computer, and it's much more convenient to just carry our own ultraportable laptops so that we can work at coffee shops and on planes.

But perhaps there is a use case out there for parents with multiple kids who don't want to use low-powered or hand-me-down computers...


> have available phone docking stations

For that you want something that is ubiquitous anyway and works with everything. For Samsung that was HDMI and USB (kinda works, but too many cables) a few years ago, and nowadays it's just a single USB-C cable. Works with laptops, tablets and phones.

Counter question: Why aren't docking stations for laptops common in hotels/coffee shops/wherever? Or am I just not staying in the "right" hotels? To me it just seems that the demand is pretty tiny.


> Counter question: Why aren't docking stations for laptops common in hotels/coffee shops/wherever? Or am I just not staying in the "right" hotels? To me it just seems that the demand is pretty tiny.

Well, there is an attack called the "evil maid" attack. This attack can happen whenever your hardware is unattended for some period of time. A hotel where maids enter the room every day unattended is quite literally the scenario for this attack. Imagine the surface area for attack, where a dock gets modified, then an unsuspecting user plugs into the dock.

Not only is the demand tiny, but the liability is incredibly high for hotels.


And I even used a Motorola Atrix with the Lapdock - that was underpowered and disappointing too. Great idea for the time but a few years too early.

The hardware was nice though.


I feel you should get a Steam Deck then. It has almost everything you described you wanted in a pocket computer.

1.) Run windows/linux distro on the go.

2.) Can play graphically demanding titles in 720P.

3.) Has a dock which supports ethernet, Mouse/Keebs, HDMI out.

4.) Can have up to 4TB of internal SSD with tweaks.


Then I still have to carry 2 devices though. I am not very interested in gaming at this moment.


Isn’t this the Samsung Dex experience?


Probably yes, but we haven't had a decent product like that. Now if it can decently run a linux distro (with external display, etc) I might ditch my iPhone....


What's not decent about Samsung DeX?


It's also what Ubuntu Touch aimed to do. Not sure if that's a use case that UBports still tackles, though.


You'd also have to figure out cooling at that level as well. Hell, even my laptop can get pretty hot. It's definitely a good step in that direction though.


PinePhone does that.


Even cheap commercial phones are far more powerful though. I have a PinePhone because I want a Linux-native phone, but it's still not really usable as a phone; it works ok as a very low-powered Linux system though.


Apple needs to open up its newish virtualization framework on iPadOS. The M1 iPad Pros with Magic Keyboard are great, but basically useless for development work. I want to use mine as a separate, stripped down environment to learn new technologies in. The only way I have found is to use it as an SSH terminal.


> The M1 iPad Pros with Magic Keyboard are great, but basically useless for development work.

I think that's the point.

There's a nontrivial % of Mac Mini / MacBook / iMac sales entirely because of the need to have a Mac to publish anything, even PhoneGap/Cordova projects and Safari Extensions, to the Apple App Store.


I already have a MBP for that. I want something with no/less windows and less distractions so that I can learn in a stripped down environment. Also, an iPad Pro+Magic Keyboard costs as much or more than many/most MacBook Airs.


I don't think Apple is too afraid of cannibalizing the Mac with the iPad.

The iPad Pros are on par price-wise with the Mac after you get keyboard cases etc., and on iOS they also get a huge cut of every app sold since you have to get them from the App Store. They'd probably be thrilled if Macs were replaced with iPads. They can always jack up the price later.

They wouldn't have spent years neglecting the downfall of the Mac if they cared so much about that revenue.


I'm not sure why you'd want to use it as a development device when there is a not much heavier MacBook Air available with the same brain in it. I actually ran from September 2021 to January 2022 with just my iPad Pro as a computer for all personal tasks, which include programming, and decided that I was artificially limiting what I was doing for the sake of a minimalist ideal. iOS is just not the right tool for the job.

Each Apple device has a very nice overlapping niche and a lot of consistency between them but some devices are intentionally not designed to do some tasks. iOS is fine for non power user tasks and simple automation but nothing more. For 80% of what I do that is fine so I usually go to the iPad first always. But if I want to sit down and do full on keyboard based productivity it's the MBP every time.

The iPad Pro has a very special place in my heart though. It's the most reliable and efficient machine and with the Apple Pencil it's a game changer. I love to take it with me when I go out for a weekend and will sit in a hotel, do spreadsheet, organise tasks, do some drawing, watch some streams, casual messaging and emails and even video and photo editing. But not programming!


That's a circular explanation. The iPad isn't suitable for programming because it isn't suitable for programming. The limitation is completely arbitrary because Apple thinks they can tell users what they should use their devices for. It's "You're holding it wrong" applied to software.


Actually that’s not it. It’s not suitable because of the modality, window management and state and data management concepts which are all compromises required for high efficiency touch driven mobile devices.


1. People already use remote desktop software in order to interact with desktop OSs from the iPad.

2. The iPad Pro already has keyboard and mouse support

3. The iPad Pro is already powerful enough to run virtual machines via emulation, see UTM

4. The form factor is already proven by the success of the Microsoft Surface and its copies.

5. The virtualization APIs created by Apple already exist

This is just a matter of Apple having an enforced monopoly on app distribution and using that power to dictate what you should be able to use each device for.


1. Yes. Remotely

2. Sort of. It has good keyboard support and completely different mouse support to most platforms.

3. It probably isn’t within the thermal envelope specified and the storage available.

4. Surface is horrible so I’m not sure why that’s comparable.

5. Yes they do and are exposed by macOS only.

I agree with apple. One of the reason iPads are so damn good is that they put some constraints on them to stop people doing horrible things. Virtualisation is one of those horrible things.


1. It matters not that it is remote. Whether remotely or as a local VM, it proves that your claim that "It's not suitable because of the modality, window management and state and data management concepts which are all compromises required for high efficiency touch driven mobile devices." is FALSE.

2. A keyboard is a keyboard and a mouse is a mouse. Remotely or inside a VM they behave as you expect the remote/guest to behave.

3. Bullshit, the MacBook Air has the same thermal envelope

4. Horrible or not, it proves the form factor is viable and desired by people.

5. An arbitrary decision designed to protect market segmentation.

That is a rather emotional response. What did the horrible virtualization ever do to you?


1. It's pretty terrible doing it remotely as well. I ran off iOS for a whole three months, only for personal stuff. It's like wearing gloves when you don't have to.

2. Yes and no. It uses finger emulation on iOS. There is precision control if you need it but the UI is designed for fingers not pointers and switching between one and the other is jarring to say the least.

3. No it doesn't. I have one. The MBA has a much lower thermal resistance than the iPad Pro does and doesn't even get remotely as hot.

4. It proves it was sold to people, not that it is desirable for any particular task. You can't draw that conclusion without more data, which you have not presented.

5. Not at all.

As for virtualization it is a pretty bad solution for most problem domains. It adds overhead, inefficiency, latency. At that point it is illogical to use it for devices which require low overhead, efficiency and low latency i.e. most mobile devices out there. Taking the initial post into consideration, in what insane world does it even make sense to run a full windows stack on a mobile device when the only thing that matters is the applications?

It's an insane proposition really. I don't do it on any laptops either. Same set of compromises. It barely even makes sense in the cloud, where it adds cost and reduces performance. Containers are as far as virtualization should go at this point.


> Surface is horrible so I’m not sure why that’s comparable.

The Surface Go with type cover is an amazing janky device. It weighs almost nothing and has the CPU power to match, but I can toss it in my backpack and have a lightweight dev environment with me all the time.

It's great because it has no software constraints, despite all the hardware compromises. I'd ditch it in a second if the iPad could run full macOS.


Virtualization is the only way they can give users a Mac experience while still being a walled garden with the associated security.

So maybe they will never give us the Mac experience on the iPad (including the shell, forking processes, compiling any program including JIT, etc.). But if they do, it is very likely it will be some kind of virtualization that contains the associated risks of those freedoms.


>I agree with apple. One of the reason iPads are so damn good is that they put some constraints on them to stop people doing horrible things. Virtualisation is one of those horrible things.

The iPhone was originally planned not to have any apps. I guess that would've been even better (and less horrible)?


No, there's a happy medium in the middle. They took the approach of starting with a bowl and adding holes as required rather than starting with a sieve and trying to fill all the bad holes up.


The funny thing is you can code redistributable stuff on iPad/iPhone, just only within Roblox, who made a special deal with Apple to avoid the normal dev license cost and Mac requirement, to enable child-labor entrepreneurs, and where the cut is more like 70% instead of 30%.


This is hilarious and spot on.


Of course you're going to get a different experience than on a desktop OS, and you're not going to satisfy every programming environment. But between "virtually none" and "every" there is a huge space that iOS would be able to fill with its current OS design. I'd argue that a lot of developers could do their job on the iPad, if there was VS Code available for it. Right now this isn't even possible, because alternative browser engines aren't allowed on iOS. Why not allow the software on the device? There's no need to change the OS in the slightest. It's easy to dismiss the concept, when it hasn't even had the chance to prove itself.


You can actually build code in Swift Playgrounds on iOS.


That's why I wrote "virtually". Others suggested virtualization, which I think would be a good tradeoff that doesn't require usability changes to the underlying OS (and would finally put the processor to good use). Other than that, the inability to code on iOS is due to artificial gatekeeping by Apple, not due to design decisions. There are many ways coding (and probably other types of apps!) on iOS would be possible, if it wasn't for Apple.


First of all, just having a full-blown shell with a text terminal would make the iPad much more useful for small programming tasks. Then I can't see a reason why it couldn't run an X or Wayland desktop full-screen in a VM app, providing a standard GUI desktop just inside that app. It would make an iPad Pro much more useful when you can't justify buying or bringing a dedicated laptop.


A MacBook Air isn't a lot larger or heavier, especially when you add a keyboard case to it. In fact my MBA weighs less than my MBP with the Logitech case on it.


Right, but then you might have to carry another device where just an app on the iPad would have been sufficient. Not to mention, that you would have to buy another device. And while that sounds like an incentive to Apple not to provide those capabilities to the iPad, I would have renewed my iPad Pro much earlier, if it were a bit more capable software-wise. I usually don't need a laptop, just a little bit more capable iPad.


An idea that is very poorly explored is using handwriting for programming.

Naturally it is somehow still a PhD topic, but imagine using Swift Playgrounds with the Pencil as if it were a paper notebook.


5 people on earth would dominate humanity with handwritten APL.


A mentor of mine got her start working with a Big Giant Brain as his assistant - he only programmed by reading fanfold paper output and writing his code down. She would type it up, run it, and then bring the output back for him to analyse.

I can imagine such a person finding this idea gratifying, albeit perhaps too much of a REPL for his tastes.


When I started at technical school, writing pseudocode on paper before coding it into a 512 KB PC with 8" floppies was common, like doing diagrams of data structures, so maybe I am biased.


That would be horrible. My handwriting is terrible.


Hence why it is still a PhD topic; however, I think it would be quite cool instead of always trying to fit a keyboard onto portable screens.


Because I already own an iPad


I don't disagree with you. I have a new iPad 13 Pro and a three-year-old iPad 11 Pro that would be great to have better dev tools on. Pythonista and Juno are fairly good for Python development, LispPad is a nice Scheme dev environment, Raskell used to be pretty good for Haskell but no longer runs or is supported, etc.

I have a new M1 MacBook Pro with a large monitor for programming, but since I usually write using an iPad, having first-class development tools for all popular languages would be very nice.


I can virtualize other operating systems, but I can't have root access and a banking app at the same time. The world we live in...


This effect has been talked about 10 years ago: https://boingboing.net/2012/01/10/lockdown.html


You could run Linux before, with QEMU. It was a crappy experience. Even writing on a touchscreen is bad, and when the window is tiny and you can barely see the letters it is even worse. Of course you can hook up a wireless keyboard and mouse, but then you might as well go to the PC, which has an even bigger screen.

I really don't see what this brings. Is Google so lost that the only "innovation" they can bring to Android is discovering that the Linux kernel has support for virtualization?


> I really don't see what this brings. Is Google so lost that the only "innovation" they can bring to Android is discovering that the Linux kernel has support for virtualization?

When talking about the strategy of a successful multibillion dollar company, the most likely answer is "no".

The very short article explains at a high level:

"they’re used for things like enhancing the security of the kernel (or at least trying to) and running miscellaneous code (such as third-party code for DRM, cryptography, and other closed-source binaries) outside of the Android OS."


This is about Fuchsia.

First Google is gonna run Fuchsia on Linux, then Linux will be removed entirely.

That's what this is laying the groundwork for.


Forgive me for regurgitating trendy HN counter arguments, but: this comment seems very similar to the early dismissals of Dropbox.

"I can already achieve this with FTP/samba/whatever." Sometimes taking existing, established technology and making it easier to use is all it takes to "innovate".

Of course I have no idea how killer this particular Android feature will be. I'm just criticizing the "this is not new" argument.


For every Dropbox, there are a thousand companies and products that similar criticisms could be made against that don't actually succeed, sometimes for the reasons stated in those criticisms.


I think it's fair to say that for those failed products/companies, the reason for them failing is a bit more nuanced than "because the core service already exists elsewhere". Competition always exists so you're almost never the sole provider of a particular type of service. If you fail, you were likely out-competed, even if it's in a non-core area like marketing or popularity.


As an aside: this is exactly what tech bros do not get about the AirTags when we try to get the message across about how bad they are for stalking. Every time I try to bring this up, I am told that Tile, GPS trackers, Samsung I-do-not-even-know-what already exist. Who cares -- the AirTags work, alas they work exceptionally well (precision- and battery-life-wise especially), they're easily accessible, and their existence is easily discoverable. They are devastating in this field. Problem is, vulnerable people will use public transit, where it's practically guaranteed that within a ten-meter radius there will be an iPhone, especially so in North America where the iPhone is so prevalent. Meanwhile, the victim might not have an iPhone, just some scuffed Android 'cos that's all they can afford. That Apple released (or didn't recall) them is a blow; that Congress didn't ban them is not really one, because it's not like we expect Congress to be functional, to protect poor people, especially women. But watch a stalking case involving one going bad (= rape, murder) in, say, Germany, and then watch the EU bringing down the banhammer.


There are cheap trackers from AliExpress that can be used in place of AirTags. Most include GPS and use mobile internet for sending location updates. Maybe the AirTags made tech-based stalking more popular, but will it reverse just because Apple takes them back? Also, it's not like finding where someone lives is a particularly difficult task, given the ridiculous amount of information there is online. And as for the public transit example, maybe all they have to do is wait till the victim gets off the bus to find out where they live? (I mean, it is public transit after all.)


The caveat is that in the general population, I would expect that there are vastly more people who can figure out how to configure an AirTag versus buy and configure a tracker off of AliExpress, similar to how the graphical user interface enabled vastly more users than the command line.


I don't want to live in a world full of objects that can't be misused. Maybe you do.


Dropbox was a different tech from FTP/Samba/whatever.

This one, it's not the first VM on smartphones at all. Running a desktop OS has been possible for years with comparable solutions. What makes this different from Samsung's DeX, for example?


This can run Windows applications. DeX is ChromeOS-like.


You act like Dropbox is successful when they are not. A company paying other companies to advertise and push their products and burning through cash to appear successful is not success.


Dropbox was successful.

Its current financial state is more a case of bigger vendors bundling a good-enough alternative with their products AND Dropbox not adding much beyond "easy to use", than of the HN insight of "lol just rsync/bash/perl instead" being right all along.



Nothing about that indicates that they are making more than they spend. Look at their assets vs. liabilities. Their net position keeps getting more negative.


The liabilities increased significantly only because Dropbox (as with nearly every other company in the United States) wanted to take advantage of the 0% interest rates that the Federal Reserve has kept in place.

  1,369.3 (bln) Convertible senior notes, net, non-current
Who wouldn't want to borrow $1.3 billion USD interest-free, or close to it? We need to take a close look and try to understand these numbers instead of just seeing "negative" = "bad"; more often than not it's much more nuanced than that.


Sounds like a terrible idea right now. I think they are only popular here because of their association with YC. I honestly find Dropbox to be terrible and overpriced.


For simply keeping files synced between two computers, I thought Dropbox was the best.

Microsoft OneDrive is fuckin' awful in how long it takes for a file on one system to be synced to another, even when both systems are online at the same time. It also has a habit of completely pausing syncing entirely if it reaches a file it can't read (such as a lock file).

Google Drive works well, but I find its desktop client to be resource-heavy, especially on startup. I like its integration with my Android, though.


Well, imagine your next work laptop being a phone. It connects wirelessly to a display, which is paired to a keyboard and mouse.

Still not seeing the point?

People are either gonna love it or hate it. Love because of how little space it requires, or hate because the performance is gonna be worse than even the non-Pro versions of the Surface.


> It connects wirelessly to a display, which is paired to a keyboard and mouse.

When you add the display, battery, keyboard and mouse, it's really cheap to add brains and storage and build a full laptop that can share data with your phone.

This is why most lapdocks fail - they aren't that much less expensive than a cheap laptop. This is also why, at some point, nobody was making dedicated terminals for large computers - they were as expensive to build as PCs.


This was the dream for a long time (sans the wireless part, but that's not a blocker), see Ubuntu Phone and several other attempts.

I wonder when we'll be there. For sure we're not there for wireless, yet. It's unreliable, especially if you have a lot of wireless devices around.


Why use a virtualized Linux on proprietary Android if you can use native Linux phone with all free drivers and desktop OS? https://puri.sm/products/librem-5


To be blunt: because the Librem is an awful phone. The battery life is miserable, the UI is terrible, and the performance isn't great either. It feels five years behind, at a price that's higher than the Pixel 6.

A mid-range Xiaomi phone is better than the Librem 5 and costs a quarter as much.


These are all software issues, which are being worked on: https://teddit.net/r/Purism/comments/sqbml6/is_librem_5_good...

Also, it will receive software updates forever unlike any other phone.


It's not just software issues. It's a $1,200 phone with 32 gigs of storage, 3 gigs of RAM, and a 720×1440 screen.

not to mention, from the thread you linked:

"About 2600 L5's have been delivered, so everyone who ordered before October 2017 should have gotten their phones. Purism reportedly just got in another 1100 L5's from the factory, so when those get shipped out, that should cover the orders up to mid-2018."

I mean you can't be serious. People paid almost a grand to wait four years for a phone?


The delays are real (waiting for mine too), but IMHO there were good reasons for that: https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque...

But this tiny company with no experience in smartphones did finally produce a product that respects users, unlike Google or Apple. The current problems with CPU supply are not Purism's fault.

Also, the phone supports microSD up to 2 TB.


It was cheaper then, about $600. (still expensive and worth it, but not "almost a grand")


Am I reading the website right that it has 3GB of RAM and 32GB of eMMC storage? There's no way that has the RAM or IO performance to run general desktop programs.


If making a good phone OS was that easy we would have more options besides Android or iOS.


How does it compare to the PinePhone nowadays?


Especially, how does it compare to the PinePhone Pro?


You can have a good phone with a good virtualized OS, or you can have a not-that-good phone with a good OS.


That is what I want. In addition to a docking station in my office, I would like iPhone <—> iTV style interop in the living room, and compatible kiosk style support in public areas like airports, airlines, office buildings, etc.


If people are spending $1,500 on a phone rather than $750 each on a laptop and a phone, the performance isn't necessarily worse.

That's the real killer feature.


OTOH, if you spend $750 on each and one breaks, it's only $750 to replace it.


Like Windows Phone and Samsung DeX.


If it could work in a pinch, that already would be valuable for some users.


It seems like they're taking Android in a Qubes-like direction.

https://blog.esper.io/android-dessert-bites-5-virtualization...


Yes, please. I'd love to separate Facebook, Google, Amazon, etc. apps into separate domains. As I'm doing with web apps via Firefox containers.


I don't think Google is doing this to give you more privacy. They have every incentive to do the exact opposite.

P.S. Firefox containers are great indeed and I use them for the same reasons. But I doubt this is the intended use case here. I don't see Google investing a ton of money to build something that will hurt their bottom line.


You're probably right, but the fact that the underlying technology will be present means someone may use it to implement that.

Like in the case of the Shelter app.

https://play.google.com/store/apps/details?id=net.typeblog.s...


Think bigger. Big tablets with detachable keyboards that can now run Android and Windows.

Also, there are Android-based VR headsets, and their resolutions are getting better. Think of working in a connected virtual office, running Windows applications.


I kinda want my phone to be a portable PC in my pocket. At least I think I do... If I could dock my phone in at work and have it boot up a Windows VM (I work on a Windows PC) that would be neat.

I know there are some options for this, like Samsung Dex, but with this there is at least some potential for having a Windows PC in your pocket. Like Microsoft tried to do with those older Windows phones.


We are probably focusing on the wrong use case here.

Yes, this virtualization allows us to run Windows/Linux. That's probably not the main goal, though. It's more about reusing packages from those stacks on your Android phone, kinda like the VMware Fusion mode on a Mac, to run applications side by side, or to run things in a secure virtualized container.

Why recompile to Android, when you can virtualize?


> Why recompile to Android, when you can virtualize?

Performance?


Why Linux when you have Termux?


Unless the Termux folks acknowledge that Android's userspace API is Java and not POSIX, they won't be around for much longer on newer Android versions.

https://developer.android.com/ndk/guides/stable_apis


You could just chroot into normal GNU-based userspaces before they screwed it up, first with all the namespace stuff and later by restricting exec(). I even ran X11 apps on my Android a long time ago with zero virtualization overhead.
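
For anyone curious how that worked, here's a minimal sketch: X11 clients just need a DISPLAY to talk to, so an X server app running on the same device is enough, no hypervisor involved. (XServer XSDL is one example; the app choice and display number here are assumptions, not a specific recipe.)

```
# Inside the chrooted distro, point clients at the on-device X server.
# XServer XSDL typically listens on TCP display :0 (port 6000).
export DISPLAY=127.0.0.1:0
xterm &    # or any other X11 client installed in the chroot
```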


> Even writing on a touchscreen is bad, and when the window is tiny and you can barely see the letters it is even worse.

We need to bring back gestural writing, with simplified letter forms. The basic tech was in production use in the mid-1990s, and there are clearly unencumbered alphabets that could be easily used for this, such as the 19th century Moon Script. Recent UI work has made Linux quite usable on touchscreens and smaller devices, but text input is way harder than it could be.


Linux has had KVM, sure, but mobile ARM CPUs have not had the necessary virtualization extensions for “native” speed hypervisors up until very recently.


It's a formal capability of the device and a design goal; see

https://arstechnica.com/gadgets/2021/11/android-12-the-ars-t...

Private Compute Core—Running AI code in a virtual machine?


The article explains the most plausible rationale for that, and it seems it's not necessary to run other OSes...


You could also run Win 9x through DOSBox on Android, although again not very well.


> they’re used for things like enhancing the security of the kernel (or at least trying to) and running miscellaneous code (such as third-party code for DRM, cryptography, and other closed-source binaries) outside of the Android OS

Long shot here: does this make it more possible for production releases to be closer to AOSP, and then run the rest on the hypervisor? Also, is this the future of Project Treble, meaning better upgrade paths for all devices (outside of manufacturer will, which is still the main issue)?


> What that means is that it is now possible to run virtually any operating system including Windows 11, Linux distributions such as Ubuntu or Arch Linux Arm on the Google Tensor-powered phone, and do so at near-native speed.

Native as in Windows on ARM native? I held a Surface Pro X in a shop, and man, was it disappointing.


Well, obviously it has to be AArch64 Windows; running x86_64 OSes on ARM requires emulation, which has always been possible. KVM simply allows the Linux kernel to act as a hypervisor.
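
As a quick sanity check of whether a given device actually exposes KVM to userspace (a generic sketch, nothing Pixel-specific):

```
# If the kernel was built with KVM and boot left the hypervisor mode usable,
# the interface shows up as a character device; its absence means no KVM.
adb shell ls -l /dev/kvm
# Confirm the device is arm64 (arm64-v8a) so an AArch64 guest can run natively.
adb shell getprop ro.product.cpu.abi
```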


Since Android uses the Linux kernel, if you have full access to the device, you can already generally sideload a distribution of your choice onto the filesystem. The Linux userspace can run and coexist alongside the Android userspace. Though some applications may require kernel features which are not enabled in typical Android kernels, GPL forces manufacturers to release the kernel source code, which makes it fairly easy to enable more features and use that kernel.

The advantage of this approach over running Linux in a sandbox / VM is that you can administer your Android device from the Linux side, which means that you can use your existing backup strategy or other automation tools. With a USB/bluetooth keyboard, it can also work as a small PC in your pocket.
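
For illustration, the sideloading approach boils down to something like this (a sketch run as root on a rooted device; the image path and layout are made up for the example):

```
# /data/linux.img is a hypothetical ext4 image holding a distro rootfs.
mkdir -p /data/linux
mount -o loop /data/linux.img /data/linux
# Expose the usual kernel interfaces to the guest userspace.
mount -t proc proc /data/linux/proc
mount --bind /sys /data/linux/sys
mount --bind /dev /data/linux/dev
# Enter the distro's userspace; it shares the running Android kernel.
chroot /data/linux /bin/bash
```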


> GPL forces manufacturers to release the kernel source code

I wish that were true in the real world; looking at you, Onyx.

https://news.ycombinator.com/item?id=23735962


I found it to be true in most cases, and when it is not... well, we can vote with our wallet.

Case in point: I had never heard about Onyx. Coincidence or causation?


By any chance, do you happen to know of any how-tos for configuring a phone to run Android and Linux side by side?


Have a look at Linux Deploy; it is essentially an installation wizard and launcher.

I no longer use it, and instead use Magisk startup scripts to start the Linux userspace, but it remains useful as an installer.
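
If it helps, a bare-bones sketch of such a startup script (Magisk runs executable scripts from /data/adb/service.d/ as root late in boot; the rootfs path and the sshd service are placeholders I picked, not my exact setup):

```
#!/system/bin/sh
# Hypothetical /data/adb/service.d/99-linux.sh
mkdir -p /data/linux
mount -o loop /data/linux.img /data/linux
mount -t proc proc /data/linux/proc
mount --bind /dev /data/linux/dev
# Start something useful inside the distro, e.g. an SSH daemon.
chroot /data/linux /usr/sbin/sshd
```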


Would this allow me to plug an Android phone into a monitor and have it boot Windows/Linux like DeX? (i.e. keeping Android on the phone, but the desktop on the external display)

This would be a pretty strong argument for me to move away from iOS.


This one allows that: https://puri.sm/products/librem-5.


Is it really usable as a phone? Seems like it would be a step back for any modern iOS/Android device user.


Depends on your use case. Yes, it's a step back in convenience (for now), but many steps forward in privacy and security.


I'll have to wait for those to get flagship specs before considering it.

Though honestly I'd probably prefer an android/ios device able to run a different operating system on the external display in all cases.


Just yesterday I was wondering what's the point of the new Galaxy Tab S8 Ultra with its glorious 14.6" display and 16 GB of RAM. Now, with the new Android 13, you could potentially install any Linux distro in a VM, and then it would be the most lightweight, if not the best, Linux desktop with the supplied keyboard cover.


> But why did Google enable virtualization in Android?

They had no idea it was for locking things down. Sure, it allows running VMs, but that doesn't change anything about it.

"I'll force A on you, but you'll get B so don't be mad."

... but this only really applies on the surface. Locking things down is the road forward in the industry anyway. That's why the Desktop OS war is long over, too. Everything will, eventually, run on everything.

We're being sold digital lockdowns as features which supposedly provide us with more freedom. In the end we'll have downloadable programs we'll rent to use, which run in a cut-for-the-purpose container, without any ability to tinker, hack, or modify. Rent or die. Don't want any of this? Fine, but you're locked out of the eco-system. Have fun enjoying what's left for you to do/use.

I wish more people knew what's coming. I don't know why they don't. I'm sure the information is out there, but apparently nobody is talking about it, thus nobody knows about it. My guess is simply that it wouldn't actually be particularly popular if people actually understood that they're just being misled.

For those rolling their eyes: consider that nowadays it's the norm to sell safety/security as beneficial, for reasons based on fear.

Benjamin Franklin would probably be really angry about how normalized it has become to give up liberties for some false sense of security.

The unintentionally worst people are the ones who think this is all a great idea. Because security. Fact of the matter is, though, that if people had to actually know and understand what they're using and doing, we'd not be in the mess we are today.

What I mean by that is that the world apparently has this deep issue with fear of pretty much everything, and humanity tries hard to make the fears less bad instead of getting rid of them through education and by getting rid of the fearmongers.

This reminds me of my friend. He insists on having his AV and cookie blocker running. He thinks that's super important. On every new site he manually blocks everything. This same guy also insists on continuously installing all kinds of stuff, and after just two months his new notebook took 20 seconds to boot. When he got it, it took two.

The worst part about this is that he's so brainwashed into believing that he really needs this, despite me being living evidence that he doesn't, that there is actually no way of educating him. The fear machine has dug too hard into him and, unless they stop, there's actually no way of getting rid of it.

Not gonna lie, I actually think this is amazing. On one hand he's extremely cautious about security, which is not unreasonable per se, on the other hand he installs all kinds of shit because he's an idiot who doesn't actually know what he's doing.

He's just doing what he's being told to do.

Amazing.


This is why Microsoft is pushing the TPM so hard as well. You'll own nothing, and you'll be happy.


It's been pretty clearly established over the last 13 years that TPM isn't some evil plot to prevent ~2% of users from installing Linux. Even hardcore free software distros like Debian aren't pushing this narrative anymore.

https://wiki.debian.org/SecureBoot


> Benjamin Franklin would probably be really angry about how normalized it has become to give up liberties for some false sense of security.

I honestly think it's even worse than that, because the status quo has made it so we don't even have the courtesy of knowing we're giving up as much as we are.

IMHO, the sheer quantity of what is being lost is on the order of an entire language. "Privacy" means something totally different today than what it used to :(, and we've all but lost the very element of *pause, consider* that would be our way back to where we used to be.

I have to admit I'm looking at Europe with a bit of a wobbly mentality these days; the EU is not a panacea but the GDPR has had some really interesting ramifications, and France's position to ban GA recently (if that's what it actually was) was... well it'll be interesting to see how that goes down...

---

> The unintentionally worst people are the ones who think this is all a great idea. Because security. Fact of the matter is, though, that if people had to actually know and understand what they're using and doing, we'd not be in the mess we are today.

I wrote something a while back about end-to-end encryption that also touches on the danger of cargo-culting a "yay! security! awesome"-by-default ideology: https://news.ycombinator.com/item?id=25522220

It's not really a first-class substantial "oooh, thing" perspective, more just unimpressed grumbling about the status quo. But it's a bit of anecdata that does agree with your position.

---

> This reminds me of my friend. He insists on having his AV and cookie blocker running. He thinks that's super important. On every new site he manually blocks everything. This same guy also insists on continuously installing all kinds of stuff, and after just two months his new notebook took 20 seconds to boot. When he got it, it took two.

(This sort of thing is really interesting to me but I'm really bad at talking about it concisely. Apologies.)

A contributory perspective:

The moment I saw "Every new site he manually blocks everything." I immediately jumped to a mental reference point that might be called the "manual drive fallacy". If you give someone a bunch of knobs and settings to tweak, and the knobs and settings induce ideological changes that are not mechanically/concretely measurable, and all this happens within the context of "control" and "freedom"... in certain people, I think the brain can start going very very loopy, in a very specific way. It never gets into a state that would ever be classified as "unhinged", but it's like the brain "discovers" this alternate pathway that satisfies both our intrinsic desire for control while short-circuiting past the "proof of work" feedback loops of self-reflection, critical thinking, engagement in depth, etc that keeps that control harmonically resonant with its environment, in that unexplainable way that makes the influence meaningfully productive at both the micro and macro scale.

It's kind of like if bikeshedding were put in an infinite feedback loop and left indefinitely. Stuff just folds in on itself. Perpetual motion machine meets black hole. Meep.

I call this a "manual drive fallacy" because I personally equate the mindset you describe with having a pathological affection for "manual drive" processes.

I read a while back that the Air Force crashes many more UAVs and drones than the Army and Navy do (or at least they did a little while back) because the latter depend very heavily on autopilot, whereas the incumbent Air Force has always justified its existence by performing those processes manually. At the micro scale both approaches make sense - the Air Force exists predominantly to train amazing pilots, who are going to make mistakes; the Army/Navy exist to defend land and sea, and need unspecialized local air superiority as part of their own bigger pictures. Insights like "computers are actually way better pilots than humans" can only emerge when a macro scale focus is introduced that is able to laterally make comparisons across verticals while maintaining sight of a bigger picture. (Reproducibly coordinating such focuses is of course the billion dollar question...)

In a similar sort of way I've come to think that there are a similar organization of internal processes and balances that happen in the individual brain that influence the "functional level" or "watermark" of insightful impact and control a given human can have. We all fundamentally want to control, and organize, and achieve cohesion. But the underlying mechanics we use to achieve that can involuntarily affect how efficient we can be overall. If mental functioning is very high, these mechanics can integrate a lot of input, and our control/organization/cohesion will be very efficient, cohesive, and resonant. If mental functioning is low, significantly less input can be integrated into executive output, and the result will be very fragmented and micromanaged.

A person that can only model the effects of control in their environment to a low level may constantly be in a state of disorientation as they continually send their mental models of their environment back to the drawing board to start again as their attempts to summarize the world around them do not integrate sufficient substance to be useful. I'm reminded of mental health advice that generally recommends to patiently remind a person having a panic attack about their environment and what's going on around them, in the hope this encourages distraction from painful mental feedback loops.

I wonder if there's a correlation between fragmented integration and an obsession with mechanical, concrete, "manual drive" processes and procedures. In much the same way there are unexplored knock-on effects from poor social engagement, I think a similar magnitude of impact may result from poor executive engagement, and perhaps one of those effects is a strong affection for tinkering with stuff that has things you can open and shut at a surface or aesthetic level.

Broadly speaking, creative coordination almost seems like a human mental attribute or quality that we imbue into the things we create. We design things according to some intrinsic sense we don't even realize we're following half the time as we simply concentrate on getting stuff done. Good design - perhaps the epitome of "10x senior engineering" we all strive to reach for - is to recognize the need to weave a sort of structured permeability into the things our ambition creates, so our creations can bend and stretch with the wind, and let others' influence in. It's really sad to see this dynamic fall apart. There really seems to be something critical about our brains' ability to "slice and dice" the input we receive, and the functioning of that underlying capacity is what sets the pace.

I've seen a couple of really bad Windows 9x/XP simulators out there that barely let you do anything beyond opening the Start menu. I've long noodled over the idea that the core motivation driving these sorts of projects stems from a sort of focus-affinity that sadly bottoms out at that predominantly aesthetic, surface-depth level of coordination, potentially coupled with nostalgia from a time when these executive handicaps had less of a perceived impact. Maybe the person wants to remember that time, but emotional processing issues make it hard to recall the memory with sufficient fidelity to achieve nostalgic closure, and some frustrated consideration about what might nip this in the bud leads to the conclusion that remaking Windows might fix the problem (coming solely from a surface or aesthetic position - not even remotely close to considering the kernel design or hardware targets). And then maybe the person realizes soon after commencing the project that even just cloning the UI is too much work and will not help them get closer to closure, and they soon give up.

I think everyone wants to express their coordinational capacity and style; and because the brain's comprehension cannot extend beyond its own limits, this capacity and style is never intrinsically wrong.

As a form of human expression and communication, I think coordination's significance is woefully undocumented. We imbue how we see the world, and the fidelity of the mesh we use to integrate our perceptions, into how we express coordination.

(Incidentally, accomplishing this in the digital realm, where we have absolutely nothing to cue off of ("here's the instruction set manual for your 5GHz calculator"), making the inventions all around us the byproduct of an ideological collective sensory deprivation tank: we cue off of our brains. This is both terrifying and inordinately interesting IMO.)

---

> The worst part about this is that he's so brainwashed into believing that he really needs this, despite me being living evidence that he doesn't, that there is actually no way of educating him. The fear machine has dug too hard into him and, unless they stop, there's actually no way of getting rid of it.

Alternative possible perspective (I could be wrong): you operate and exist outside of the scope of his cognizance of control. You don't exist. You're like the syllables of a brand name or jingle his brain just memoizes without considering.

> Not gonna lie, I actually think this is amazing. On one hand he's extremely cautious about security, which is not unreasonable per se, on the other hand he installs all kinds of shit because he's an idiot who doesn't actually know what he's doing.

(Continuing above theme) Or there just might be some totally coincidental "miraculous" overlap between his actions and best practice :( and his perception of security might be uselessly broken.

> He's just doing what he's being told to do.

I actually agree here, with the caveat that "understanding is in the eye of the beholder" :v


Last time I checked, the only meaningful feature of Windows 11 compared to Windows 10 was the advertised ability to run Android apps at some point in the future. Well, it looks like Microsoft fell behind again.


They have a chance to get ahead if they add full-blown Windows in docked mode for their new Surface phones, since they're running Android anyway. Yes, DeX and co. exist, but no one wants to use them because for the vast majority Desktop = Mac or Windows, not full-screen Android with a taskbar slapped on it. Having a proper desktop with the same documents and files from your phone over a single USB-C connection could finally lower the friction enough for docking phones to become a mainstream thing. Combine that with natively passing the actual Android apps from the device through into the VM, and you could potentially get a no-compromise experience. But who knows, maybe I'm the only one who gets this excited about tinkering with weird setups like this, and it's not actually ergonomic for everyday use.


So a VM inside a VM inside a physical container. Sounds great.


A few things I didn't see answered in my skim of the article:

- Is this hardware- or BIOS-locked, or will there be a way to get this functionality on e.g. older phones?

- What are the performance characteristics here, I wonder?

I'm keen because we produce a ton of e-waste in the form of mostly useful cell phones, and it'd be cool to turn them into useful devices again. This might help enable that.


> Currently, no Android devices on the market ship with the Virtualization module — not even Google’s own Pixel 6 — but this is set to change with the upcoming Android 13 release. In fact, Google is currently testing its new virtualization tools on the Pixel 6; if you build AOSP with the target aosp_oriole_pkvm, you’ll find that com.android.virt will be automatically inherited. I don’t know if Google will enable pKVM on the Pixel 6 series with the Android 13 update, but there is evidence that Google plans for Android 13 to include the first release of the pKVM hypervisor and virtual machine framework.

Original blog: https://blog.esper.io/android-dessert-bites-5-virtualization...
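
For reference, trying that target is just a normal AOSP build (a sketch; the userdebug variant is my assumption, and a fully synced tree plus the Pixel 6 vendor blobs are prerequisites):

```
# From the root of a synced AOSP tree.
source build/envsetup.sh
lunch aosp_oriole_pkvm-userdebug   # oriole = Pixel 6; target quoted above
m                                  # com.android.virt should be inherited automatically
```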


There's very little chance of enabling KVM on Qualcomm devices: https://news.ycombinator.com/item?id=30322404. Theoretically, it's always been possible on other SoCs, but now it basically works out-of-the-box on the Pixel 6.

This and the videos below it should give you an idea of the performance: https://twitter.com/kdrag0n/status/1493082399520919552


You can run Linux distros on non-rooted phones with UserLAnd. It is still receiving updates on the Play Store. It uses proot instead of virtualization. I hope this new feature ends up being more accessible to people, but until then, I will try to make UserLAnd useful for everyone (I am the developer of UserLAnd).
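
For anyone wondering what the proot approach looks like at its core (a generic sketch of the technique, not UserLAnd's actual internals; the rootfs path is made up):

```
# proot rewrites paths and fakes privileged syscalls via ptrace,
# so no root access and no KVM are required.
# ./rootfs is a hypothetical extracted distro root filesystem.
# -0 fakes uid 0, -r sets the new root, -b bind-mounts host paths.
proot -0 -r ./rootfs -b /dev -b /proc -b /sys -w /root /bin/bash --login
```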


I hope this saves Termux


It's unlikely to save it, per se; it might, hopefully, obsolete Termux for most uses (why use recompiled debs when you can just install Debian?), but it won't help with the Android API surface that Google keeps breaking (if they didn't give a native app raw filesystem access or let it run in the background, will they let a VM do so?).


UserLAnd already allows you to just install Debian and other distros on non-rooted devices.

Note: I am the developer of UserLAnd.


The only thing that would save Termux would be to provide a UNIX-like experience on top of the Java APIs; trying to pretend the NDK is Linux won't save them.


Is this specific to Pixel 6, or will this be available on all Pixel phones?

And an aside: since Apple's SoCs are now much better understood thanks to the M1 and Asahi Linux project, how long until someone manages to virtualize iOS on an Android device? (though I'd rather have Android running on an iPhone tbh)


Cool, I guess, but I don't want Google or Microsoft to be the layer between me and my computer's hardware. This will just continue the e-waste problem, as bad actors would still control the hardware.

I wonder if this is because the Linux phones are really starting to shape up?


So cool! A serious replacement for Termux, which is awesome. Being able to run a full VM, where I can have a personalised "desktop", will make using a mobile phone or tablet vastly more productive for everything. Hope this comes to more than just the Pixel line.


Maybe it will finally let Linux distros get a teeny tiny foothold on mobile? Maybe the stuff that is difficult today - interfacing with the hardware - can be delegated to the native OS, and the guest OS can focus on providing more features.


So now we can run Windows 11 on Android that then can run Android that then runs Windows 11 which runs Android that can run Windows 11 where we install WSL2 in which we utilize wine/proton to play the newest AAA titles....


Does it offer accelerated 2D and 3D graphics?


I'd assume that it doesn't. While passthrough exists on KVM, it still requires the actual drivers to be installed in the guest OS, and mobile handset GPU drivers are usually proprietary blobs. The APIs could be translated to a generic driver that works across OSes though, something like https://virgil3d.github.io/
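
For reference, this is roughly what the virgl path looks like with plain QEMU/KVM on a Linux host today (a sketch only; firmware/kernel options are omitted and guest.img is a placeholder). The guest sees a generic virtio-gpu device, and virglrenderer on the host translates its GL calls to the real driver.

```
# Sketch: paravirtualized GPU via virtio-gpu + virglrenderer.
qemu-system-aarch64 \
  -M virt -cpu host -enable-kvm -m 4G \
  -device virtio-gpu-gl-pci \
  -display sdl,gl=on \
  -drive file=guest.img,if=virtio
```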


No. It should be possible to get gfxstream or virgl working with Linux guests, but I'm not aware of any way to provide acceleration for Windows guests.


This is actually amazing!


Now I wonder if this is also possible on the Pixel 5's SoC.


Does this open up the opportunity to run virtualized iOS inside Android?



