I was always intrigued by the concept, but never really wanted to buy such a device for one simple reason: performance. Mobile phones, especially back then, didn't exactly have great performance characteristics for desktop use. The Atrix had a 1GHz processor and 1GB of RAM; those were low specs even for a cheap laptop at the time.
The same problem still exists on the PinePhone, and both Ubuntu's attempt and Microsoft's attempt to turn Qualcomm chips into mobile workstations have so far failed. When it comes to powerful yet power-efficient chips, Apple simply has no real competitor. Sure, AMD and Intel can outperform the M1, in some cases even at similar power draw, but Apple has mastered the base and idle power draw and the caching that give their devices such great battery life during simple, normal use.
With the processor unification, Apple may be able to provide a decent experience if they can think of a system that won't kill the battery (external battery the phone switches to?) and can cool the processor sufficiently while it's in a dock. Apple seems more than capable of solving those problems, if they wanted to.
I'm 100% sure I won't buy an iPhoneBook because I strongly dislike Apple's operating systems and the way the company itself operates, but if Apple fans buy the product, competitors should soon follow with a device I'd find acceptable to use. Maybe, by then, the Linux smartphone ecosystem will have grown to the point where it's actually usable for day-to-day operations (unlike the Librem/PinePhone/pmOS in their current state).
> With the processor unification, Apple may be able to provide a decent experience if they can think of a system that won't kill the battery (external battery the phone switches to?) and can cool the processor sufficiently while it's in a dock.
Why not just power the device off the Lightning (hopefully USB-C soon) cable used for docking? That's how all docks work now. When you're charging a single cell (as in most phones), you draw power from the charger, with the battery helping out during bursts if needed. No need to invent anything.
There are still a lot of things you can do on the PinePhone's 4-core 1.1GHz Cortex-A53 with 3GiB of RAM, where performance is more than good enough.
Perfectly usable for the i3 window manager, terminal/ssh usage, light browsing in Firefox, playing 1080p@60fps video, viewing photos, and plenty of other things, in docked mode.
Yes, so many people seem to be dreaming of that "phone that turns into a laptop", but I don't see it becoming a thing.
It might have made sense 20 years ago when everything wasn't online and sharing files was a pain, but nowadays it makes more sense to have independent devices than a single phone/laptop/desktop.
You need a screen+keyboard anyway, so why would you make that a dumb terminal instead of an independent device?
As kube-system said, Motorola had this, and Microsoft's phones also tried to fill this niche. Personally, I would love to have everything in a phone, with a good docking system for my desk, interop with a large living-room monitor, and of course support for all the dev setups I use. After seeing what functionality Apple packs into the Apple Watch, it seems the tech exists for a ubiquitous phone device, if only there were a large enough market for a company like Apple, or a Linux-based phone company, to build it.
I'd love to be filled in. I saw some videos of Manjaro (the latest version), which looked dreadful. Ubuntu seemed to run fine until you did something performance-intensive (by 2005 standards).
Apple likes the idea fine, they just don’t want every website in the universe being able to send notifications to a user’s device. I don’t understand why people don’t see why that’s a bad idea.
It's more about bridging the gap between mobile and web. Installable web apps that have as much access to the OS as a native app (through user-initiated consent) are a great direction to move towards.
The browser can now run any language through WebAssembly, threads and all. The HTML/CSS layer is just a portable, expressive markup for visual assets.
I made a fitness goal tracking web app that worked completely offline. If you've used "Strong" on Android, it's basically a free clone of that.
It stored all the records in the browser via IndexedDB, and the timer would ping you with a push notification when the rest timer expired.
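A rest timer like that boils down to very little logic. The names below are hypothetical (the original app isn't shown); in the browser, the record would be persisted in IndexedDB and the ping would come from a service-worker notification, but the scheduling itself is plain arithmetic:

```typescript
// Hypothetical sketch of the rest-timer bookkeeping described above.
// In the real app the record would live in IndexedDB, and the "ping"
// would be registration.showNotification() from a service worker.
interface RestTimer {
  startedAt: number;   // epoch milliseconds when the set was finished
  restSeconds: number; // how long to rest before the next set
}

// When should the notification fire?
function expiresAt(t: RestTimer): number {
  return t.startedAt + t.restSeconds * 1000;
}

// Polled (or used to compute a setTimeout delay) to decide when to ping.
function isExpired(t: RestTimer, now: number): boolean {
  return now >= expiresAt(t);
}

const timer: RestTimer = { startedAt: 0, restSeconds: 90 };
```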
iOS erases the persisted storage after 7 days and doesn't let me use push notifications. I'd love it if iOS asked "hey this web app wants to store data long term" or whatever, but it doesn't.
I would be down with this if apps weren't allowed to ask for notification permissions until the user adds them to the home screen, the same way that game controllers and full screen work now. That way, the user has consented for it to "become an app".
If Apple likes the idea fine, then they don't fund it or pay attention to it accordingly. Safari is full of weird PWA bugs, and long-standing ones, too.
As for the notifications, it’s easily solved. Softest touch: don’t let a notification prompt show without there being a user interaction first. Hardest touch: only allow it for webapps that are installed to the home screen.
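Both variants reduce to a small gate. Here's a sketch with hypothetical function names; in a browser, `installed` would come from `matchMedia("(display-mode: standalone)").matches` and `userGesture` would be set inside a click handler:

```typescript
// Hypothetical gate for showing the notification-permission prompt.
// "Softest touch": require a user interaction first.
// "Hardest touch": also require the app to be installed to the home screen.
type Policy = "soft" | "hard";

function mayPromptForNotifications(
  policy: Policy,
  userGesture: boolean, // e.g. true only inside a click handler
  installed: boolean    // e.g. matchMedia("(display-mode: standalone)").matches
): boolean {
  if (!userGesture) return false;                    // never prompt unprovoked
  if (policy === "hard" && !installed) return false; // home-screen apps only
  return true;
}
```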
I have a much less charitable view: Apple doesn't like webapps because they're a threat to the App Store ecosystem. Not so much from a monetary perspective as from an exclusivity one: if you can run exactly the same apps on all mobile platforms, they lose an edge. (Though I'm sure they don't mind the money either!)
> Apple doesn’t like webapps because they’re a threat to the App Store ecosystem
Steve Jobs in 2010:
>"We have two platforms we support. One is completely open and uncontrolled and that is HTML 5. We support HTML 5. We have the best support for HTML 5 of anyone in the world. We then support a curated platform, which is the App Store."
Just because Steve Jobs said it doesn’t make it true. Apple have lagged behind webapp standards for years. Even with the release of iOS 14.5 there are no notes about changes in the browser, despite there being a bunch.
Claiming Apple fully support the web because of something a former CEO said in a marketing presentation over a decade ago just doesn’t work.
You can get a convergence pack with more RAM (3GB) and storage, and a USB-C docking bar (which you can also buy separately). Once plugged in, you have a desktop GNU/Linux. I've tested it with Plasma Mobile and it's working quite well (I got a few crashes, but it's not flagged as fully stable yet).
This is pretty cool. I could host my development environment on my desktop PC and, using Termux, ssh into it and use vim as an IDE.
I can also forward ports so I can access things from my browser, though with no debug, console, or network tooling. There is an obvious lack of tooling; a full-fat Linux desktop would be much better equipped, but it's incredible that this is possible at all.
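For reference, the port forwarding can live in `~/.ssh/config` on the phone so a single `ssh dev` brings the ports along (the host name, user, and port numbers here are made up):

```
# ~/.ssh/config on the phone (hypothetical host and ports)
Host dev
    HostName desktop.local
    User me
    # expose the desktop's dev server and debugger on the phone
    LocalForward 3000 localhost:3000
    LocalForward 9229 localhost:9229
```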
Samsung discontinued support for Linux-on-DeX (which was always experimental/beta anyway). I assume whatever hacks they had in place to make the thing work were too hard to port to Android 10 for what was essentially a feature for hobbyists.
You can still use DeX itself to get a desktop environment when you plug your phone into a dock, but it's an Android desktop environment, with Android apps running in floating windows.
Maybe Apple could do this themselves by creating a partition on the iPhone that contains iPadOS, and releasing an Apple Cinema touch screen, like the Wacom screens. I guess it would only make sense for them if the profits from people buying the touch screen were bigger than from people buying the current devices: iPhone, iPad, and the current screens.
That really is the dream, but the natural data barrier between phone and desktop is very useful too. I wouldn’t install some npm package on such a Linux iPhone device.
I'm not hugely familiar with the Linux release cycle, can anyone explain why such rudimentary support should be merged in now, instead of waiting until it's ready?
Merge the portions that you know are correct and will have no effect on anyone else now; that makes future work easier, as you do not have to keep those "working" commits up to date.
We do this all the time with kernel development, and it is one reason why breaking changes up into tiny pieces is so powerful. We can take the pieces that make sense now and allow the developer to redo the portions that are not ready yet, instead of having to reject the whole thing if it were done in one single "chunk."
Also note that the TTY/serial portions of this hardware support was already merged through the serial tree because they were independent and didn't affect anyone else.
The big "downside" is that it takes more work on the patch submitter side. But the benefits in the end are almost always more than worth it (easier reviewer time, easier time to track down problems, better development cycle as feedback can be more specific, easier evolution of changes, etc.)
I wrote a whole chapter in the book "Beautiful Code" about how this development model can help create an end result that is almost always better than the initial "huge" submission model. Check it out if you are interested, it should be free online somewhere...
My instinct would have been that it's easier for the submitter (as they have less to polish and test) and more irksome for the reviewer as they have to go through multiple rounds of submissions, but naturally I'll take your word for it!
This kind of discussion is always of interest to me, I'll check out the book, thank you.
Reviewing three changelists, which individually do only a single thing each, is in my experience much easier than reviewing a single changelist bundling the changes from all three.
This is true even if the same lines are changed multiple times. It's something you'll learn with experience, but it's also not even close. Break your patches up as much as possible, and everyone will be happier.
Not GP, but... In the context of recent events [0], where not reviewing some tiny patches thoroughly enough had major come-back-to-bite fallout, I can't help but wonder:
How, exactly, are you expecting an increase in average patch size to help?
I did read through this debacle when it came out actually, I'm thoroughly on team Greg. I suppose my question was separate from malicious patches - I was interested in knowing if this incremental "merge tiny patches as and when they're ready" mode of development has ever caused issues with half-baked solutions affecting other parts of the kernel where perhaps it wouldn't have otherwise done so if the release was given more time for polishing and testing.
Obviously you should listen to Greg and not me, but in many ways you can summarise it as "are the changes correct and do they work?". That they don't amount to end-user useful support is a very separate matter. What's the benefit of delaying submission of correct and working branches?
If you plan for such a model beforehand, you might already be lowering overall efficiency. You know that you can't get the whole thing you want merged as one piece, so you plan "one third now, another third in a couple of months, and the last third a couple of months after that." Overall you might end up with twice the effort, since you had to account for that split, but in the context of the kernel it still makes sense because of all the complexity and the many people working on it simultaneously. And of course it also depends on the "splitability" of the thing you're working on.
It's actually more efficient to do it this way. When you develop in a fork, you end up having to both keep rebasing on mainline, and then on submission, you might find that large parts of the code are the wrong approach or do not meet upstream standards, and need to be rewritten.
By planning for incremental merges, you ensure that your foundation is solid and acceptable and avoid wasted work.
It seems like it's ready to go in mainline. Linux doesn't really do "big-bang", everything-is-done releases of new features; support is added incrementally. Here it says USB, PCIe, IOMMU, and NVMe haven't been finished. But given that those features are spread across different subsystems and have different maintainers, it'll be easier to work on them once the platform is in mainline. (You can't expect all the PCIe developers to have a special M1 Mac branch ready for testing.)
Depends on what you mean by incremental support. It would definitely help someone to base new work on what's already approved, merged, and supported, without the need to hunt for the latest updated external branch and reconcile it with their local version. (But maybe that's what you're calling branching issues.)
I'd rather spend time writing new branches against approved, merged upstream than submitting to an upstream branch that may never get reviewed and merged. Keep in mind, too, that the longer a branch lives, the more rebasing it goes through. Also, the bigger the branch is, the more there is to review when finally considering whether to merge it. I think giving people a foundation on which to build is a good way to prevent a lot of extraneous work and also to build confidence in the direction of development.
This is an example of "integrate early, integrate often" which is a core part of Continuous Integration (CI). These days people tend to focus on the automation aspect of CI, but getting code into mainline early is also key.
One idea might be building a relationship of trust around a certain architecture in the Kernel, another idea is that it makes future work faster to merge in if the basics are already in.
So, if the machines last a long time without Louis Rossmann-style issues, getting an M1 MacBook for NixOS in a few years might make sense. Very nice.
He is an outspoken advocate of electronics repair, educator on repair techniques and has a repair business. You can find his many videos about repair on YouTube. He comes up in pretty much any thread on the "Right to Repair" movement, so people who read those threads on HN usually hear about him.
He is the owner of a repair shop in New York and specializes in repairing Apple Products.
He became famous for his YouTube channel[1], where he regularly rages about Apple's repair policies, and for his regular appearances in Right-to-Repair law hearings[2].
The style of issues he has to deal with are stupid design decisions on Apple's part that compromise longevity and resilience. Really boneheaded, easily avoidable crap. And sheer spite.
A guy who earns his money repairing Apple products, talking badly about Apple products. I get it. Everything for the clicks and subscriptions on YouTube.
While Apple is known to be unfriendly to third-party repairs, the sad reality is that a lot of hardware vendors are going that way.
Also, stating that you will have to wait a couple of years to buy an M1 MacBook makes zero sense, as by then the latest MacBooks will have an M<something>, and the issues with third-party repairs won't be addressed in hardware revisions of the M1 MacBooks released now.
You would need to buy one now on the speculation that Linux support will ever be finished. Apple will quit making the M1 long before the work is done, and the battery may not even be good by then. Just today, news of the M2 going to production is on the HN front page.
>>While transitioning to 100% recycled materials is critical to reducing the sector’s footprint, it is also fundamental for Apple and other major IT companies to design products that last, are easy to repair, and recyclable at their end of life.
Does anyone know about any progress about getting the latest Intel Macbooks working on Linux?
I have a 2020 Intel MBA and it "works" as in it boots, but the laptop trackpad and keyboard don't do anything.
Externals work, so it seems like a driver thing.
There could be other problems, but I haven't gotten far enough to hit them because I don't see a point in using it unless the keyboard and trackpad work.
I don't think this works with 2019 or later Intel Macbooks, but as far as I know the most recent relevant information and drivers are here: https://github.com/roadrunner2/macbook12-spi-driver. In my own experience, it's the trackpad, keyboard, and built-in speakers that don't work, but everything else seemed to be fine (2015 macbook).
Edit: this driver got the keyboard and trackpad to work, but the microphone, speakers, and webcam did not work. I also needed to downgrade to an older kernel (4.14) to get storage to work. All external things worked fine even through a hub, and the headphone jack also worked fine.
ARM is a CPU ISA. A MacBook is a whole computer. A general purpose OS to be really useful needs to support the CPU, yes, but also the boot hardware, the bus, the video devices, keyboard, pointing devices, speakers, mic, line in, line out, the network hardware, internal storage, external storage, and preferably any peripheral through the USB ports or wirelessly that's USB or Bluetooth with a standard profile.
Also remember that the M1 is not only ARM. It's an SoC that combines ARM cores with GPU cores and AI-targeted cores. Targeting all of that is a lot more work than just porting the ARM portion from a Raspberry Pi or an Android phone. I'm sure just getting a framebuffer up was a lot of new work. Fully supporting the GPU cores will be necessary for many people's use cases. Once that and most of the peripherals are sorted, maybe the AI cores will be addressed.
ARM is just an instruction set architecture specification. It is not any specific CPU; there are many different implementations (different chip designs). They can differ to various degrees, and often have at least a few non-standard features.
Hence, each ARM chip/platform often needs at least a little bit of special support (or drivers) to fully work in Linux, besides the standard ARM stuff.
Apple's platform is actually very different, with much more non-standard stuff, compared to other ARM implementations. Hence, much more extensive work was required to support them, than would be typical for other ARM platforms.
Even some of the really low-level features are different. For example, Apple has its own interrupt controller. Supporting it (in place of ARM's standard Generic Interrupt Controller, the GIC) was a prerequisite to even being able to boot the Linux kernel. Another weird thing about this platform is the bus configuration.
Also, power management is something that is virtually always chip-specific, and the Apple M1 has some unusual quirks on top of that.
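To give a flavour of how different this is: even the interrupt controller needs its own devicetree binding before anything can boot. A node along the lines of the submitted "apple,aic" binding might look like this (the address and cell values are illustrative, from memory, and may differ from the real t8103.dtsi):

```
/* Illustrative devicetree node for Apple's interrupt controller (AIC),
   loosely based on the submitted "apple,aic" binding. */
aic: interrupt-controller@23b100000 {
    compatible = "apple,aic";
    interrupt-controller;
    #interrupt-cells = <3>;
    reg = <0x2 0x3b100000 0x0 0x8000>;
};
```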
My dream would be to run NixOS on my phone so I could have all my devices configured from a single set of config files. I could then unify the entire infra, from config to sync and backups.
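In flake terms that could look something like this. Everything here is an assumption for illustration: the hostnames, the module paths, and especially the existence of a NixOS-capable phone target (e.g. via a project like mobile-nixos):

```
# Hypothetical flake.nix sharing one set of modules across devices
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      desktop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./common.nix ./hosts/desktop.nix ];
      };
      # assumes a phone target actually exists and boots NixOS
      phone = nixpkgs.lib.nixosSystem {
        system = "aarch64-linux";
        modules = [ ./common.nix ./hosts/phone.nix ];
      };
    };
  };
}
```

The point of the layout is that `./common.nix` (users, dotfiles, backup jobs) is shared, while per-device quirks live in their own modules.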
Why though? Is the iPhone hardware that much better than their high-end competitors? I’m a big Apple fanboy but I buy an iPhone (and indeed a Mac) to run iOS (and macOS)
Plug it into a thunderbolt dock and you have a full Linux desktop.