Unifying iPadOS and macOS (screamingatmyscreen.com)
120 points by fallenhitokiri on Aug 14, 2021 | 215 comments



It's not about the touch screen, the UI, the hardware or "casual" vs "professional" users, but simply about the ability to create(!) and combine small specialized tools into something that's bigger than the sum of its parts.

The "walled garden app ecosystem" is exceptionally bad for this, and the UNIX shell is exceptionally good, but both are extremes. It's hard to imagine how a UNIX-like flexibility can be achieved inside an ecosystem that's optimized for passive media consumption and online shopping though.


This is IMHO a really good point. Somewhere in between App Groups, Share Extensions, and custom URL schemes, I really hoped we would at some point see a way to make this happen.

My wife is currently using Affinity and Adobe products to work on illustrations. It works mostly the same for her as on her Mac. She does not care about small, specialised tools, she wants one app that does everything she needs. The iPad is doing a great job running those apps.

I am not sure how many users (outside of the ones who got used to it) want UNIX-like flexibility vs "give me an app that does everything I need". If the latter group keeps getting larger, the walled garden approach to apps might actually work.


But for anything non-trivial, the app that does everything person A needs doesn't do what person B needs, and vice versa. It mostly does what person C needs, where person C is a whiteboard-invented composite of the most-used features who matches no one.

It's not just UNIX-like flexibility, it's human uniqueness, and the uniqueness of the tasks they perform and the ideas they have. It's also the difference between owning and knowing how to use a tool, versus renting someone to do it for you who might help you today, but could rob or hurt you tomorrow.

Just consider how we spend decades teaching new humans to read and write as well as they can, versus just giving them a bunch of emoticons to signal when they're hungry or sleepy or bored. Because we expect them to become full peers, and architects of their world, responsible for the next generation, not just consumers picking options others prepared for them. And we don't care if they want that, because we know what they don't know, yet. We base our judgement on the information we have, not the information they lack.

Making an exception for computer literacy just because it is hard (as if language and reading and writing aren't until you get used to them) sets us on a terrible path.


> Just consider how we spend decades teaching new humans to read and write as well as they can, versus just giving them a bunch of emoticons to signal when they're hungry or sleepy or bored. Because we expect them to become full peers, and architects of their world, responsible for the next generation, not just consumers picking options others prepared for them.

Thanks for this summary of the case for general purpose computing.


The problem is that companies' incentives push users to be mere consumers, because locked-in users who depend on the company are very profitable.


Why should the average person learn these skills in particular rather than plumbing, cooking, woodwork, auto repair, or any number of other useful skills they could also learn?


I don’t see your point. I think people should learn those too if they want. And we do need people to learn them.

But nobody would suggest that someone that can only put together an IKEA nightstand is qualified for carpentry on the level of house building or fine cabinetry. Just because you can water a garden with a hose doesn’t qualify you to work on water mains.


Right. But most people do not work on water mains or as professional carpenters, nor does society need them to be ready to do so at any time.


Because most of those skills are relatively easy to pick up at any stage in life, whereas computer literacy - or even a new programming language - is quite hard to pick up after 30 or so.

Take this with a grain of salt, as it is based mostly on my own experience, although I remember at least one essay by Paul Graham talking about how rare it is for a programmer to switch languages after a certain age.


I have changed languages every time I started a new job and sometimes just switching projects on the same job. I don’t think learning a language is really that big a deal for a programmer.

I’m sure it’s hard to learn to be a serious programmer past a certain age. But it’s not easy to learn a new (human) language or get really good at many of the other skills I mentioned that quickly either.


Why not? What makes software development any less useful than the skills you mentioned?


Nothing. But I don't see anyone claiming that every high schooler needs to do mandatory shop or culinary arts classes, or lamenting that because you can just buy furniture in a store people no longer know how to make it themselves.


> I am not sure how many users (outside of the ones who got used to it) want UNIX-like flexibility vs "give me an app that does everything I need". If the latter group keeps getting larger, the walled garden approach to apps might actually work.

I think this is the crux of the matter. Even outside the Apple ecosystem I see a tendency in people not to think about composing multiple tools but to look for an app that does everything.

I'm talking about my colleagues, working in IT, but in a Windows environment.

And on Windows there's PowerShell, which, although slow and to me not as flexible as bash, still allows one to do quite a lot of things. But people seem to see using it as a last resort, when there's absolutely no way of doing what they want by clicking around some window.


It's hard to remember all the arguments for a passel of command line utils. Discoverability is much better in a GUI, and only someone who wishes to do something unusual or specialized will chafe against the limitations.


UI discoverability has massively declined with the advent of touch UIs though. You can't hover over a UI item to see what it does, and there are a lot of non-intuitive "magic swipes" and hidden UI elements in smartphone UIs which you just have to know about.


Touch UI items tend to be larger by necessity (and to have surrounding padding as a safety factor, which increases their "effective" size even further), so you don't really need to do the hover thing. Just redesign them to be more self-explanatory as opposed to wasting that space.

"Magic swipes" could also be redesigned to be more intuitive, e.g. with some background soft-3D effects that make swipe-sensitive areas "stand out" from the neutral background.


Aren't magic swipes basically the analog to keyboard shortcuts?


Keyboard shortcuts were usually accelerators for easily discoverable menu items. Magic swipes don’t usually have the equivalent of a menu that can be used if you don’t know the swipe


Your argument applies to all kitchen sinks, regardless of whether they are considered one application, like VSCode, or an operating system with its collection of applications, like Unix. Both still have all the capabilities for text editing, syntax highlighting, version control and executing scripts. You can spread complexity around in different ways but you can’t eliminate it.

Personally I find GUI discoverability to be like the hunt-and-peck model of typing, vs touch typing being CLI, and I don’t think it’s a coincidence that touch typing and CLI go hand in hand: you never leave home row. You think about what you want to do and type the command, instead of looking through panels of icons or more exotically arranged buttons (which in the newest web apps, constantly jump around as the page lazily loads, new elements appear forcing layout changes, fonts load causing text reflow… I frequently tap the wrong menu item because what I really wanted got pushed down by a new appearing element as my finger approached the screen.)


Maybe with something like PowerShell, with very consistent command names and arguments, this model is believable, but I don't know what other than memorization is going to tell you that "less" is for reading a file and "dd" is for copying one disk to another.


I agree with you about discoverability, but in practice I find that tasks aren't completely new every time, there usually are common principles. With usage, you kinda get to know what is and isn't possible, even if you don't recall the precise details of each and every command. So you can look them up, knowing more or less what you're looking for.

On Linux, `--help` usually... helps. Or there are the man pages. On Windows, the MS docs are usually usable for finding things.
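For what it's worth, the "which command do I even want" problem has a partial answer built in (assuming a Linux box where the man-db index has been built):

    apropos copy        # search man-page one-line summaries by keyword
    man -k partition    # the same search, older spelling
    dd --help | head    # most GNU tools summarize their own flags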

Of course, it may not work 100% of the time, but I find it's a question of philosophy. I'm not a Windows user, so I generally approach it with a mentality of "it would be nice to be able to do this, let me check real quick if there's a way". I usually manage to not have to click around for two hours when I have to do something repetitive.


Hm, well, part of the problem is, what man page are you even looking for? You already have to at least know the command you want. Honestly I usually end up resolving my queries with Google instead.


Uh no.

Take clicking around a Microsoft IIS configuration on Windows 2000 vs an Apache httpd config file and its online docs.

I briefly tried both, and didn’t look back


For a task like configuring a Web server you're right. But do you think most tasks people do on a computer are like that? I don't think that they are.


I think I saw a command line tool that would show the documentation for all flags when you pressed tab... I don't remember its name, though.


Even in Unix it's not like everyone is satisfied with small and specialized applications, or Emacs, let alone modern IDEs, would never have been created.


That's just it. The future is very likely most people using iPad-like devices. They will need some kind of keyboard, so I'm not sure it's going to be tablets mostly, but the software will resemble that of smartphones.

But no desktop operating systems are well suited to being used primarily by professionals, not even Linux. The desktop paradigm is fine, but it was designed with the idea of making computers more accessible, and it inhibits composing different pieces of software together, which is what a professional-first operating system should optimize for.

I'm not sure how viable a good professional OS is. It would arguably require putting graphical software vendors through the biggest design paradigm shift in their history, to serve a relatively small market. The transition to the desktop paradigm wasn't as demanding — you just switched from taking control of the graphics hardware and controlling the entire screen to doing basically the same thing, with the OS as a proxy so that you're rendering to a smaller rectangle. Composable graphical interfaces mean abandoning the idea of having complete control over your rectangles.


> But no desktop operating systems are well suited to being used primarily by professionals

I love macOS and I’m a professional. I mean, I get paid for what I do. I’m a professional, right?


Well, it's roughly as good as your other two options, none of which are terrible. I'm not trying to insult the people who use... literally any operating system available today.

But surely you can imagine ways that macOS could be better suited to your needs. Why (to give an example that's easy to explain briefly, not necessarily the most important one) are the various things you have open organized primarily by which application can open them, instead of which task they're relevant to? You can organize your browser tabs by task by using multiple windows, but why can't those windows hold anything relevant to the task they represent except browser tabs? Why can't you have your terminal and text editor grouped with your webpages? It's because less sophisticated users expect each window to belong to exactly one "application," and because software vendors assume that their job is to make self-contained "applications," and not composable graphical components.


You can use separate desktops. I don’t bother though


Yes. And yet people who use workspaces still use windows with multiple tabs, and have the layout of windows in the workspace dictated by application-level sorting.

I actually used a desktop that let me group content from different applications in the same group of tabs for a couple of years in college, using i3/sway and a custom Firefox extension. It's an upgrade over just workspaces. I stopped working this way because I switched to tree-style tabs and there's no window manager that gives you anything like that; the switch was still a net workflow improvement.


Plasma Desktop does this with Activities.


I feel like a traditional OS will always stick around because of software developers. I couldn't imagine trying to build an iOS app on an iPad, even an iPad Pro with a keyboard. There will always be some more advanced tool that is used to build the tools.

Though it will still be interesting to see where that goes in the next 20 years. Like you said, with the convergence of desktop and mobile devices, on the one hand I can't see traditional OSs surviving, but on the other hand I don't see another option, given that they're the only medium for building the higher-level software.


That all sounds right to me. And with everyone who doesn't need all the nitty-gritty access that software developers (and some others) do off using different OSs, the advanced OSs that software developers use will have no reason to cater to anyone who doesn't understand how a computer works or isn't willing to read a bit of technical documentation.


> I couldn't imagine trying to build an iOS app on an iPad, even an iPad Pro with a keyboard.

Why not? The only officially-supported way to build iOS apps right now is using Xcode on a Mac. What would be the difference between using the Xcode app on a Mac or the Xcode app on an iPad with a keyboard?


Yesterday, I was debugging some code. I had 3 macvim windows, 3 iTerm tabs, Firefox and the simulator open. That’s just in one space. The others were general browsing, communication and media. As much as I love my iPad, it only works great for single, focused tasks (note taking, researching, video meetings, drawing, …)


The sheer number of on-screen controls in a single professional app window presents a problem when you have to make them all large enough to be touch targets.


That might be an issue for non-programming work like video editing, 3d modeling, animation, &c. But tons of programmers just need a text editor with a total of zero buttons, a terminal, and a web browser.

Xcode for macOS isn't designed for that, but there's no reason Xcode for iPadOS couldn't be.


1 monitor and cumbersome multi-tasking would be a nightmare for development work


> The desktop paradigm is fine, but it was designed with the idea of making computers more accessible, and it inhibits composing different pieces of software together,

Xerox PARC and Smalltalk are the ancestors of today’s WIMP UI. If you look at Squeak (or Pharo) you’ll see how it is possible to make a desktop UI that’s more composable than the usual Unix shell. The problem is not the desktop paradigm, but a lack of commercial interest.


I wouldn't call Smalltalk systems part of the same paradigm as mainstream desktops, although if historians call them both desktops then we can use that name for both.

I think we can do better than squeak/pharo, but they're fine, really. They're at least the kind of thing I'm talking about.


> I wouldn't call smalltalk systems part of the same paradigm as mainstream desktops

Yes, Windows and Mac copied the WIMP UI. But they ignored the composable message-passing architecture. The goal of Smalltalk was to make a computing environment with a GUI; the programming language was to it what C is to Unix.

During the '90s and early 2000s there was a lot of hype around providing the same system composition capabilities in Windows and Mac: ActiveX, Windows Scripting Host, AppleScript, etc. But these things are dying. Those frameworks are not cross-platform, and they were not designed with security in mind. The interest shifted to the web.

My point is: it's possible to make a programmable/extensible desktop UI. But today the cost of doing it is high, and the interest is low.

> I think we can do better than squeak/pharo

I agree. I used Smalltalk for work for 3 years. Smalltalk is full of good ideas, but it also has many bad ones.

I'd love to see something better too... In a sense JavaScript and the browser share some of the Smalltalk GUI capabilities: you can inspect, experiment, and browse the code of any app. But in other aspects the web platform is terrible.

I hope that the competition between Apple (pushing for apps in iOS/iPadOS) and Google (pushing for PWAs) will renew the interest in creating environments where anybody can code and tinker with other apps. (If PWAs evolve to have the same capabilities as native apps, Apple will need to work hard on removing the app creation barriers to compete - I don't see native Android apps as competition to Apple's iOS... but I might be wrong.)


> During the '90s and early 2000s there was a lot of hype around providing the same system composition capabilities in Windows and Mac: ActiveX, Windows Scripting Host, AppleScript, etc. But these things are dying.

iOS and MacOS are both converging on Shortcuts to accomplish this.


Which “professionals” do not use desktop operating systems?


Well, I'm claiming that most future ones will not. There's no reason that smartphone/tablet style OSs couldn't accommodate video editors, graphic designers, writers, salespeople, real estate brokers, etc. in the future. And the demand exists, so I'm betting that that'll be our actual future.

But they cannot in principle accommodate software developers.


You think people will abandon the mouse and keyboard for a touch based interface?


You can just remote into a terminal if you want a CLI from an iPad.


> the ability to create(!) and combine small specialized tools into something that's bigger than the sum of its parts

Funny: the ability to do that already exists inside a walled garden, for audio apps: https://audiob.us . It is used by creators and live performers, exclusively. I don't think anyone uses applications in this ecosystem for "consumption" purposes.

The problem was never the current OSs; it was about app makers not being willing, or not knowing how, to collaborate amongst themselves. It was also never about open vs closed, since AudioBus is proprietary and 99% of the apps that support it are also proprietary.

The thing about UNIX pipes is that they use plain text. AudioBus uses audio. Those are two things that naturally impose limitations. Developers seem OK with those "natural limitations", but whenever you want to impose limitations on something else you immediately get pushback.

You wouldn't be able to do that in a business CRUD app, for example. There's too much wheel-reinvention in those, compared to audio/video/unix-tools.


Assuming enough apps expose functionality through it, Shortcuts is the bridge between walled garden apps and UNIX pipelines, and Apple made a very clever move in making Shortcuts how Siri discovers functionality.


Is there a way to version-control shortcuts and import/export them between devices, especially for those unwilling to use iCloud and associated keyring upload and content scanning?


Not that I can see, but frankly if you’re not willing to use iCloud, give up on Apple devices because it’s what makes basically all the good stuff work. (This is not an invitation to get into yet another debate on recent events)


There are enough good iPhone/iPad apps to use Apple devices with self-hosted storage (WebDAV, CalDAV, SMB/SFTP/DLNA, Git).


The Unix shell was exceptionally good for its time. PowerShell, IMHO, got it right by sending objects around instead of strings. This gets you around the issue of spaces and means you can send more than one value.
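To spell out the "issue of spaces" for anyone who hasn't been bitten by it (a sketch; the filename is hypothetical):

    # Text pipelines put record boundaries in-band, so whitespace in
    # data breaks naive consumers:
    for f in $(ls); do wc -l "$f"; done    # splits "My Notes.txt" in two
    # The text-world workaround is out-of-band NUL delimiters:
    find . -type f -print0 | xargs -0 wc -l

PowerShell sidesteps all of this because a FileInfo object travels down the pipe as one value, no matter what its name contains.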

I haven't been able to play with the Shortcuts functionality because I don't have an iPhone, but if it is what I think it is, it's a powerful tool to do things with.

We shouldn't fail to acknowledge that most users, if not all, spend more time consuming than creating.

Then you have the group that doesn't need something as powerful as a real computer. My grandparents would be much better served with an iPad instead of a computer, since all they need is a browser and a system for video calls.


And that's exactly why the "great unification" doesn't make sense.


I wholeheartedly agree; the app-centric model of today's desktop environments is the antithesis of the Smalltalk vision of composability. In an environment where everything is a live object, programmers can send messages to those objects. Thus, in the Smalltalk environment you end up with something even more powerful than Unix pipes and redirection.

Now, Smalltalk provides the infrastructure for composability, but Smalltalk by itself doesn't provide us the full-fledged desktop environment and applications that users have come to expect; they would have to be implemented in Smalltalk. But where things get interesting in a Smalltalk-implemented desktop environment is that the live object environment is still there, leading to interesting possibilities. Imagine being able to control a word processing program through scripts written outside the word processor, without the word processing program itself having to implement something like Visual Basic for Applications? Imagine being able to control a spreadsheet with Smalltalk methods on cells instead of having to learn the spreadsheet's language or having to learn Visual Basic for Applications? The possibilities become even more intriguing when objects can interact with each other in ways that are far more general than Unix pipes. This is the desktop environment I dream of using, the unification of the ease of use of GUIs and the flexibility and programmability similar to the Unix shell.

Oddly enough, in the mid-1990's Apple implemented a less-ambitious (but still very ambitious for its time) version of this vision known as OpenDoc (https://en.wikipedia.org/wiki/OpenDoc). There are some wonderful videos at https://www.youtube.com/watch?v=oFJdjk2rq4E (a three minute summary) and at https://youtu.be/2FSFvEIpm5o (a 50-minute demo). OpenDoc was released in 1996, if I recall correctly, and there were some applications written with OpenDoc, most notably the CyberDog web browser (https://en.wikipedia.org/wiki/Cyberdog). However, once Apple bought NeXT in late 1996, Apple committed itself to building its next-generation operating system on NeXT, which had its own collection of object-oriented APIs. The cancellation of OpenDoc led to this famous spat during WWDC 1997 when a disgruntled developer questioned Steve Jobs on Apple's decision to cancel OpenDoc (https://www.youtube.com/watch?v=oeqPrUmVz-o).

Why was OpenDoc cancelled? One can argue that OpenDoc's cancellation was due to Apple needing a tight focus during its very vulnerable period in 1997. The proof is in the pudding: mid-1990's Apple before the return of Steve Jobs was an unfocused company that produced many interesting technological demos (the Dylan programming language, SK8, OpenDoc), but there was no single coherent vision. The Taligent and Copland projects, which strove to replace the classic Mac OS with then-modern underpinnings, were disasters. OpenDoc was just one of the many, sometimes competing, visions that Apple had during this time. Steve Jobs was able to turn around Apple's fortunes by having Apple focus on one vision for the Mac.

However, another way of looking at the cancellation of OpenDoc is that its success would have completely upended the software industry. Instead of software vendors selling apps, software vendors would sell components, which users can integrate to create custom workflows. While I believe that this would have led to a lot of flexibility and user-empowerment, OpenDoc would have also been seriously challenged by major software vendors like Microsoft and Adobe, whose empires were built on selling large, proprietary software packages. They were not going to give up their moats without a fight.

Still, I dream of the modern realization of component-based software, and it's something I've been thinking a lot about for the past few years.


You forgot Newton: applications are components written in a memory safe language all running in the same address space with full access to each others’ objects. The “imagine…” scenarios above are entirely doable (indeed, were done) in NewtonOS.


> another way of looking at the cancellation of OpenDoc is that its success would have completely upended the software industry.

Ironically, that was the main interest of Brad Cox who, with Tom Love, created Objective-C, which was acquired by NeXT, and became the way forward for Apple.

In my experience, it is very hard to build a business around software objects. I spent a couple years trying to build an App around the Apple Watch. But, Apple only allows 3rd party complications. It is essentially a component. Apple recommends focusing on doing only one thing.

Moreover, as a component, you the developer often have less control over the UX. On the Apple Watch, when a user opens -say- Spotify on the iPhone, your app is kicked out.

So, a couple weeks ago, I decided to put the Apple Watch development on hold and do something else on the iPad.


I concur; I believe selling software components is a difficult business model for the reasons that you mentioned. This may have contributed to Steve Jobs' decision to cancel OpenDoc in 1997: Apple needed the support of its existing base of software vendors, especially Microsoft and Adobe, in order for the Mac to survive, and OpenDoc, which was designed with the express purpose of challenging the types of monolithic, large applications that Microsoft and Adobe developed, was not going to win their support. Even when it came to the question of whether Microsoft Office and Adobe Photoshop would be ported to NeXT's APIs (later named Cocoa), both Microsoft and Adobe balked, which forced Apple to develop the Carbon API to make it easier to port programs written for the classic Mac OS. If Microsoft and Adobe balked at having to use Cocoa, imagine the howls that would have come from a demand to port their software to OpenDoc.

However, I believe that component-based software is a natural fit in the FOSS community. One of my favorite Hacker News comments of all time is https://news.ycombinator.com/item?id=13573373, where the author makes the case for why component-based software would have been a better fit for the Linux desktop than the standard approach of trying to build a FOSS replica of large, commercial platforms.


For a modern take on this you might like Paul Chiusano’s work

http://pchiusano.blogspot.com/2013/05/the-future-of-software...

And that culminated in

https://www.unisonweb.org/

Essentially all code is pure functional and content addressable, meaning it’s inherently distributable and transferable (including closures). Like Smalltalk but with more use of modern programming language theory.


Are Windows, Linux and MacOS really better platforms for the daily tasks of mainstream and professional users, principally because they can pipe text between command line tools? This is delusional.

I have good news: iOS users' productivity woes are solved. Using iSH you can pipe text between command line tools all day, every day. Finally the power of the iPad is unleashed!


This must be sarcasm... There's much more to being productive with an OS than piping text on the command line in an emulated and highly restricted environment. The command line is a management interface to the OS, not a sandbox you get to play in with only the tools the manufacturer allows.

I would like seamless interoperability between CLI and GUI apps, a unified file system, the ability to run virtual machines, containers[1] or maybe just nginx[2].

And then you have the usability issues of the iPad(OS) like external displays being mirrored only, or only recently being able to actually access files from a USB drive...

The iPad is a great media consumption device, that has some limited professional use (art, video editing maybe), but let's not fool ourselves that it's anywhere near being a replacement for a fully functional computer.

[1]: https://github.com/ish-app/ish/issues/63

[2]: https://github.com/ish-app/ish/issues/137


The text piping is just one small part, and not the most important one. Instead it's about sharing and processing any type of data with small tools that can be combined like Lego blocks (and if there isn't the right Lego block for a problem, you can build your own). The UNIX shell is just one very primitive implementation of that idea (and yet it's still much more powerful than anything we have on iOS or Android).

In the "mobile app ecosystem", each application is more or less an island. It would be possible to achieve something similar on iOS/Android devices, but the entire "value proposition" of walled garden ecosystems isn't compatibel with this idea of open and creative computing. One prerequisite is that I actually own my device and can do anything I want with it, without the platform owner getting in the way.


On iOS we now have all the same interchange mechanisms between applications as we have on the desktop. Decent file management, rich content cut-and-paste, side-by-side applications. Share sheets are great.

The main limitation is the interaction model, and that's just down to the mechanics of touch interfaces. Firstly, fingers are imprecise big fat squishy blobs. Secondly, the tablet form factor doesn't afford a large physical keyboard. These are the impediments, and they're just facts of the form factor. The new multi-tasking interface in iPadOS looks like a great step forward, but it's always going to be an issue.


iOS is also great as a software copy protection scheme. You can't have that in an open OS without compromises.


It's basically like DRM at the device level.


A counterpoint to this is that I’ve seen people be reluctant to click where they are not reluctant to touch. That lower barrier is vital for letting people who did not grow up with tech interact and experiment with it.

Here’s one question: hover on focus vs click to focus?

Hover makes sense if you think of the pointer as a finger.


Those are just different input paradigms that have evolved over time; one isn't inherently superior to the others. Cars have a much more difficult UX than kbd+mouse, touch, gamepad or (Wii-style) point-and-wiggle interfaces, yet the relative difficulty of driving a car for "technical" vs "non-technical" people rarely comes up in discussions.


>That barrier is vital for people who did not grow up with tech to interact and experiment with it.

Who are you talking about? Senior citizens? Or maybe people outside of the US in developing countries? Because anyone under 50 in the US has grown up around tech.


Don't just assume that everyone fits that default. Remember that more than 10% of the US still doesn't have internet, and a shocking portion of the US has smartphones as their only method of computing and would be dumbfounded by a PC.


Yeah, I don't buy that, unless you are talking about the Amish or someone willfully avoiding technology. And in that case, we are not really losing "future techies" are we?

Regarding the smartphone point, again, you might be talking about elderly people or self-isolated people. Otherwise all people under 50 have used computers in their school classes and would not be dumbfounded at a PC.


You can automate and glue iOS apps together using Shortcuts, script them with Scriptable or Pythonista, etc.

It's somewhat analogous to using Automator and/or macOS' Open Scripting Architecture.


Yeah but to even try and compare that to the power of the UNIX shell is silly.


> The "walled garden app ecosystem" is exceptionally bad for this, and the UNIX shell is exceptionally good, but both are extremes.

Very true.

> It's hard to imagine how a UNIX-like flexibility can be achieved inside an ecosystem that's optimized for passive media consumption and online shopping though.

Unix-like flexibility is also extremely insecure and easy to screw up.

It’s fairly obvious that the App Store ecosystem can become, and is becoming, increasingly flexible. Things like Shortcuts and Safari extensions are obvious examples.

Unix-like flexibility will likely never come to iOS, nor should it, but it’s easy to imagine a steady drumbeat of easy to use managed points of flexibility that ultimately provide 80% of what people use Unix-flexibility for but without the insecurity or brittleness.

It’s not so easy to imagine how ease of use, security and robustness can be retrofitted to unix any other way.

Remember iOS is Unix. It’s just a matter of what they build for end users.


Y’know Samsung’s DeX, where you plug a Samsung Android device into an HDMI display and the output is not a mirror of the phone’s mobile UI, but rather a full Android desktop UI?

There’s little reason that Apple couldn’t do the same thing with iPads, where the “desktop UI” is macOS. Unify the kernels/userlands, keep the “Desktop Environments” distinct, but ship them both on iPads, with the macOS DE just waiting around for you to plug your device into a monitor.

Apple are already training us for this with the new version of Continuity — there’s little difference between “control your Mac from your iPad” and “control the macOS DE container running on your iPad, from your iPad.”

The only real differences in interaction paradigm between Continuity and a DEX-like approach, now that I think of it, would be:

- a shared filesystem

- [possibly] moving iPad/Catalyst apps freely between screens, where they swap between being fullscreen on iPadOS and being windows on macOS

This would also be a (rather-charitable) explanation for why iPadOS has never done anything smart so far when plugged into an HDMI display. If they were planning to do this, they wouldn’t bother with half-steps like giving iPadOS apps multi-display support.


I want this, but not limited to only tablets.

I want to own a single "pocket computer" and that's it, no more syncing, just a single device. I can drop it into a dock on my desk, that breaks out the I/O into a 30" display, keyboard, trackpad, speakers, and it activates the Mac desktop environment.

Unplug from the dock, it pauses everything I'm doing on the desktop, and it goes back into my pocket and uses the touch screen.



Yeah, but have it power all the peripherals of a full desktop. It could hot swap by dropping it into a magnetic dock/cradle whenever you need to "do some work", and then pick it up and walk away and all the desktop peripherals go back into hibernation.


> Unify the kernels/userlands, keep the “Desktop Environments” distinct, but ship them both on iPads, with the macOS DE just waiting around for you to plug your device into a monitor.

I want basically what you’re describing, but rather than having 2 distinct desktop environments I posit that the iPadOS environment, with a few refinements, would be better for desktop usage than macOS’ DE. Specifically, the refinements I’d want would include allowing more than 2 apps in a split view, and perhaps some reconsideration of that floating app-stack thing to make it less clunky. As a daily user of tiling window managers on a Linux desktop, floating desktop windows seem like a UI dead end that we’ve somehow been trapped in for decades. Growing existing “smartphone/tablet” UI to offer more power seems like a more realistic way to widespread adoption of a better desktop paradigm than removing “features” that most current desktop users have become familiar with.


>I’d want would include allowing more than 2 apps in a split view, and perhaps some reconsideration of that floating app-stack thing to make it less clunky.

iPadOS 15 has limited support for floating windows and more than three "windows" on screen at once. I'd posit that they are preparing the APIs for floating window support for external displays in a future version of iPadOS.

>As a daily user of tiling window managers on a Linux desktop, floating desktop windows seem like a UI dead end that we’ve somehow been trapped in for decades

As a Mac user, I agree. On my 13-inch notebook, I more or less always use applications in a full screen configuration. The notebook screen is too small to allow for effective uses of floating windows.

However, even on a 27-inch display, I still find myself using full screen applications more often than not.

In fact I'd be ecstatic if a future version of macOS stole some of the multitasking features from iPadOS. Managing fullscreen apps on the Mac is a lot clunkier than it is on the iPad.


Many people were expectantly waiting for this to be announced at WWDC 2021, especially with the M1-based iPad including silicon support for hypervisor/virtualization and the 16GB RAM / 1TB storage model. Yet nothing was announced, suggesting that the only use of 16GB RAM will be upcoming memory-consuming Adobe "Pro" apps.


IMO this is the first future-proof mobile device from Apple.

Apple has a bad habit of installing the absolute minimum amount of RAM in their mobile devices. The iPhone had 2 GB of RAM when similarly priced Androids had 8 GB. It means that RAM will be the main issue for supporting those devices in future iOS versions, and eventually they'll be dropped despite extremely powerful CPUs and huge storage.

But with 16 GB that won't ever be the case. And, of course, CPU demands will not grow significantly in the future, so the CPU certainly will not be the limiting factor. So this tablet should easily last another 10 years (if the battery can be replaced with reasonable effort).


Agreed, but not worth almost $2K for the 16GB/1TB model. Until Apple un-cripples the iPad Pro, it's better to buy a lower-end iPad + MacBook.


My iOS devices with 2 GB of RAM are still going strong today. Actually, I’m responding to you from one running the latest iPadOS 15 beta.


Adoption rates of these new features are too low to train a substantial portion of users.


You can hack macOS using macOS, but kids won’t be able to get that level of understanding of how an OS works on iPadOS.

This is bad for IT. Maybe not as dramatic as an existential crisis but it’s certainly a lost opportunity.

My friend's daughter is 10 and she could pass a job interview for a junior software developer any day, simply because her first PC ran Ubuntu (it’s Arch now).


"My friend's daughter uses arch btw."


>This is bad for IT.

Maybe for how we do IT. But consider the future where 99% of apps are developed on a tablet or phone (for a tablet or phone) using a cloud web interface with mostly low-code or no-code widget drag-and-drop.


"Low code" and "no code" solutions have existed for as long as computers exist (for instance to automate tasks, author shaders, or describe simple gameplay logic), yet they were always specialized niche solutions and never replaced "freestyle coding". If such a thing has been tried over and over again for half a century but hasn't had a breakthrough and replaced the alternatives I think it's likely that this also won't happen in the next 50 years.


I feel like many people forget how many "no-code" solutions have existed over the decades. It's been the dream that so many people have chased.

The solutions get better over time, but the issue is that the people who can code don't consider them anything more than toys, while the ones who are empowered by them now make things they couldn't before.


Even in mathematics, a field which is much more amenable to graphical representation, textual descriptions still dominate textbooks and graphics are only used to help illustrate certain concepts with simplified problems.

Part of the problem there is dimensionality: a 2D surface can only show a projection of a problem in 2D space. The highest level of dimensionality that most humans can reason about when projected onto 2D space is 3D. There are workarounds for this, such as showing multiple different dimensions projected onto 2D space side-by-side, but there are limits to the effectiveness of this and to the amount of screen available.

A graph can easily show you that a ball thrown in the air will follow a parabolic path, but can a single graph also incorporate factors such as ball diameter, wind direction, initial velocity, gravity, and atmospheric density? A creative graph maker might be able to capture some of these elements together, but not all. A program must capture them all together.
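To put numbers on the ball example: even its standard quadratic-drag model couples all of those factors in one equation (a sketch, assuming the usual textbook formulation):

    m \frac{d\mathbf{v}}{dt} = m\mathbf{g} - \tfrac{1}{2}\,\rho\,C_d\,A\,\lVert \mathbf{v}-\mathbf{w} \rVert\,(\mathbf{v}-\mathbf{w})

Here \rho is atmospheric density, C_d the drag coefficient, A = \pi d^2/4 brings in the ball's diameter, \mathbf{w} is the wind velocity, and the initial velocity enters as a boundary condition: five or six independent inputs feeding a single trajectory, of which a 2D plot can vary at most one or two at a time.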

Most programs have an extremely high degree of dimensionality, so they are innately poorly suited to generalised description with graphical approaches.

The most we can ever really hope for is visual domain-specific languages that restrict the dimensionality to manageable levels.


There's a natural limit to what may be achieved with a low/no code tool. Kids might be able to drag a few elements on screen to create charts and games, but that sort of work doesn't really advance the field.


>There's a natural limit to what may be achieved with low/no code tool. Kids might be able to drag a few elements on screen to create charts and games, but that sort of work doesn't really advance the field.

Ahhh, who said the point in the future would be to advance the field? That is how we think. Future generations might be content with a few "programming geniuses" and everyone else making simplistic apps for work or for fun.

You could see this future coming. It is already onerous to be a desktop developer on Windows or Mac. It is purposely easier to develop and deploy simplistic mobile apps that can be controlled and monitored by the "stores" that distribute them.


A bit like today, I feel like a toddler playing with Java coloured cubes, while the “real programmers” are the Linux contributors, people who write drivers and so on. Tomorrow it will be 3 layers: The C programmers, the people from Atlassian or Apple who build the platforms, and us, programming with the subset of the language they give us.


Every generation has been okay with having multiple tiers of skillsets and domains.

That's just natural. Even in the early days of computing, you'd definitely have people split between low level and high level code.

The fact is, every time new technology comes along that reduces the barrier to entry, you're going to have some people who will be lower-level developers and many who will be high-level, due to being able to do things more easily.

You can apply this to most consumer friendly domains.


> But consider the future where 99% of apps are developed on a tablet or phone (for a tablet or phone) using a cloud web interface with mostly low-code or no-code widget drag and drops

This will simply never happen. Just given the history of low-code / no-code, they are great for prototypes and small apps, but as soon as you do anything more than that, things start falling off a cliff. Of all the apps I've been contracted to build or help build, I can think of maybe one that we could have glued together using low-code. Everything else just has too much custom logic behind the scenes.


Drag and drop HW drivers, or similar, in a cloud IDE? I'm skeptical.


>Drag and drop HW drivers, or similar, in a cloud IDE? I'm skeptical.

Perhaps you are thinking too "now".

In the future, why would you personally need hardware drivers? Any hardware that you subscribe to (ownership will be a thing of the past) would have factory-installed drivers constantly updated by the rarefied few "real" programmers who work for a few dozen mega-corporations.

For that matter, I could imagine a future where there are only a few dozen hardware drivers approved and any new hardware that is approved to be sold to the public must conform to one of them.


And the path to this is a constant stream of vulnerabilities. Therefore you need constant upgrades from upstream. Therefore insurers only cover you if you are using a maintained system. And since there are distributions with vulnerabilities, insurers will only cover you for providing a public service if you use one of a few pre-approved OSes.

Or just use the cloud provider who has all the pre-approved workflows. The world just needs a little nudge to tip into regulating the provision of services to the public, and at that moment any independent OS will be dead, and we’ll be forever tethered to the cloud.

“He didn’t use a safe OS”, we’ll say of everyone using an independent Debian version and hitting a vulnerability. “It’s only fair the hammer fell on him.”


I just finished reading two articles from Agre and your comments literally make my head hurt now. I'm sorry.


Agre?


And "the few reified 'real' programmers that work for a few dozen mega-corporations" are organized in an union. Sounds like fun times!


Full app development, including selling apps in the App Store, is close to arriving on the iPad. You can use SwiftUI right now in Playgrounds, but Apple has not released the app-generating functionality yet.

It is a funny feeling working on the same Playground project on both an iPad and MacBook, but it works. The code for reading data files is a bit different, but the iPad dev experience will get better.


Playgrounds isn’t close to full app development; it’s a very powerful tool, but I don’t see many professionals jumping to build on it. Full app development is when Xcode and Instruments are available.


Well, let's wait and see. Apple wants it to be easy for semi-technical people to write apps.


It's amazing to me that Apple made a very strict distinction between touch UI systems and keyboard/mouse UI systems, right from the start, has maintained that rigorously, and has been right all along.

Their competitors and the pundit-sphere saw this as a fatal weakness, but every attempt at unifying touch and keyboard/mouse interfaces failed miserably.

So I'm not concerned about Apple trying to merge iPadOS and MacOS, because they already know it would be a mistake and are sticking to their guns until maybe our eyes and fingers get about 4x as sharp.


What do you mean? iPadOS has keyboard and mouse support. Apple sells a first-party keyboard accessory for iPad.


Like the Pencil, these are for specific subsets of use cases; they are not a primary interaction mode but more like specialist support for optional peripherals, the same way desktops support graphics tablets and trackballs. When I described Mac interfaces as keyboard/mouse you didn't say "oh, and also graphics tablets, trackballs and VR headsets". They're an edge case.


Yes... but it's still a very touch-focused system. The mouse support on iPad, you'll notice, has very little precision in where you click. The pointer is a round dot, like a finger - you can't click with pixel precision. They found a way to make a touchpad while keeping the touch focus.


> Their competitors and the pundit-sphere saw this as a fatal weakness, but every attempt at unifying touch and keyboard/mouse interfaces failed miserably.

Only because it wasn't really tried. Heck, GNOME on a touchscreen or tablet PC has a better "unified" touch+mouse interface than any of its competitors on the desktop and mobile side.


The only reason there is a separation is that it allows them to sell you two devices instead of one. That's also the reason there's no multi-user switching on consumer iPads.


Is it really amazing? Everyone has been saying the same from the start about wanting separate mobile and desktop UIs.


Microsoft's tablet strategy was literally to implement a touch-first tablet UI in their desktop OS. Meanwhile loads of companies have been trying to merge desktop UI modes into phones since well before the iPhone. Check out Android's desktop mode for a recent example. For years I remember friends telling me that 'proper' desktop OSes on tablets would kill the iPad.


I switched from a Windows-only company to an Apple-only company. I generally like macOS, but I still touch the screen on my MacBook way too often, expecting the page to scroll or zoom in.


I bought an iPad Pro recently, and I hate the fact that I can’t run any of my favorite macOS apps on it despite the fact that the hardware is more than capable. And no web browsers / browser extensions / PWAs?

It feels like a toy. Very disappointing.


Return it and get a MacBook for around the same price.

I don’t understand the point of iPad Pros being so powerful when they can’t - y’know - run even Apple’s own Pro software. (Final Cut Pro, Logic Pro)

They’re mind-numbingly expensive toys, for sure.


They make a lot of sense for creatives.

ProCreate, various image editing tools, even video editing... I know a lot of talented professionals who've switched over to iPad Pros for most of their work except 3D.

A large part of that is the Apple Pencil and very low latency, but they appreciate every spec bump because it does help their type of work out.


>> they make a lot of sense for creatives

I am a professional creative and use Apple’s software for that professional creativity, in the form of Logic Pro mainly as well as Final Cut.

The iPad’s offerings of GarageBand and iMovie are, for the work I do, about what MS Paint is to ProCreate.

No - the iPad isn’t good for ‘creatives’, it’s good for people who do digital visual art or design.

That’s... for creatives? On a pro level? Honestly.

It is super shameful that Apple’s own Pro software won’t run on a device they are calling ‘Pro’.


This. Not only that, but the newest M1 is in a league of its own and a clear path for Apple to dominate the desktop market. What Apple should do is entrench their hardware position with the M2 and M3 and heavily invest in macOS for desktop compute purposes.


The only use I have for an iPad is ForeFlight. If it ran on any other operating system I would not use Apple products. I also dislike that there's no fingerprint reader on the iPad Pro.

Still, it is nice for consuming books, blogs and video.


The latest iPad Air has a fingerprint reader on the power button, which is nice. Hopefully this makes its way to other Apple devices.


At least browser extensions will be a thing as of iOS 15.


You knew what it was when you bought it right?

As far as I know, Apple never said “you’ll be able to run your favorite macOS apps on it because the hardware is more than capable.”


In the article, 2 of 3 examples are about consumption. This just reinforces my opinion that the iPad is primarily a consumption device. My wife is a teacher and I asked her if her students, the so-called "digital natives", live up to their reputation. She said no: they were producing TikTok videos and Instagram photos at best (which can lead to a creative career, but I’d argue that’s the exception, not the rule). Even if we improve productivity somewhat, it’s offset by a huge increase in consumption. I don’t see this changing in the near future, mainly because there are no incentives to do otherwise.


Author here - first of all: I agree. I should have gone a bit more into detail about how "not sure how she's related" is using her devices. She is also doing actual work for school on the iPad (research and writing papers).


As a power user on MacOS, I dread the merge. I can only suspect that MacOS would become less power-user friendly in the name of "intuitiveness". I hope I am proven wrong.


I saw this coming in '19 or earlier when they deprecated 32-bit software. I started moving away from Apple then. Now I still have my functioning 2017 Mac but I mostly use newer Windows/WSL computers, and my new work computer is an X1 running Linux. It is not hard to switch except for the lack of emacs keys for cursor control. (Unix user since '88.)


> I can only suspect that MacOS becomes less power-user friendly

That's pretty much been the trend for years now.


I agree. My personal plan is to ride it out and then maybe find a machine that works well with Linux. I can't go back to Windows unless MS rereleases Win7. Win10 is crap and Win11 appears to be more of the same (i.e. even worse).

More to the point, this seems to be a general trend: we have so much better technology by now, but a worse society. That is something I don't know how to get out of.


It's just laziness / QA problems at Apple that prevent high-quality apps.


No, it's more about the inherent limitations of touch-centric UIs, combined with limited multitasking (full-screen apps), and trying to hide the filesystem from the user.


The file system hasn’t been hidden for years. Unfortunately the Files app and the components that interact with it are completely anemic, so it’s still a pain to work with.


Maybe they just can't afford QA? Shareholders are sometimes demanding; end-users mostly aren't.


LoL @ "Perhaps Apple can't afford..."


There really should be some syntax for jokes on the internet.

One laugh is all I've got for this one…


Apple can afford anything in the world, except degrading quality / reputation.


Apple can afford a small country, and shareholders will certainly support an Apple management that insists on adequate QA.


I think iPad is the one product that Apple is the most proud of and it shows. The 2021 iPad Pro is the most sophisticated product in Apple's lineup. It's the smoothest, fastest and most elegant software they have ever made.

I know some people who used to work at Apple and they said it was clear that Apple wanted to let the Mac die and focus primarily on iPadOS and iOS but regular users and professionals kept buying Macs so they were forced to revert their attention to it again.


I would argue that this is just a "timing" thing. If we look far enough into the future, I would bet that almost 100% of regular tasks could be done not only on an iPad, but on an iPhone.

Human comes to work => docks/AirPlays their system to external monitor => does work in browsers/office apps. We are pretty close to this reality, tbh. The limiting factor is mostly software.

As for computer pros... we will probably be forced to use other vendors (sad) or use remote systems for compute tasks while using our phone/tablet as a thin client.

Thanks, Apple, for not jumping on that train just yet!


Yes indeed, this makes a lot of sense for customers but then Apple would lose money only selling one device to most people (and also maybe a watch).

The last time I visited my Dad, I tried plugging my iPad Pro into his new Apple 6K monitor - really nice! Some apps like Mosh use two displays fairly well.

Even with limited multiple windows, it was nice. Then playing Apple Arcade games on the 6K monitor was visually stunning and fun.

If I had only one device that could function as you and I want it to, it would be an iPhone or new iPad Pro that also functioned as a cell phone. I use the smaller iPad Pro, and it really is portable. Probably an all in one iPhone would be best.


If they would put macOS on iPads or iPhones it could be a reality today, but I see no indication that iOS/iPadOS are going to be suitable for this any time soon.


> Human comes to work => docks/AirPlays their system to external monitor => does work in browsers/office apps.

This sounds like Samsung DeX?


OQO had much of this working ~14 years ago, but I don't think it met much of the phone replacement functionality that it would have needed.


The signs are all there: MacOS may well not exist even 5 years from now.

The merge will absolutely happen, but it will be less of a merge and more of a takeover. iPadOS apps running on MacOS are just the thing to ease you in. Eventually, at a WWDC down the line, we'll hear the words “Most of our legacy MacOS users are spending most of their time in iPadOS native apps…”, and your old versions of Photoshop or whatever will be shifted to live in an emulation layer, like Classic OS 9 apps, and eventually be removed completely.

If you think I’m wrong, forget your own opinions and prejudices about the iPad and walled garden computing, and imagine you’re an Apple exec who gets to see the earnings charts for iPhone, iPad and MacOS. One of these things is not like the others in both usage numbers and profit.


> imagine you’re an Apple exec who gets to see the earnings charts for iPhone, iPad and MacOS. One of these things is not like the others in both usage numbers and profit.

You are correct, one of those is not like the others, but it isn’t the Macs. According to their latest earnings report, iPhones made Apple $39.57B, iPads $7.37B, and Macs $8.24B [0]. I wouldn’t be surprised if iPadOS and MacOS merged in the near future, especially since they are running on the same chips, but it won’t be because iPads are the ones making all the money.

[0] https://www.cnbc.com/2021/07/27/apple-aapl-earnings-q3-2021....


>According to their latest earnings report, iPhones made Apple $39.57B, iPads $7.37B, and Macs $8.24B

These are just device sales, right? You have to remember iPads have App Store revenue and the cream-scooping of IAP/subscription revenue from most applications running on them.

Apple doesn't get a cut of my Photoshop subscription on Mac, it absolutely gets a cut when I buy Procreate on iOS.


Apple is getting services revenue on the Mac.

At least half the Mac GUI Apps I use are delivered via the Mac App Store. That number moves towards 100% for my non tech friends.

I continue to use iCloud because I have a Mac.


I use a Mac, in tech, subscribe to no services, and do not use the Mac App Store. There, now you have one more data point.


In that case, if the Epic lawsuit is successful and drives down Apple's 30% cut, it may save macOS as an open-ish platform.


Not necessarily disagreeing, but my key question here is: how will Apple handle lower-level development cases? It's not difficult to imagine a watered-down iPad Xcode shipping, but what about the tooling they use to build iOS itself?

After all, Apple has to use something to develop iOS, and I think it's unlikely they'd create a special in-house operating system for the exclusive use of their OS development teams that's totally different than what they ship to consumers.


Why are you assuming iPadOS wouldn’t support low-level development?


I'm sure that they could just use Linux for that. I wouldn't be surprised if they're using Linux right now.


Almost all Apple engineers use Macs.


I’ll believe it when Xcode runs on iPadOS without compromise.


iOS will replace macOS; the writing has been on the wall for a while now. iOS is where the money comes in and where most usage is, and maintaining dual development APIs is painful and costly.

Apple doesn’t need to rush this process, but it is inevitable IMO, and is foreshadowed by actions like allowing iOS apps to run on macOS, moving macOS closer in UI to mobile, and adding essentials like file handling to the mobile OS.

At this point the underlying OS is the same, the UIs are converging, and the UI frameworks for iOS are almost capable of replacing those of macOS. It would be relatively easy to merge them in the next few years, keeping some extra layers of UI for Macs but merging most of it, and certainly the dev frameworks. We may see touchscreen Macs or dockable iOS devices first, though.


I think you have it backwards: macOS is going to replace iOS. Docking is a good reason for that. With the advancements in Apple’s silicon, we’re seeing more and more features from iOS added to macOS, but not the other way round.

I would love a dockable iPhone, I’d pay a big premium for an M line iPhone that has the full MacOS in dock mode.


The money and power within Apple have all gone to iOS over the last ten years, and developer interest is now overwhelmingly in iOS. That alone will dictate the direction of travel.


The iPhone is Apple’s leading revenue maker, but it’s closely followed by their services. Absorbing iOS into macOS makes the most sense: they can sell devices that are iOS-ish only, and then sell premium dockable iPhones with macOS. It would also help developers focus on a unified platform without alienating them with an entirely new tool suite.


I think that’s the best argument. The point for many people is that the services are so good. You text someone on your phone and it shows up on your desktop. Even though macOS doesn’t make as much money, it makes iOS more valuable.


Most developers develop for iOS nowadays, not macOS. macOS is increasingly an afterthought serviced by Electron wrapper apps. I think it's much more likely they'll replace the macOS SDKs with the newer iOS equivalents.

You can now write an iOS app which runs on the desktop, and I suspect that's the direction Apple will promote - write once for iOS and it'll run anywhere, including Mac OS.

They'll obviously keep the brand Mac OS, as they do for iPad OS, but it would become the same OS as iOS once the SDKs are the same, the underlying OS is the same already (Darwin), and many frameworks are shared.


Agreed, it's hard for me to imagine any realistic price that Apple could put on that product that I wouldn't pay.


Hmm. Seems likely in a way, but I don't see how to square this with the professional crowd like software devs.


The appearance of a trend does not guarantee the continuation of that trend to its logical extreme.

I'd argue that they're just doing the obvious: refining and maturing iPadOS, and unifying the vibe of all their products. This makes sense. Just as one band influencing another doesn't mean that the two bands will eventually converge into an amorphous blob, two products by the same company influencing one another doesn't mean that the two will become a single product.

I'd just take Apple's word for it that they've maintained all along: they're letting the device do its thing as best it can. If iOS were swallowing everything, then why did iPadOS split out into its own thing? People don't give enough credence to the fact that these platforms are splitting just as much as they're merging.

Also, even if all devices were going the way of "dumbed-down iOS," someone still needs to develop for it, and you best believe that Apple won't loosen their grip on that segment. Just as Mac Pros are loss leaders to keep the creative class, macOS devices in general will stick around, hell or high water, just to retain the developer class.

I for one welcome the iPadOS-ification of macOS, the macOS-ification of iPadOS, and so on. They're just allowing the products to slowly evolve and (ideally) be the best version of what their interface dictates, just as they've always done and always said they're doing.


They run the same underlying OS; in many ways it doesn't matter which replaces the other. The UI chrome and APIs are the main differences, and something like Xcode could be run on iOS without huge modifications.

I do think at some point they'll merge them though - maintaining two APIs which are distinct but incredibly similar is a waste of effort and confusing for developers.


I for one ain't bothered: it's clear to me that Apple needs tools to engineer these spectacular devices, and there would be little or no sense in them designing away the power of macOS and landing on a product range limited to simple consumption. Does anyone foresee a time where Apple migrates to something like Lenovo laptops or Windows or some scraggly Linux desktop environment? Apple won't be losing these tools any time soon. At worst we might see dual-mode MacBooks that can be switched between macOS and iOS paradigms.

Above and beyond that fairly self-evident conclusion, there is plenty of room for more sophisticated interaction with iOS devices that maintain the basic interface but also provide extremely sophisticated data manipulation capabilities. Perhaps even more powerfully than our beloved UNIX shells - think something like AI assisted voice interaction where you can easily state a pipeline verbally: "take results from A that contain N and modify them by X and then sort them by I and make a graph and paste it to my document". (e.g., grep | sed | sort | gnuplot | paste >> example.doc)
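
For the curious, here's a rough shell sketch of what that spoken pipeline might translate to today. This is purely an illustration: the file name, the pattern N and the sed edit are all placeholders, and the data is assumed to be one number per line by the time it reaches gnuplot:

    # "take results from A that contain N, modify them by X,
    #  sort them, make a graph, and paste it to my document"
    grep 'N' A.txt |            # results from A that contain N
      sed 's/foo/X/' |          # modify them by X (placeholder edit)
      sort -n |                 # sort them numerically
      gnuplot -e "set terminal dumb; plot '/dev/stdin' with lines" \
      >> example.doc            # ASCII graph appended to the document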

That sort of thing is potentially just a few iterations away and simultaneously more powerful and more useful than text mode pipes, if only for the fact that the user wouldn't be required to memorize thousands of cryptic flags/switches. The same interface could be used to string these directives together to form scripts and set jobs, etc.

This isn't meant to be a specific prediction, by the way, just one glimpse into the idea space. There are so many good ideas that people haven't had yet... it just staggers the mind to consider the potential. I'm just skimming the surface but surely there are so many ways to marry the insanely intuitive discoverability of something like iOS with the equally awesome power of the UNIX philosophy.

But it's up to us to find them, rather than get inflexible and grumbly and say it can't be done, or Apple is stupid, or whatever nonsense take you might jerk your knee towards ;)


I’m pretty excited for the day I can do my programming job on an iPad in addition to my laptop. Apple has captured the software engineering market very well. Even the Microsoft F# shop I worked at had Apple hardware on most desks running Parallels or Boot Camp, and the two jobs I’ve had since were standard-issue MacBook Pro.

The pipeline you described reminds me of Apple Shortcuts - it’s easy and useful. Most recently I made a GIF out of a bunch of photos on my phone, a simple task that used to require a far-from-ideal app (ads, a black box that could be hiding analytics and tracking).


I see a bunch of posts about the iPad being a "consume"-only device, and that's totally wrong.

I've had an iPad since day one, generation one and took notes with an aftermarket stylus. I used Keynote and OmniGraffle to make slideshows and diagrams.

I even bought an Apogee JAM and was able to record my electric guitar in GarageBand and used several other apps for guitar effects and simulated amplification. My favorite was AmpKit+ [http://agilepartners.com/apps/ampkit/].

The iPad is a great creator's device and has only improved with the Apple Pencil and other enhancements.

That being said, if Apple keeps moving macOS towards iPadOS I'm out. The iPad is a complementary device as I use it. It can not replace a notebook computer and my notebook computer can not replace my desktop.

There are definitely people that can get by with just an iPad or even just a smartphone, but I can't and will not. There are still many options for non-Apple notebooks and workstations and many more OS options too.


To me the real question is: if a machine running macOS existed in a Surface-like form factor with touchscreen and pen support, would it be better than the iPad at the tasks you just described?

I'm finding it harder and harder to justify that the divergence even needed to happen at all, when more and more iPad users are just interacting with it via a keyboard case and trackpad.


You're always consuming someone else's software on it though. Being able to rip out and re-implement even parts of your OS gives you a radically different view of computers than the one you get from an OS that doesn't even let you write your own alarm clock.


Another paradigm not often discussed is to keep iPadOS and macOS distinct but offer macOS as an app (hardware virtualized vm) on iPadOS.

This is a transition similar to MS-DOS under Windows 95 (successful), the Unix terminal on Mac OS X (successful), or the Windows 8 desktop running inside the Windows 8 tablet Metro shell (unsuccessful).

Most people just need macOS for 10% of their computing needs — though that 10% is unique to each person — and an iPad Pro able to access macOS when needed would be enough for peace of mind.

On the flip side if Apple just added cellular and Apple Pencil support to the Mac (not even a touchscreen/pen supported screen; just Apple Pencil support for the trackpad) it would decimate sales of iPad Pro.


Maybe the future will be 90% of normal users using dumbed down touch devices, and the rest of us “power” users who need to develop stuff will be using a normal OS.

Maybe the normal OSes that we all grew up with are actually overkill for these other users, and they don't even want them. Why would they need a shell? Or the ability to install arbitrary programs? They only need email and Word, and that's it.

Maybe this is not a bad thing in the end? Who knows.


It's sad that we can only think of touch screen interfaces as "dumbed down". The affordances of a touch-centric interface have yet to be fully explored; at least in principle, there's no reason why they could not be made just as capable as a keyboard+mouse.


It all comes down to text input. Steve Jobs launched the iPhone with only a touch screen but he launched the iPad with a hardware keyboard accessory (discontinued after a year but then reintroduced with the iPad Pro). So with iPadOS it’s always really been touch+keyboard and frankly mouse+keyboard is better.

If Apple managed to offer a revolutionary input method for the iPad (perhaps subvocalised speech input or a redesigned touch/pen input), it would be the first step toward a touch-centric interface as capable as keyboard+mouse.


Gestural text input is very much a possibility. It was available on palmtop devices some 20 years ago, and these had far worse touch sensitivity than any modern tablet or even a modern phone.


It depends on whether you think computers should be tools for empowering people or subjugating them.


But isn't that too far-fetched? Most people don't care about anything other than checking email and playing Candy Crush. They don't think about empowerment or subjugation or changing the world.


Even if it is not done daily, being able to tinker with your system is fundamental. You don't have to be an expert yourself; most computer users know someone with in-depth knowledge, or can just follow step-by-step instructions from the web or, imagine the concept, a computer book :) This allows a level of control over your computing environment that you just don't get on an iPad.


Beware of self-fulfilling prophecies, especially this one:

https://en.wikipedia.org/wiki/Learned_helplessness


The article does a good job summarizing what works on an iPad and what doesn't! To that, I will add that iPads are designed as a "single user" device. Sharing one between family members means you have to let them use your Google, Spotify and iCloud accounts.

Now that their hardware is all lined up between laptops and mobile devices, I fully expect the merge to happen in a few years. I feel Apple's software has been stagnant for too long and I won't be buying new hardware until then.


> I feel Apple's software has been stagnant for too long and I won't be buying new hardware until then.

Odd. I'm still happy to use Mac OS High Sierra ... Yosemite even. I wonder with each OS release why I would even want the upgrade.

Mainly because apps I use (Affinity, etc.) stop working (or stop trying to work) for older versions of Mac OS.


> I will add that iPads are designed as a "single user" device.

There are tools available to schools that allow multiple logins on a single iPad. It would be nice if they had something similar for families.


I need a real computer. I don’t mind a few UI ideas being borrowed where they make sense but if I can’t run anything I want on it and can’t multitask or combine things together it is not a real computer.

iOS works fine for a phone. It’s worthless for my job or my personal interests.


That's kind of the beauty of an iPad. It isn't for everyone. It hasn't been designed as a lowest-common-denominator type of device. There are a few things it's great at and those aren't necessarily the types of things a lot of people want.

I have a 2018 iPad and it's easily my favorite computer to use. With a keyboard, it's lovely to write on. With the Pencil and Good Notes it's a fantastic note taking device. Get ProCreate and it's an amazing combination for sketching and drawing. By itself, it's a great device for reference manuals and videos when I'm doing something like fixing my washing machine.

I don't read much on it because I have a dedicated e-reader. I don't play games on it because I have a console that's better for gaming. I don't write software on it because I have a workstation and laptop configured exactly for that purpose.

I think as they make changes to broaden the iPad's appeal, they may be undermining it.


The iPad runs the same iOS as the iPhone, just with a bigger screen. It's not a real device for work. There's no bash on an iPad. There's no Docker on an iPad. There's no accessible filesystem on an iPad. You can't connect an Arduino and program it with an iPad. You can't install a Java IDE and use it. It's infinitely far from being usable. And making it usable means turning it into macOS.

The only possible way for the iPad to become slightly useful is to provide a proper macOS in some way. But it's still useless because of terrible thermals; it will throttle under any serious load. Even laptops are barely able to keep up with the ever-increasing demands of developers. A tablet is just not made for work. It's good for reading a book, that I'll admit. Even the browser is barely usable; there's no developer console to kill that annoying div.


When you say it isn't a real device for work, remember that you are thinking only about your work. There are lots of people who get plenty of their work done on an iPad.

> The only possible way for iPad to become slightly useful is to provide a proper macOS in a some way.

Why would they do that? They already sell portable macOS machines.


I agree, and the same applies to the Mac and anything that narrows it. They are for different audiences.


Interesting take. There's zero doubt that iPads are incredibly more intuitive to use, and anecdotally, I've also seen the same observations regarding the young and the elderly. I think a good portion of that is the indirect pointing method (mouse or trackpad), which takes some getting used to, but there are certainly inherited conventions, features based on past limitations and other things that look just weird to a new audience picking it up.

It's certainly not impossible, but one could argue that some oddities could be cleaned up. Releasing software as a dmg on the Mac is really terrible for new users (and I hate that I did it for Aerial's companion app); it's inherited from the olden days of CD-ROMs, and the fact that you have to unmount it is really something that could be improved on.

Now, one could argue Apple has very little incentive to provide something better and push for a new norm (most software nowadays ships as a zip and detects if the user launches from the Downloads folder, complaining or moving the file into Applications for you), but that's definitely a relic of the past that, while you can still support it for legacy reasons, should be phased out for something better officially pushed by Apple (that is not just the App Store).
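
For reference, here's what the dmg ritual actually amounts to when written out; the app and volume names below are placeholders:

    hdiutil attach MyApp.dmg                       # mounts the image under /Volumes
    cp -R "/Volumes/MyApp/MyApp.app" /Applications/
    hdiutil detach "/Volumes/MyApp"                # the unmount step new users never find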

The Apple approach to feature changes/simplifications on macOS, though, seems solely based on design. Hiding the document proxy icons on windows in Big Sur is a good example of this [1].

If you've seen the Monterey betas, Safari tabs are also losing functionality (favicons worked a bit like document proxy icons; that's gone now), and while they rolled back some of the most egregious changes (they had made an utter mess of the toolbar buttons in beta 1), the new UI is still clunky in terms of general usability and readability.

I guess it looks pared down and consistent with the new Safari iPad UI, but do those changes help new users in any way to get around the inherited "peculiarities" of the Mac? I don't think so.

And when Apple does "line up" features to make them consistent across platforms, it's always the Mac that loses. The phasing out of plugins in Safari in favour of app extensions killed uBlock Origin, for example, and the alternatives are certainly not as great.

[1] : https://daringfireball.net/2021/07/document_proxy_icons_maco...


Have you tried creating a .pkg file instead of .dmg? Much better user experience, in my opinion. The developer controls where the .app goes, and there’s nothing for the user to unmount when finished.
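
For anyone curious, a minimal unsigned sketch with Apple's pkgbuild; the paths and identifier are placeholders, and real distribution would also involve productbuild plus signing/notarization:

    # build a component package that installs MyApp.app into /Applications
    pkgbuild --component "build/MyApp.app" \
             --identifier "com.example.myapp" \
             --version "1.0" \
             --install-location /Applications \
             MyApp.pkg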


This was dumb when Windows 8 did it, and it remains dumb.

Unifying an OS and its GUI/UX layer across devices with fundamental differences like this gives you lowest-common-denominator crap.


It's no surprise that inexperienced computer users can get more done with a mobile UI than with macOS (or, for that matter, most classic computer UIs). The big difference is that to operate a mobile device, you don't have to be able to read and understand what the words mean. That doesn't mean a mobile UI is the right one for every laptop, or for a MacBook.

We've seen many tries at unifying the design language between mobile and computer: ChromeOS, Windows 10, Android's desktop mode and Ubuntu's Unity are all mobile/desktop UIs. What makes them work (to the extent they do work) is that most devices have touch screens (well, not so much with Android in desktop mode). Without a touch screen, optimizing the shape of UI components for touch is actually a bad experience, as a pointing device works very differently from a touch screen.


There are a lot of reasons for having different UIs to start with, between a tablet and a laptop or even a desktop computer. Though I think productivity could be better on the iPad in this regard too.

But the very basic problem is file handling. It somehow varies across apps: some have local storage to themselves, hidden from everyone; some have visible local storage; others support Files/iCloud. While Files can be cumbersome on its own, if at least all apps shared their data in a way accessible through Files, it would be a start.

The best example, which keeps bugging me: music. I ripped my CDs into my iTunes library and consequently into Apple Music. Now Apple Music has decided to delete some of my tracks from its library. How do I get them back? I was even able to put them into my iCloud Drive, but there is no way to add music to your iPad this way. Is there any other way to add tracks to Apple Music? Is there any way to get iTunes, which still has all of them, to upload them to Apple Music again? (Never mind that I'd need my Mac for that, destroying the notion of the iPad being stand-alone.)

Same situation with videos: how do you copy videos from iCloud into the iPad's apps? And of course, none of these apps offer something like an "open file" mechanism.

I quite like my iPad Pro, but I never figured out how to really "do" things with it. The only positive exception seems to be the "Working Copy" git client, which enables some apps to be used productively.


True, these things seem intentionally complicated, since they want to push Apple Music/TV subscriptions. If you still haven't figured it out: you can put iTunes music onto your iPad, but it requires a ~$20/year subscription called iTunes Match. So yeah, you can't do it for free using their suggested methods.


Well, that is the point: I have the Apple Music subscription, which includes iTunes Match. That is how my music got into Apple Music in the first place. But later on it deleted some tracks that I cannot add back. For years now I have been looking for a way to do it. The problems only started after I subscribed to Apple Music.


If the latest iPhone chip is as powerful as the Intel chips used in MacBooks, why shouldn't I be able to connect an HDMI-to-USB-C cable to my phone and get a full macOS machine on the external display? (Samsung DeX does this, as did the Ubuntu phone from 2014-15.)

Same question with Android phones and Chrome OS.

What's the technical challenge?

I feel the only reason Apple won't do it is that then, instead of buying 2 to 3 devices, people would buy just one, which would hurt their revenue. (That doesn't explain why Google won't, though.)


They can’t get rid of a general-purpose OS. They have to have products that allow you to write software for the iDevices, and web applications in general. You need macOS for these things.


Not really. They can set up cloud boxes for that (with pre-installed libraries and simulators) and let Microsoft, HP, Dell etc. serve their beloved developers with crappy low-margin hardware.

All development is moving to the cloud nowadays. The local machine will soon become irrelevant.


Virtualization.

I have never needed to access the shell on my iPhone, but to do dev work I obviously need it on my Mac. If you look at the rise of Docker and other such things, though, running stuff on the core OS is not always optimal. It would be just as good to have a virtualized machine running whatever OS I want and be able to take it around with me. When I can have that, I don’t really care how locked down they make the OS, as long as I can stay productive.
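
As a sketch of that workflow with today's tools (the image and paths are just examples): the whole dev environment lives in a container, and the host OS barely matters:

    # disposable dev shell: mount the current project, leave the host untouched
    docker run -it --rm \
      -v "$PWD":/src -w /src \
      ubuntu:20.04 bash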


Funny that the author suggests defaulting to showing all apps as if it were mobile. Windows 8 tried that; it didn't work out too well.

I do think it would be great to unify though, maybe find a way to keep everyone happy. It would be great to just have an iPad (or any tablet device) and be able to just use a bluetooth mouse and keyboard, and be able to run any IDE, game, or app you want.

As computers get more powerful, it might not even be necessary to have a full-blown desktop PC for regular personal use.


Talk to a CS professor and they will tell you an OS is what sits deep inside the machine, unseen. Read a review on Ars Technica and you would think an OS is about appearances and the exact shape of each UI component, etc.

My understanding is that the kernel is basically the same already, it is the services stacked on top that are different.
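
You can see this from a Mac terminal; the kernel identifies itself as the same Darwin/XNU lineage that iOS builds ship (exact output varies by release):

    uname -srv                          # e.g. "Darwin 20.6.0 ... root:xnu-7195..."
    sysctl kern.ostype kern.osrelease   # kern.ostype: Darwin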


iOS has always run a stripped-down XNU/Darwin based on the original CMU Mach blueprint, right from the first version.

In fact, Steve Jobs mentioned that in his iPhone introduction keynote in 2007.


It's completely practical to ship the (almost) same kernel with a different userspace. That's the difference between desktop Linux and Android, Xbox and Windows, etc.

In fact, if you care about performance and reliability (in particular, adding more cores and getting more performance), there is no option other than to build on a mature kernel. Google's Fuchsia, for instance, is a high-risk project which might never reach parity with Android.

(I think of Microsoft's "dual track" OS strategy that took 5 years to merge Windows 95 and Windows NT)


> Google's Fuchsia, for instance, is a high risk project which might never reach parity with Android.

If you write an app using the Android APIs and not native code then it wouldn’t matter if you were really running a Linux-based Android or a Fuchsia-based Android, as long as the API contracts were met. Google knows this — it’s not impossible to imagine that one day Android might not be Linux-based.


It's possible, but Google has to get all the details right to make it "good enough" if not better.

For instance, they have to get the story right with respect to drivers and the needs of handset vendors, carriers, etc. Performance has to be good. Microkernel and capability-based systems are notorious for having bottlenecks.

Personally I liked the original WSL from Microsoft a lot, but Microsoft didn't feel it was good enough, because filesystem metadata operations are much slower in Windows NT than they are in Linux. If you do something that involves an excessive number of files (say, building the Linux kernel), some people find the performance of WSL unacceptable. Yet nobody complained about the slowness of filesystem metadata operations on Windows before; people figured that was just the way it was, might not even have known some other operating system was faster, and they stuffed data into SQLite and otherwise reduced the number of files they handled.
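
For illustration, here's a crude metadata-heavy micro-benchmark of the kind that exposed the gap; numbers vary wildly by machine and filesystem, so treat it as a shape, not a result:

    # create, list and delete 10,000 empty files; dominated by metadata operations
    time sh -c '
      mkdir -p /tmp/meta && cd /tmp/meta
      for i in $(seq 1 10000); do : > "f$i"; done
      ls -l > /dev/null
      rm -f f*
    '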

Microsoft felt they had to do something about it and came out with WSL2, which is inferior to WSL and to just running a normal Linux kernel under Hyper-V, VirtualBox, VMware and the like -- because it is still closely coupled to Windows in ways that make "it just works" elusive. (e.g. just requiring that you install it from the Windows Store means you can't install it if your Windows Store is b0rked, which mine often is)

So Fuchsia might be the future, but getting there involves confronting a lot of details that they might not want to confront. If there are ten critical "non-functional requirements" and they only get nine of them, they are doomed.


As long as I can still sideload the same applications on macOS, I'm OK with this. Even if it's only enabled on the macOS "side", having a unified OS on unified hardware would be a huge step forward in my opinion.

Having Xcode on an iPad would be convenient as well, though it's not the only IDE I use, and I would still ultimately prefer a proper laptop/desktop form factor for "serious" programming.


Depends on what you mean by "unify"? Apple has locked up macOS more and more over the years, and I guess, or fear, it is just a question of time until you can no longer install apps outside the App Store. After that, the difference between macOS and iOS is just the UI.


iOS is undeniably the one that will lead and determine what the future looks like. In terms of money, iOS makes almost 5x more than macOS, excluding all the services and wearables revenue iOS brings in as well. It makes a lot of sense to continue investing heavily in iPadOS to attract more advanced users like devs.

The M1 iPad Pro is the first baby step toward bringing more macOS-compatible pieces into iPadOS. Eventually it could pull some or even most macOS users over to iPadOS.


I've written about this before, in "The Mac and the iPad aren't meeting in the middle yet":

https://micro.coyotetracks.org/2021/04/21/the-mac-and.html

While I won't repeat myself too much, my basic point is that Apple sees iOS (and iPadOS) devices as application consoles and Macs as general purpose computers, and there is no good business case for changing that any time soon. The Venn diagram of "users likely to walk over such a drastic change to the Mac" and "users likely to spend boggling amounts of money on Apple hardware" is close to a perfect circle, and the accounting department would probably not be super keen on betting that the increased services revenue from shoving all app sales through the App Store would make up for the lost hardware sales. You need 15–30% of a hell of a lot of apps to make up for a single lost 16-inch MacBook Pro sale, let alone a Mac Pro.

Furthermore: given all the radical changes Apple made to the Mac in 2020, that still feels like the "now or never" moment. I wrote back in April that "if M1 Macs and macOS Big Sur didn't lock us into an App Store-only world, it's pretty unlikely macOS Pismo Beach or whatever is going to." Well, it's a year later, and macOS Monterey still isn't. Maybe in a year or two macOS Fresno will come along and prove me wrong, but I'm pretty confident it won't.

The flip side of this is that I don't think iPadOS is going to be opened up. I also wrote that I didn't think we would be able to run macOS apps on M1 iPad Pros the way we can run iOS apps on M1 Macs; that's holding true so far, too. I'll note here, though, that the Hacker News crowd has specific ideas about What Makes a Real Computer that I don't think are widely shared by the non-engineer crowd, and that honestly a lot of you don't have a clear idea of how much automation and app interoperability is possible within iOS's restrictions. I prefer using the Mac, in no small part because I've been using Unix for close to three decades, but it's startling how much I'm able to do on the iPad even with its current nerfball limitations.

And, sure, the obvious objection is that "application console" is arbitrary, and it is. But isn't "game console" just as arbitrary? I mean, a PlayStation 5 has an 8-core CPU with 16GB of RAM; you can't develop software on it because Sony won't let you, full stop. We're more annoyed about that limitation being on the iPad because the arbitrariness feels more obvious, because we didn't buy the iPad "only" for gaming. But on a technical level, there's not a whole lot of difference.

The linked article makes the prediction that Apple is going to be trying to drive more and more people to the iPad and away from the Mac. I don't buy that, simply because the evidence just doesn't support it. They have literally just reported the strongest quarter of Mac sales in the company's history. It's not just that they're moving to their own CPU architecture, it's that they're in the process of rolling out new industrial designs for the entire Mac lineup. This is not what you do if your business goal is to have Mac sales taper off!

My feeling now remains the same as it did, er, all the way back in April: iPadOS and macOS are never going to merge. In the long run, there is going to be an operating system that replaces both of them, and the groundwork for that new OS is being laid out now. But it's not going to be here any time soon, and nobody (including me) should be making confident predictions about what that new OS will be -- what it will and won't do, what it will and won't allow, how locked down or open it will be.


It might currently make Apple more money, or at least appear to. But I really think they are missing the boat big time by gimping the iPad with built-in limitations. The hardware is fantastic, and by starting with a completely new OS, they had the chance to grow it into an entirely new computing device. But by limiting the iPad to apps which have to adhere to arbitrary App Store rules, they might miss the next big step. Which could cost them way more revenue long term than the short-term profit from people who buy both a new iPad and a new Mac.


Locking users into the App Store will not happen on Macs, because users just won't upgrade and buy new hardware. Who needs a computer they can't control? That's absurd. Apple might hide the switch that enables full control over the computer, but that's about it.


It's vanishingly rare for me to read someone's comment that starts with a link to their own blog and then to think, "I should see what else this person has written." Well done, you.


appleOS


Does macOS scan users' data like iOS does (and report to the government)?


macOS 12 will also scan uploads to iCloud, and I'm pretty sure iMessage for minors will have the backdoor active as well. So macOS is as vulnerable as iOS.


No it does not.


for now, that is


iOS does not do this.



