IMHO, the direct competitor of Server.app in the "SMB office server" use-case is actually a Synology NAS appliance. Synology's set of first-party OS packages has 1:1 feature parity (and then some) with every function Server.app either has, or used to have.
And that's the key-word here: appliance. A NAS is a standalone box that receives automatic updates, is 100% remote-management enabled by default, can be easily reset to factory settings, and can't be messed up by employees who think it'd be neat to run local native apps on it, "since it's a computer just sitting there."
Plus, since a NAS isn't going to be doing any desktop-OS things, the software for it can be slimmed down enough to run on lower-specced hardware, making the appliance itself much cheaper than the sort of machine required to run Server.app smoothly.
No SMB office-manager who needs this kind of functionality these days would buy a Mac or PC, to set up server software on it and leave it sitting there headless in the office. They'd buy a NAS.
And, IMHO, that's why Apple haven't kept Server.app up to date. There's just no demand for it any more. Not because of the cloud (which is mostly a complementary use-case) but rather because this software-solution vertical has now evolved into a turn-key hardware-appliance solution vertical. And it's one with slim enough profit margins that Apple doesn't want to compete.
> And, IMHO, that's why Apple haven't kept Server.app up to date. There's just no demand for it any more.
This seems to be a primary driver of the software-rot found all across Apple's ecosystem. iOS is generally pretty well-maintained; macOS less so. Apps like Music are kept in better condition than, say, Podcasts. Etc. You can almost directly correlate how unpopular a piece (or feature) of its first-party software is with how buggy it is. Which I guess makes some amount of sense, but it really tarnishes Apple's image as a company selling high-end products.
To be fair, their hardware standards have held up much better over the years than their software standards. Most recent hardware problems have been design choices, like the keyboards and ports shenanigans. The execution is still rock-solid.
In software, on the other hand, the latest macOS update introduced a bug in Notes where now if I hit the Tab key (entering an actual tab character), that whole line of text becomes invisible. Just invisible. I can highlight the text and see that it's still there, but without highlighting, it's invisible. It's very reproducible; this is not an obscure corner case. And this is at a time when Apple is attempting to ramp up its online services like Notes.
They didn't fail after a year or so; they failed when dust particles got in there, because it was physically impossible to make that design not do that. They reworked it and reworked it, but it just couldn't be done. Finally they admitted it was a mistake to require that level of thinness from a design standpoint.
But putting that aside, I still see that as an execution flaw, or at least, it's a way in which design and execution are so intertwined that the distinction no longer makes sense. A computer with a non-replaceable keyboard that breaks after a year is a badly-executed product. The way to execute it better would be a keyboard that doesn't break, whatever design decisions that entails.
The way I see it is those doing the executing were handed an impossible constraint. Immediately upon having that constraint loosened, the keyboards became pretty much perfect.
Okay, I just experimented a little further (I didn't bother the first time, just switched to a different text editor), and it's slightly narrower than I thought:
1) Enable dark mode
2) Open a new note
3) Tab twice, then type text which will be invisible
It appears to be any line with two or more tabs of indentation. Not two or more relative to the previous line, but two or more period.
Research In Motion (RIM) later renamed itself to BlackBerry Ltd., and 37Signals renamed itself to Basecamp. I wonder if at some point Apple Inc. will rename itself to iPhone Inc.
Apple has already changed their name twice: from Apple Computer Company to Apple Computer, Inc. to Apple Inc.
The biggest part of their business has shifted from computers to phones, but I doubt they expect the iPhone to be their biggest product forever, so I don’t think they would pigeonhole themselves when their name and logo are so ubiquitous.
Research In Motion meant nothing for most people while the Blackberry brand was strong and instantly recognizable. 37Signals was also renamed after the much stronger brand of their core product. Apple and iPhone are both just as powerful.
Companies like Apple, Nike, Mercedes, Coca-Cola will never completely change the name (maybe some variations here and there, without touching the core name) simply because the brand is so recognizable. It would be throwing money and image down the drain and it would just cause confusion.
Should have said BMW :). Then again Daimler is far less known across the world than Mercedes. My point was that when you have a strong brand you don't mess with it. You go from a weak brand to a strong one, not the other way around.
But did that have more to do with the Beatles agreeing to it rather than anything else? I was under the impression that Apple had wanted to make the switch much earlier.
The Apple brand is one of the strongest brands in the world. Blackberry was always a bigger brand than RIM itself. Doesn't make sense to abandon a big brand.
Another interesting case is when HP decided to spin off their PC business: they gave the HP brand to that new company.
That’s different. When they started out the only thing coming to mind when someone said Apple was fruit. That changed, so they dropped the now redundant “computer”.
No. When Apple Computer first started, the Beatles' "business" was called Apple Corps. This has resulted in several lawsuits between the Beatles and Apple.
I definitely used it as a DNS server internally and a few of the other functions. I was totally bummed to see them strip effectively all useful functionality from it. :-/
> and can't be messed up by employees who think it'd be neat to run local native apps on it, "since it's a computer just sitting there."
Synology actually encourage this. In particular, the Docker package is great and all sorts of uses have cropped up. There are some gnarly rough edges that I have encountered but that package and others make the unit very useful.
I’m a home user so it’s a different use case, but my 918+ is probably my favourite machine ever.
It's our digital Swiss army knife! Need somewhere to run homebridge or pihole and don't want a pile of raspberry pis sitting around? Get the docker containers for these. Ditto for Plex and Roon servers. I even found a container that works as a replacement for the deprecated software that my ancient multifunction printer needs for its "one button" scan functionality.
I put an 8GB stick into mine, giving it 12GB. Officially it only supports 8GB, but it seems to use more just fine; 10+ containers can be taxing at times.
That and people have completely switched to MDMs to provide a lot of the old functionality that macOS Server used to provide.
The popular ones for macOS are Jamf and Mosyle. There are some other ones as well that are more cross-platform oriented, but both Jamf and Mosyle more or less cover the features that macOS Server provided for administration of a set of devices.
Tbh, I don't think Server was ever popular as a NAS service. The only thing I've ever used Server for was managing devices.
Yeah this. We have a Synology NAS and it's been running for four years trouble-free. It's a little sketchy for use cases other than its core purpose, but if you only try to use it as a smart reliable RAID disk with an Ethernet port you are going to have a great time.
Their Synology Drive app is a drop-in Dropbox/GDrive replacement too. Works great on Mac at least. Haven't tried Windows yet.
We maintain an official ZeroTier package for it too, so it works anywhere.
> Synology's set of first-party OS packages has 1:1 feature parity (and then some) with every function Server.app either has, or used to have.
We've seen this a number of times with macOS, and some other Apple adventures. Apple will put out something; then, if a third-party offering comes along that is equal or better, it cancels the Apple product, expecting everyone to switch to the third-party solution.
Off the top of my head, Aperture comes to mind. But I've had this thought before and came up with several other examples at the time.
Apple also works the opposite way. If third-party offerings for a particular function are OK, but not great, it will make its own version. Stocks, and several other apps for iPad come to mind.
When the first happens, people complain that Apple is fickle and abandons its customers. When the second happens, people complain that Apple is stifling competition.
Apple is either more schizophrenic, or less predictable, than other big tech companies.
I feel like Apple has done the same thing to Safari and web-based Apple stuff; I'm just always banging my head when developing for Safari, especially in a production environment where everything counts.
1) Way better battery life. Until the other browsers give a damn about battery life I won’t even touch them!
2) Cross-device syncing. My passwords and my tabs are always available on both my iPhone and my MacBook. Everything is synced through iCloud. I go back and forth so often every day that I could not see myself using a different browser on my Mac.
I use Safari too, but I know what it used to be. That's what saddens me. It's still convenient, but the bugs in Mobile Safari of late are just... unacceptable. Particularly there is a bug, acknowledged by many others now, over many iOS versions, where the zoom freaks out in Mobile Safari and the renderer just seems to give up or something. How is that regression even possible? Ah, man. Everything goes to crap after a while, huh.
I’ve taken to downvoting anyone who says this, not because in certain ways it’s not true, but because it’s just such a lazy comment that adds nothing to the conversation.
> Plus, since a NAS isn't going to be doing any desktop-OS things, the software for it can be slimmed down enough to run on lower-specced hardware, making the appliance itself much cheaper than the sort of machine required to run Server.app smoothly.
Really, Server.app would run fine on the same slimmed-down hardware, but Apple doesn't make that. The cheapest Mac is what, $800 for a Mini? $250 in PC hardware would more than suffice.
If you're on an even tighter budget you hardly need to buy new hardware at all, just set aside one of the old machines the next time you do a desktop hardware refresh and install any of a dozen free Linux or BSD NAS solutions on it.
> A NAS is a standalone box that receives automatic updates, is 100% remote-management enabled by default, can be easily reset to factory settings,
Most NAS fail at one or more of these.
In addition 90% of them are very bad about security, and putting these kinds of devices on your company or office network transforms it from an internet access network into a garden for insecure appliances to be coddled, a stepping stone that must be vigorously firewalled, scanned, and monitored.
> Synology's set of first-party OS packages has 1:1 feature parity (and then some) with every function Server.app either has, or used to have.
That’s a strong statement. Care to elaborate? Server.app used to have services for Calendar, Email, Contacts, Wikis, a profile manager (basically an MDM server), Open Directory, VPN, DNS, DHCP and, well, a web server. Would a Synology really cover all these features?
One-man-shop pro fotog here. My macOS Server 5.6.3 on High Sierra is awesome, the last complete version... File server with literally decades and multiple terabytes of raw photos… The ease of the email server is unmatched…. Websites and Nextcloud, Invoice Ninja, all just work with minimal PHP tweaking…. and all here IN MY OFFICE. I have Linux droplets, and am more or less ready to make the move, but I won’t until it actually dies… but only when I have to. I hope the hardware (from 2010) holds out forever, but I do have a spare ready, daily CCC backups, and disks rotated to a fire safe… at least for me it just works. Knock on wood.
Are you exposing any ports to the internet? The older library versions in High Sierra have numerous security issues (of various severity). You might be able to reduce your risk by disabling server features, but that's a dangerous game of whack-a-mole. (Also, as a fellow self-hosting fotog (that never went pro): you should try PhotoStructure! Details in my profile).
Yea, no kidding. I wish Apple would open-source Server.app, so some of the old components could be updated without hacking it. It was just a perfect all-in-one solution. I will check out PhotoStructure.
Unfortunately there's no checksumming of user data with APFS (it checksums metadata only). Last time I checked, ages ago, CCC wasn't using a version of rsync that supports checksums. Newish versions (for a few years now) of DNG compute separate checksums for DNG metadata and raw/image data, but to what degree applications use this to verify the integrity of the image file may vary.
Silent data corruption of images via bitrot can be frustrating. Without a regime to prevent it, it spreads into all backups: typical workflows depend on backups for eventual migration to new storage, which allows any corruption to be replicated via the backup strategy. You end up backing up the corruption, unwittingly.
The lack of ECC was one of the big problems with the original G4 Xserve. They added it later in the G5 model, but I suspect much of the damage had been done. Most technical people just ignored it after discovering that it didn't have ECC and the disks were low-end ATA rather than SCSI. With the cluster/HPC version, the stories changed to how unreliable the machines were.
At that point it seems only the diehards were willing to trust their data to Apple server hardware, and it died a couple years later.
My solution is to create par2 files. This is a manual solution though, and doesn't help you when you go and edit the original pictures. But what's great about par2 is that it allows you to correct errors.
I don’t use APFS on the big spinning drives where the pix are, but it’s definitely on my iMac and MacBook SSDs. Yea, Lightroom has a Validate DNG feature... of course it locks you into Adobe.
PhotoStructure maintains SHAs of all files, but I currently assume the user has updated the file if the SHA changes. PhotoStructure validates files before it imports them to keep corrupted images out of your library.
How do you think I could discriminate between file corruption and the user making an edit to a file?
(Perhaps if the mtime and filesize don't change, but the SHA does?)
In a non-destructive workflow, the image data is never modified. The image data checksum should never change. Rather, the edits are a kind of "edit list" stored in metadata, which can itself be optionally (separately) checksummed. If the metadata is corrupt, it can be discarded, effectively resetting the image back to its original pre-edit state. Yes, you'd lose the edits.
The location for metadata depends on the application. For DNG workflows, the metadata is a separate location in the DNG file, with separate metadata and data checksums. For other workflows, the metadata is in a sidecar (a per-image file), or stored in a database managed by the application.
To be fair: while they did leave all the classical 'server'-type use cases out in the cold and are making it worse every release, that kind of setup isn't actually the future, or the best practice to still roll out.
We are in between the legacy 'directory'-based networks with authentication etc., and the more robust and expansive BeyondCorp-type setups. Do you really still need classical 'user' and 'group' membership things? Network accounts? Local web servers? Pretty much anything of significance is in a private or public cloud, if only because of the fast pace and scaling. All that is left is basic file sharing that is sometimes done locally, and even that is becoming more and more stupid to implement in a workflow or business process. (Regardless of your OS or vendor.)
There are of course still some true file-based processes where Apple is used a lot, but even that is no longer local-file-only; a lot of the still and motion processing is done either at small scale locally or simply distributed to dedicated systems like render farms. And for the one-man photo business all of the local stuff still works, and you really wouldn't have gotten anything out of 'Server' anyway.
The most problematic holdouts are the likes of Adobe, who refuse to write applications that play nice with NAS-type file sharing, so that just leaves you with file syncing instead.
Uhh, what? I'm with you that the days of the "trusted internal network" are coming to an end, but you're insane if you think that directories are going away. LDAP has certainly fallen out of favor, but every SaaS user-management platform/service is just implementing what are essentially "noLDAP" directories.
What do you think is providing the authentication and user management in those private clouds? OpenLDAP, FreeIPA, ActiveDirectory, Keycloak?
And even when you move everything out of the office to a datacenter you still need all normal IT stuff that you needed in your little office rack just in the datacenter now.
The only part of the "old-guard" IT stack I see disappearing is the office file server.
> What do you think is providing the authentication and user management in those private clouds?
SAML. Yes, the provider's own network probably hosts a directory service as the default SAML backend, but for integrating with enterprise clients who already have their own directories? SAML. Now, rather than making ACL assertions about local users, you have to make ACL assertions about what ACL assertions each foreign directory is allowed to make on behalf of its own "guest" users in your environment.
SAML is a middle tier between the endpoint (user app) and the directory (ADFS, LDAP, etc.) and not a single solution on its own; it doesn't store data - it provides a means for leveraging the data defined by the schema (e.g. posixAccount and groupMember). It's generally equivalent to pam_sss, nss-pam-ldapd and friends at the systems layer of your average Linux. They are complementary, not replacements for each other.
The difference is that, in practice, if you're "just" a large non-IT enterprise or "just" a cloud provider, then you're not implementing the whole solution any more, only half of it.
If you're, say, Box, you're implementing only the endpoint, and mostly don't need your own directory, because all your customers bring their own. If you're, say, IBM, you're implementing only the raw directory, but rely on cloud providers to actually interpret it, rather than hosting any of your own Intranet. And if you're Joe SMB, then you're implicitly using GSuite as your SAML-exposed directory through OpenID Connect, and you don't need an IT department for this at all.
(And even if you're an enterprise who also writes your own cloud services, then this is still a change for you, because your endpoints are probably now written "generically" with a SAML interface—treating your own directory as just one among many, talking to it through a public exposure—rather than interfacing directly to it as a privately-configured PAM binding on your servers et al. In other words, now your services [and the departments providing them] are decoupled from your directory, because they both just see each-other through the lens of SAML, and so don't know that your company's employees are any different than a client company's employees.)
Office file server isn't going anywhere as long as there are places where internet isn't at least dozens of Mb/s in each direction, and I assure you there are still many, many such places.
True, but you can also spin up replicas in your cloud providers' nearest regions.
Everybody talks about capacity elasticity, but easy geographical distribution is the much cooler property of public clouds IMO. I could drive to a local colocation site and maintain my own boxes if it came to that, but having dozens of datacenters around the world would be physically impossible for me otherwise.
Of course your applications have to support that, but if they can be distributed across in-office servers then they already do.
"serverless" doesn't mean there isn't a server. Just because I don't need a user/group doesn't mean my cloud won't. Just that I don't need to run one myself. Which is true.
Ironically the only thing we run locally is a file server, and that's just an AWS Storage Gateway backed by S3. It's the only reason we still have an AD server.
Managing groups of Windows client PCs without Active Directory still hasn't reached feature parity w/ Active Directory. I do not welcome going back to the "herding cats" days of "workgroups", personally.
Azure AD and Intune are good enough right now if you don’t need local servers for apps, and very cheap (or maybe free) depending on what you buy from Microsoft. They can secure a device, apply ADMX files, install applications, and if you need more complexity then you can run PowerShell scripts. It’s not really user-friendly, but AD and GPO never really were either.
Small on-site server deployments still make a lot of sense in many cases. Not every business needs scaling so badly it has to make itself dependent on online services, and in many parts of the world "reliable, fast internet at every business location" is still not something you want to need.
1) Yes, software companies, especially ones that host any kind of public-facing app, need to be able to manage authentication.
2) Every business is going to have internal applications, whether bespoke or something on top of ERP/CRM software, so the same applies here.
3) When you use an app (either professionally or personally) you need to have an exit strategy; you’d think Oracle’s latest games would teach everyone that. All these SaaS apps have nothing, so self-hosting with directory/file servers is the way to go.
4) MS and Apple OSes really need some of these things to function in a corporate environment. Unless you’ve got everyone on Linux or are doing BYOD, you’re going to need directory servers to coordinate IT.
Unfortunately, most compliance policies (SOC2, PCI, HIPAA, SOX, etc) remain very outdated, and make this a challenge for anyone that needs to meet compliance needs.
There are large, serious, public companies on Azure AD, OneLogin, Okta, etc. They often like those services because they eat some of the compliance burden. This doesn't ring true.
Former big time user here. The product slowly declined from its own operating system to a buggy app.
We really had no choice but to migrate onto other products and these days even hard core Mac folks use AD for directory, Exchange for email and 3rd party products for device management.
Mac OS X was first released as Mac OS X Server 1.0, the commercial release of Rhapsody, and then released as Mac OS X Server for Cheetah, Puma, Panther, Tiger, Leopard and Snow Leopard, before being released as a standalone update to Lion, and then hitting the App Store for $19.99 for all future releases.
Performance was mediocre compared to Linux. Everything that it shipped with was outdated. There was a weird mix of GUI tools and commandline to get anything done. And to top it off, things were (at the very least) just-different-enough to confuse someone who thought, "I know Linux this will be easy."
True, but in Germany they disallowed rating macOS Catalina. That's a very shady policy - having a rating system for all developers, but excluding your own most prominent and important software from store reviews. Just one of many hilarious Apple moves.
Also, if you cancel any subscription (maybe just to make sure it doesn't accidentally get renewed), you get to keep however many days you have left.
Not so with their own free one year Apple TV subscription that came with the latest phones. If you cancel that, there's an alert telling you you'll lose it immediately.
Clearly tilting the playing field again and again.
While it's a shame that the OS X, er, Mac OS Server "app" has basically been killed over the last few years (maybe more than a few at this point), I suspect it just wasn't selling very well. I know it had some great tools for configuring and setting up some services, but if you're an engineering-focused company you're probably going to use Linux for office servers; on the business/office side, you're very likely going to just go with Microsoft; if you need web or server hosting, you're almost certainly doing it off-premises, not with a Mac mini stuffed in the corner.
That mostly leaves enthusiast and small office scenarios, most of which can get by just fine with "non-server" macOS. After all, they've always literally been the same operating system. My Mac mini is a headless media and file server and has, in the past, been the printer server as well -- and all of that's enabled just by clicking boxes in the "Sharing" control panel. (Heck, I could even turn on "content caching" to have it keep a copy of all macOS/iOS software updates on it, which would be great for a small-to-medium office.)
Yup. It stinks.
They made a decision to leave the backbone/infrastructure market.
I don't blame them.
The main thing about macOS Server is that I don't trust it to be around too much longer. A Linux box will be around, in one form or another, forever.
However, it's easier for me to buy some canned solution; either pure software, a SaaS, or a hardware solution.
For context, the majority of server features were removed in macOS Server 5.7.1, with a few features migrated to macOS High Sierra and the remainder left to be handled by open source alternatives. The initial announcement from Apple was discussed on HN in 2018. [1] [2]
Ohh, I remember those days. I was using Xsan with 2 Xserves attached to Infortrend EonStors. I spent so long trying to get it stabilized, and reformatted the entire FS so many times. Eventually I got rid of OS X Server and learned how to install Gentoo Linux on the Xserves. I tracked all the issues to a faulty riser card in one of the boxes (but had to use Linux to get the low-level details)! Eventually went GPFS and trashed the Xserve hardware as well.
It is Apple's rebranding of Quantum's StorNext SAN product. I used to run a setup, and other than a ton of knobs to turn for tuning with little explanation, it was a pretty good product. I wish Apple had really run with the ball there (e.g.: ad-hoc clients), but for a server setup my memories of it are nice.
IMO Apple deserves all the hate on this one. It was good once. Thank god I moved on to proper servers with linux. They should just kill it and let the community take over.
Real admins run Linux anyway and should have no problem implementing common services.
Supporting every little mom-and-pop shop that doesn't understand DNS, ports, file permissions, or network users is a losing proposition: they're generally looking for the cheapest solution, will be trying to integrate the lowest of the low-end equipment, and may cause even more problems beyond Apple's control to reliably support. What most small businesses need is just a NAS with network printer support.
Companies maybe. I bought the last of the previous generation to have a small desktop with redundant internal drives. A Mac Pro would have been budget overkill.
At the time it was useful, as the Mac desktop was my daily driver. Unfortunately current macOS versions are really sluggish, so it's currently a home "server" running Fedora 31.
The 16GB RAM limit is becoming a bit of a problem, so I will likely replace it with an Intel NUC.
Think so? I didn’t know Apple even made a server package since 2010 or so. I bought a Mini so I could have a low cost, decently specced desktop Mac hooked up to my two large monitors at home. An iMac was too expensive and I’d rather use standalone displays anyway. We also have lots of them in my wife’s office for the same reasons.
I mean, companies literally made rack mounts for Mac Minis, to install 6-8x of them at once in proper server racks. They wouldn't have built these if no one was using them.
The post I was replying to said that was a large part of the reason to buy Mac Minis.
First, even though there were quite a few decent-sized Mac Mini deployments like you're describing, I'd be genuinely surprised if that accounted for a decent chunk of Mini sales overall. Some, sure! But most? I wouldn't expect it.
Second, does any of that require running Server? We have a database and application server in the office here, but it's not running Server: it's just plain macOS running daemon processes.
At least a decade ago, it wasn't that uncommon for me to see Mac minis kicking around as "office worker" desktops in places that gave developers Mac laptops. Get one of those, bring your own cheaper display, plug in a wired keyboard, and go. My suspicion is that "cheap desktop" has always accounted for more Mac mini sales than it's usually given credit for.
Note that "server" is a role you might use a Mini for, but "Server" in the article's title is a specific software package. You can have a server not running Server.
Also, I bought a Mini for my desktop because it's tiny, cheap, and powerful, and I can use my own screens.
Mobile developers build huge Mac mini rack farms because they need to virtualize macOS on top of Apple hardware for CI/CD/testing/store releases (or they used to, back when I last worked in mobile dev). That's the bulk of where I've seen macOS used to control and provision minis (usually with plenty of Ansible/Chef/etc. as well).
10+ devs running integration tests etc. that require different Xcode targets, branches, etc. I haven't done this in years, but it was a pretty basic Mac mini farm 8+ years ago when I did do it, with macOS + Ansible. I really doubt it's changed much (albeit 3rd parties like TestFlight etc. might be better integrated nowadays). It's a huge reason MacStadium etc. exist, where you can just rent mini farms.
I have a Mini running stuff like that but it's not running the Server package. I don't think Server was ever very popular, even on Macs that were being used as servers.
"Technology right wing" (my phrasing) numero uno apple fanboy John Gruber used to post so much about Mac mini colocs. Haven't seen much from him lately about that stuff.
I don't think this would actually compete. It would more likely be a complement for certain kinds of use cases.
Apple just doesn't seem to care to put in the effort. I don't think their neglect of MacOS Server has anything to do with trying to protect their cloud offerings. Even a wildly successful MacOS Server strategy wouldn't be that much revenue versus some of their other lines, but it could provide real value and help to certain kinds of businesses and users.
Cook takes Jobs’ zealotry for pruning (which he probably developed in reaction to the struggles of “first” Apple) as religion. Anything that doesn’t make a boatload of money gets cut or left to rot.
The Mac mini and OS X Server were products for niche markets that “second Apple” is completely uninterested in.
I'd venture it had more to do with not being able to actually compete at an enterprise level. Far more profitable to sell to individual users than take on the much greater cost of sales for server products. If it's just a server they couldn't compete anywhere near as profitably.
I'm beginning to think Apple isn't very good at making software that its users want or need because their focus is flashy fluff that demos well on stage.
I assume to show how neglected this has gotten. MacOS Server used to be a lot more popular and a lot more loved.
I understand this isn't a remotely core part of their business, but I don't understand why you would want such a quarter-assed product out there with your name on it.
A small team, which Apple can afford, could keep this a good offering that helps out certain kinds of Mac users.
This seems like a great opportunity for some small team that wants to get acqui-hired by Apple in a few years for a cool billion. I doubt the domain will get much love because it is a niche product that lives and dies on the whims of a mega-corp. But a small startup could probably bootstrap itself to profitability. Worst case, they get a good five to ten years of being their own bosses until the market dries up completely or Apple finally decides to compete (but not buy the company).