I'm very saddened by what is happening to what is left of Sun.
I used to work at Sun, and the Solaris codebase is the most amazing C code I've ever worked with. I'm probably going to be accused of bias, but the Linux code is really messy compared to Solaris.
Sun was already on the way down by the time I left many years ago, but what has happened since Oracle bought them has been nothing but depressing.
> but what has happened since Oracle bought them has been nothing but depressing
Oracle is nothing but a cancer. Everything they touch turns into goo. This is not the first company to be killed by an Oracle acquisition, and it won't be the last.
And don't get me started on the ridiculous rangeCheck trial: it sums up the disgusting state this company has fallen into.
Once something gets bought by them, you know it is done. Slowly, but surely.
They perform a function akin to the maggots that destroy cadavers in nature. Part of the overall ecosystem.
ORA stopped being a tech co a while ago; now it is a finance play. Use cash to buy a business for its locked-in customers, gut it to squeeze the maximum money out of it until the last customer is gone. Rinse, repeat.
I think that's a more apt description of CA, BMC, or Symantec. Places where tired old software goes to die a quiet death. What Oracle does is worse: kill software that still has plenty of life in it. I've seen them do it by acquisition, and I've seen them do it by stealing code or ideas from partners (personally, twice). So they're not so much a graveyard as a slaughterhouse for software.
I didn't know this when I joined CA. It took me 6 months to realize what I had fallen into - and I quit the next week. It felt like working in a hospice - the most depressing few months of my professional life.
Oh and don't forget to tell the truth in your next job interview: "CA is a tremendously successful company and excellent employer, unfortunately, the role was not the right fit for me".
> ORA stopped being a tech co a while ago; now it is a finance play. Use cash to buy a business for its locked-in customers, gut it to squeeze the maximum money out of it until the last customer is gone. Rinse, repeat.
Yup. Precisely what happened to Stellent, a company that used to produce great document filters. After the ORA buyout, employees fled as if from a plague, and prices spiked to such a level that you'd either slit your wrists or consider migrating to another technology.
> ORA is the elephant's graveyard of software.
Well, except that some of the elephants were still alive and perfectly fine, and the ORA worms started eating them before they were dead.
Working on Oracle UCM for a consulting gig once was what I consider the low point of my career. I do believe that's the Stellent product post-acquisition, right?
CenturyLink does this too - they're a tech holding company that buys and sells companies (Level3, Tier3, Savvis, Cyxtera, ElasticBox, etc.) in order to reap M&A tax benefits while squeezing the engineers.
Of course, these businesses don't make enough money to cover the massive shareholder draw, so it's all a stage play to convince Wells Fargo to loan them enough to pay those dividends.
It's an untenable and irrational position in the long run, but markets can remain irrational longer than individuals can remain solvent. And many businesses are in this same situation, needing bank loans to pay never-reducing dividends.
> order to reap M&A tax benefits while squeezing the engineers.
This is also true of companies that are taken over by private equity firms, although that is probably common knowledge by now. E.g. every single person on my team at Rackspace left for a different role (all at different companies, all at different times) after the acquisition by a PE firm.
Sometimes I wonder how the engineers don't see this coming. Sure, there are cost savings to be made by streamlining product offerings, cutting the "recreational budget" (i.e. money for office parties), etc. But the biggest cost center for a tech company is its employees (probably that and real estate).
ORA just kills software with no market success. All companies do that. If you liked Solaris, you had the opportunity to vote with your wallet.
I am actually surprised they invested in Solaris for so long, considering its long-time half-dead status in the market.
Sure, if you ignore the massive effort that was being invested in revamping Solaris to be more FOSS-friendly and in engaging with the community prior to Oracle's takeover of Sun.
I suspect if the takeover hadn't happened, the FOSS systems landscape would look a whole lot different. Like what, I don't know, but definitely different.
Sun really needed to FOSS license Solaris before Linux hit critical mass. That would have been amazing. Linux is pretty good for what it is, but imagine if it had been Solaris that won instead of Linux, and thus running on most of the servers of the world and potentially iOS and Android as well. It might have even made some desktop inroads, given it would have launched out of the FOSS gate with a more developed desktop than Linux at the time. Such a huge missed opportunity.
I doubt Apple or NeXT would have ported their platform to Solaris (from BSD/Mach). It seems to me that BSD would have been Linux if Linux hadn't happened.
> I suspect if the takeover hadn't happened, the FOSS systems landscape would look a whole lot different.
Sun was in a bad financial situation at that point; there is a high chance all these engineers would have been laid off 8 years earlier if the takeover hadn't happened.
> I suspect if the takeover hadn't happened, the FOSS systems landscape would look a whole lot different. Like what, I don't know, but definitely different.
It's possible IBM would have been a significantly better steward. Of course, it would have been hard to be a worse one.
That's a bit unfair. IBM would probably have canned the hardware (and hence Solaris) much earlier than Oracle did, because they already had their own offering in that space and they were as invested as Oracle in Linux (OK, they don't have their own Red Hat clone, but they are definitely big on supporting Linux).
IBM was buying Sun because of Java, and planning to chuck the hardware out one way or another. Oracle bought Sun because of the hardware and basically kept everything else going, albeit in a reduced fashion where there was overlap (MySQL) or they were not interested in the niche (OpenOffice).
I think Oracle had some decent ideas and didn't execute well on them, partly because of cultural issues. I don't think IBM would have done better, certainly not on the hardware/Solaris side. They would have probably done a bit better with Java/OpenOffice but that's about it. Solaris was doomed by the Linux boom, there is very little anyone could have done about that by the time the acquisition became inevitable.
With the _big_ difference that Oracle are _really_ good at writing invoices and cashing big cheques...
All of the really talented Oracle people I know are enterprise sales people, and they're _really_ good. All the good tech people I know "joined" via acquisition and jumped ship as soon as financially sensible.
I vehemently agree with you. Their sales staff is vicious and effective... however comma ...
My stodgiest, most risk-averse (Oracle-using) clients are beginning to migrate to Postgres. I'm so shocked by this that it still sounds like a lie as I type it. For most of my career it's been "nobody got fired for picking Oracle". Now that nobody gets fired for picking AWS, the penny pinchers are looking at the RDS pricing delta between Postgres and Oracle, and all of a sudden it's a no-brainer. Maybe because the only sales person involved is from Amazon? IDK, but Oracle will slowly die by its own worms.
I'm surprised companies still buy Oracle software; I figured they were only in business because of vendor lock-in from governments and large corporations.
Because Oracle don't sell to the sort of people who read Hacker News; they sell to people who read Financial Review and CIO Monthly or whatever. Same as Salesforce, Adobe, and a whole bunch of other "enterprise" software businesses that most devs ignore or write off as pointless.
I worked at a place where I saw two renewals of Adobe CQ licenses paid at ~$600k/year while I was there, all the while we were using Alfresco underneath (and I see now that the site is running on Sitecore...).
Probably. I work for a government department that uses Oracle extensively, but no new projects are on the platform; eventually everything will be migrated off it (mostly to SQL Server hosted on Azure).
And let me guess - the project timelines for "eventually everything will be migrated off it" run easily into seven figures of Oracle invoices? (And realistically the delays in those migration projects will mean double that gets paid to Oracle before they're completely out of your billing system?)
I don't think there's any real timeline to get rid of Oracle, just that new projects won't be using it as a DB and eventually it will no longer be in use.
Like eventually nobody will be using mainframes... except they are still there, and somebody is still cashing cheques for their support.
Ironically, it's Oracle itself that is busy self-destructing. The move to cloud-based subscription services, where switching to competitors can be so much easier, looks good in the immediate term, but it's pulverizing their stranglehold on partner ecosystems and making their long-term outlook more fragile.
Yes, bean counters. When the only sales person involved is the AWS architect, the bean counters put Postgres and Oracle side by side and it becomes a no-brainer. You're going to see lots of enterprise development moving away from Oracle as companies move to AWS.
I'm not so sure it's the stereotypical "beancounter" here. There seems to be absolutely no penny pinching going on (which is the impression I get from "beancounter").
I'd say it's more the "Contract Signers" who're at fault. The devs and AWS Architect are perfectly happy to use inexpensive AWS options, but somebody _else_ goes golfing and drinking with the Oracle sales team - and arrives at work hung over the next day with a shiny new half million dollar a year Oracle licence which everyone else is now required to use.
The Oracle database is very good at its job: take data in, give it back fast and reliably, optimise bad queries, etc. The interface can be hair-tearingly awful, but the software fundamentally works.
The bad part of running Oracle is absolutely everything else, especially the bit where you ever have to talk to Oracle. When we moved our stuff from Oracle to Postgres, the best bit was never ever having to think about licensing.
The DB course at my university uses Oracle software, and references to DB software in my professor's lectures are almost exclusively about Oracle (SQL Developer or something).
Sales people entertain clients, this doesn't imply they're alcoholics, only that eating/drinking is part of the job. That said there also are many customers that are led by functioning alcoholics and tend to select vendors they can drink with. It varies.
Enterprise software sales can be a lot of fun (I am in sales engineering, not at Oracle), as it is well paid and ultimately about enabling customer successes. But I have to look at myself in the mirror at night, and can only sell something I believe in - things like open source, or cloud computing benefits, etc. The Oracle database used to be worth believing in maybe 10 years ago: boring but valuable software. But these days the company is such a blight on the industry that it would be hard to work there.
Is it just so they can say "not my fault, our project is only using big name frameworks, DBs, etc so I did all I could?"
Or is it because they don't even begin to understand what's going on in the tech scene and just buy what appears to be the most shiny, expensive solutions?
Support is one reason. If your database goes down, they'll send a team of engineers to figure out what's wrong. If they have to, they'll bring in hardware to duplicate your systems, and have engineers working on it 7x24 until they find out what's wrong. Not many vendors can provide that level of service, and if your business requires it, you're willing to pay for it.
I was once in a meeting on site. The task was to get some data from an AS/400 to a Linux server and display it on a website.
Text files would have been enough to cache the data. But you never know how a project can change over time, so I proposed PostgreSQL or IBM DB2 - the latter in case they absolutely wanted to pay for a database, since they were already using an IBM AS/400. (We were using PostgreSQL and Informix.)
At the meeting they said they had already bought a license for an Oracle DB. Without consulting us. So we were forced to use it without prior experience.
I've had an enterprise vendor offer to drop money into my personal account to make sales. Not naming them, but the sad thing is they actually did have the best product in their space by a long shot. The ones who sucked were probably even better at bribes.
I have seen a high-profile CTO dismissed for just cause because he accepted an invitation to speak at a vendor conference in Paris. When hired, we signed a very strict code of conduct document that prevented us from accepting any valuable gifts from vendors.
A vendor once sent me a mug. I had to open the box in front of a witness.
That sounds like the definition of a bribe, which is a crime. I would advise you to not make this information public or at least public under an account that can be traced back to you.
I discovered the Illumos project about a year ago and was ecstatic to see that OpenSolaris had come back to life. I've been using OpenIndiana as my daily driver at work for a while now.
> I used to work at Sun, and the Solaris codebase is the most amazing C code I've ever worked with. I'm probably going to be accused of bias, but the Linux code is really messy compared to Solaris.
It's not bias. I'm a (mostly) C/C++ dev, very curious about open source implementations, and the Solaris C source code is the most beautiful complex C codebase I've known.
Linux is what it is now because of the amazing number of man-hours poured into the codebase, but given the quality of the Solaris code compared to Linux, I bet that if we could measure how many hours/engineers/dollars it would take to bring features to parity on both OSes, not only would Solaris require fewer people and fewer hours, the result would probably be less buggy.
If Solaris had been open sourced at the right time, and not too late as it was, I'm sure it would probably be the top Unix flavor by now.
better BSDs > not-as-clean BSDs > Solaris > Linux
with the not-as-clean BSDs and Solaris being on roughly the same footing - much like SMF requiring XML, Solaris code, while very clean, seems very overengineered and not as elegant to me.
That said, I really haven't hacked enough kernel code to deserve to be commenting, so yeah...
It seems as though SMF was a bit controversial in the community. I do agree that it's a bit overengineered, but on the other hand I would take SMF over systemd any day.
Would it have been different if any other company had bought Sun?
Solaris was competing against free, without much to justify the large added cost. It's been a very long time since I heard of anyone buying new Solaris installations.
I think that would very much depend on when: Sun's primary problem was bad management so an early 2000s change might have enabled them to compete against Red Hat — treat usability as a concern, wrap all of those cool kernel features in a non-joke userland, sell support, etc. Sitting out package management for a couple decades really hurt them and that's a relatively cheap engineering commitment. It's interesting to imagine ZFS and zones bringing containerization a decade earlier, but every time they came up the reaction from most of the sysadmins I knew was roughly “call me back when they have apt/yum”.
A coworker who used to work at Sun maintains that they really needed to go private to avoid years of chaos from waves of layoffs when they were profitable but not enough to satisfy Wall Street.
> It's interesting to imagine ZFS and zones bringing containerization a decade earlier
I have a legacy production environment that is Debian based OpenVZ and Nexenta. Containers and ZFS were in heavy use a decade ago, the marketing just wasn't there to make it "cool" like Docker.
Sun was great at many things, but they were never great at marketing.
> A coworker who used to work at Sun maintains that they really needed to go private to avoid years of chaos from waves of layoffs when they were profitable but not enough to satisfy Wall Street.
Short-sighted capitalism is IMHO a threat to society. But finding an acceptable solution is not going to be easy.
> A coworker who used to work at Sun maintains that they really needed to go private to avoid years of chaos from waves of layoffs when they were profitable but not enough to satisfy Wall Street.
It still surprises me that Dell was able to somehow go private before Wall Street killed them too.
ISTR rumors that Google was interested in Sun for their IP. I think that would have worked out well. No stupid Java API trial, Solaris truly open source, ZFS in more places. Fujitsu probably would have wound up with the hardware side of SPARC, and I could imagine that Dell or IBM would have taken the server/storage stuff for close to what Sun sold for.
Google is a Linux shop that doesn't care at all about product continuity. Why would they have cared about Solaris? I'm sure it'd have got the chop instantly if Google had bought Sun. Sure, the engineers might have been able to find other opportunities within the firm, but Solaris would have been scrapped instantly (maybe open sourced, maybe not).
The original ZFS engineers now work on OpenZFS, and have for almost a decade. While the base is "owned" by Oracle, most of the innovation in ZFS (and other Solaris technologies) has been done in illumos and is owned by the respective authors. While I feel empathy for the engineers who lost their jobs, modern Solaris innovation hasn't been funded by Oracle since 2010 and the OpenSolaris debacle.
The IP has enough of a shadow over it that it precludes Apple from using ZFS, even though all the tech was developed outside. That's the main issue: things are murky, and it scares people.
From the article: "Finally, and perhaps most significantly, personal egos and NIH (not invented here) syndrome certainly played a part. I'm told by folks who worked at Apple at the time that certain leads and managers preferred to build their own stuff rather than adopting external technology, even technology that was best of breed. They pitched their own project, an Apple project that would bring modern filesystem technologies to Mac OS X. The design center for ZFS was servers, not laptops—and certainly not phones, tablets, and watches—and the argument was likely that it would be better to start from scratch than adapt ZFS. Combined with the uncertainty above and, I'm told, no shortage of political savvy, the anti-ZFS arguments carried the day. Licensing FUD was thrown into the mix; even today folks at Apple see the ZFS license as nefarious and toxic in some way, whereas the DTrace license works just fine for them. Note that both use the same license with the same grants and same restrictions."
This seems like FUD: people have analyzed the licensing, and the OpenZFS project is independent of any company. Would you say OpenJDK was murky? I don't think so.
Whatever reasons Apple had for not using ZFS at that time weren't some "IP shadow", as the CDDL includes a patent grant.
OpenZFS is open source and there is nothing murky or scary about it.
There is nothing stopping Linux from mainlining ZFS at the source level apart from kernel developers' reluctance to give in to "layering violations." Somebody can correct me if I'm wrong.
To wit: I am not sure that linking and terms of binary distribution matter after sources have merged for open source projects. If you have the source code and the right to modify, merge, and distribute it, arguing about static or dynamic linking and binary distribution is like arguing about the color of your car door after the car has been made. It's inconsequential compared to the amount of IP and resources put into the source code, and easily changed by any user (to a different architecture, let's say).
Modifying has no meaning when it comes to binaries, but it's core to copyleft and open source. You would only care about binary licensing if you were a closed source product and had to have ultimate control. If somebody had complete copyright over a ZFS binary, they could say how it can or cannot be used, the way an EULA would restrict you. Since no such copyright holder or binary exists for ZFS, and only source does, I don't think most people would stop collaboration in source code once the licenses are compatible.
Linking exceptions are for those who do own all of the copyrights and want to distribute alongside open source software; having that distinction otherwise in the open source world adds to license proliferation and makes no sense. ZFS doesn't have a single copyright holder acting on its behalf, and it has lost certain privileges because of this. I'm sure Oracle would troll about this too, but they would probably be wrong given FreeBSD and OpenZFS.
I do think the GPLv3 fails a little bit because of the same argument. But I'm no lawyer.
I doubt (given what I heard from lawyers on the topic) that the legal system cares if a work has static or dynamic linking technology. What matters are the legal concepts which previous legal cases have been settled on.
Intent by everyone involved, such as the author, the accused, and the law writer. If the author intended the work to be used in one way, and the accused knew this but decided to go against it, then that carries a lot of weight. Similarly, if the law writer intended the law to address a specific situation, that also carries weight.
Precedent from cases that involve derivative works. There is a fuzzy line when two works merge to create a third. Music has a large legal history, parts of which contradict each other.
And last there is the law itself. Modifying, for example, is an explicit exclusive right in some places (such as the US). One case involved a person who bought a painting, cut it down into squares, and rearranged them into a mosaic version. The painter sued and won the case, arguing the exclusive right to create modifications. Whether something is binary or source code should be irrelevant to the question of whether the "work" has been modified from what the author originally created.
I think we know what Sun wanted: they wanted to make money from OpenSolaris and ZFS by building a community around them and selling services and products. After a decade, all that's left now is the source code, and if Oracle wants to wait until ZFS becomes bigger to sue Canonical or Netgear, well, that shows their intentions as well.
As this timeline and some Googling show: https://en.wikipedia.org/wiki/OpenZFS#History
Sun did work in good faith with Apple and the Linux community to get them to adopt ZFS, unsuccessfully (successfully with FreeBSD). Additionally, the fact that Sun did successfully open source quite a few things (VirtualBox, Jenkins, OpenAM, Solaris, StarOffice, NetBeans, etc.) and relicensed Java from SCSL to GPL makes their intentions towards the open source community pretty clear. Yes, they wanted to make money, but they probably open sourced and created more open source communities than any other company in SV history.
Now, about modification: any open source license listed by the FSF will grant modification rights to users. I don't think compiling is making a derivative work; it's like unfolding a chair to sit on it. It's just part of the normal usage of software - you can decompile a binary and learn from it, also normal usage. The compiler is a tool, like a screwdriver or paint gun that lets you assemble a chair or paint your car. Reading and learning from source code is usage too. Modifying the actual source code would be a real modification, and could be making ZFS work on a Raspberry Pi, which is allowed by open source. Given that Sun wanted ZFS to be widely adopted in open source, they adopted the CDDL to let people modify ZFS so it could be used by OSes other than Solaris. This is what the OpenZFS community enables, and it is completely compatible with GNU/Linux or Apache open source norms. Oracle might come knocking for money, but that's not the history of Sun or of the current ZFS contributors, who are just out to make better software using the open source process. They would probably not disagree with what Netgear or Canonical did, and if they did, it would be on the OpenZFS mailing list and in a news story or two. It's not.
You can't copy books and sell them, and I can understand that you can't modify an original artwork without affecting the copyright owner's rights. You can correct an error in a book, or claim inspiration from a painting to make another. You can't claim copyright infringement if someone uses a binary in a VM when you didn't intend it. You can give others the right to modify source code, and ask that others do the same. That is open source and the GPL. OpenZFS and FreeBSD have as much standing as Oracle - which is really none - to actually stop someone from porting ZFS to anything they like and distributing it alongside proprietary or open source software.
I have heard a few different versions of the intention behind the ZFS license, though I can't say I have enough information to hold a definitive opinion for a legal case. Some people say the ZFS license was created with the explicit purpose of being incompatible with Linux, in order not to compete with Solaris. But I would agree that Oracle has a tough case in court if they wait only to sue later; that is a practice legal systems tend to strongly dislike.
The other side is of course each one of the Linux developers, each holding the full power of copyright. To cite the SFLC, no free software developer has ever sued another free software developer over license incompatibility, so it's very unlikely to happen with ZFS. Such court cases really only happen between companies.
So to sum up, a case over ZFS is very unlikely, but I would not bet on what would happen if Android suddenly started to use ZFS.
I think the Sun leadership at the time wanted ZFS to be on Linux, up to the point of licensing it under the GPL. The employees wanted a more BSD-like license[1], so that's the correct context in which to look at how Solaris was licensed. It's not about being in Linux or not; it's "do we want Solaris under a more BSD-like license or a GPL one?" I think this conversation was bigger a decade or more ago, and frankly the GPL has had more commercial success since then. What would have happened if Solaris had been GPL'd is an interesting thought experiment, and too bad it's just that. NetBeans was GPL and CDDL dual-licensed.
I wonder what Linus Torvalds thinks about merging ZFS into Linux now; he wasn't too keen a decade ago. Sun is no longer around, and someone worse, like Oracle, has taken their place. A couple of lessons for the open source community here, I think. And Bryan Cantrill nails it on the head in the YouTube video link.
ZFS will need to be on Linux first before it can show up on Android or media centers or gaming consoles, and I don't doubt Oracle's ability to find a way to patent troll anything. But it will be just that: patent or copyright trolling.
To add a couple more things: Oracle has been developing ZFS sans open source for the past few years as well, which means they've stopped caring about OpenZFS and/or have lost the right to.
Canonical, Debian, and the SFLC have really done the right thing by distributing ZFS on Linux, using AFS as precedent. I hope more merging like this can happen in open source in the future.
That's FUD. Apple actually did have a ZFS port for the Mac they were working on. And they successfully ported DTrace way back when as well (you can still use it today). I expect that the lawyers wouldn't have allowed anyone to work on a port of a project with an "IP shadow". The most likely reasons were probably either NIH, or worries about the greedy resource requirements of ZFS.
ZFS would have been mainlined into Linux much earlier if it had been someone like Google who had bought Sun. Now Google is pushing into hardware and deploying POWER9 servers. Not to mention the lawsuits with Oracle could have been avoided as well. But hey, at least they got Motorola's IP portfolio.
I don't know if it would've been any different for Solaris, but I can imagine Google would've been a whole lot better off owning Java. They probably would've been far better stewards too.
Even if they win Oracle v Google, it's been a huge distraction, a huge cost in lawyers (who knows how much they'll recoup), a big unknown and the search for an alternative has to be costly and time-consuming.
$8.3 billion almost seems like a bargain in hindsight.
To be fair, I don't think Oracle has been particularly bad to Java, apart from being Oracle.
I doubt Google was ever an alternative. Realistically it was a fight for power over the enterprise customers between IBM and Oracle, with most (non OS) products built or integrated with Java.
Maybe it was even a hot potato - someone needed to keep their strategic bet alive, since Microsoft shares the same strategy in the enterprise app segment. (Sell complex but underperforming applications to ill-advised customers using long feature lists.)
Someone who (at the time) was senior enough in an IBM/Red Hat joint role told me the plan was that IBM would buy Sun, essentially for their Telco and a few other key market segments. To avoid any anti-trust risks (which IBM was still institutionally paranoid about) that might come from owning AIX, Solaris, WebSphere, Java, and various other overlapping properties, they would spin much of the software division (including Java and Solaris) off to Red Hat.
Unfortunately for the free software world, IBM's lawyers got cold feet when it turned out that Sun were mired in bribery claims. Oracle didn't care. Pity, because actually having Java properly open sourced, and likewise with various other Sun technologies (e.g. an OpenSolaris that didn't rely on a couple of binary-only libraries, for example) would have been quite a net win.
By "properly" open sourced you mean no proprietary builds at all? Because virtually all of Java is available in OpenJDK and Oracle has continued to fund and open source major new Java upgrades.
By properly open sourced I mean that the various testing and validation suites for Java and key Java tech (like J2EE) would be available, that proprietary bits (like the Windows installer) wouldn't be silently parachuted into the JDK install, as two examples.
IBM is facing the exact same problem selling AIX boxes as Sun/Oracle and HP do selling their Unix hardware and OS, so why would IBM have been a better fit to buy Sun?
I think Sun was (mostly) acquired to prevent the Java IP being sold to other, more nefarious parties (such as MS or patent trolls). Both Oracle and IBM were (and still are, though to a lesser extent) heavily invested in the Java ecosystem. SunOS/Solaris has historically also been the reference OS and platform for big-time installations of the Oracle RDBMS.
No, Oracle at the time was big on the "fullstack" play, and thought Sun hardware could provide the last piece of the puzzle: selling absolutely everything to a business, from metal to apps and services, in one neat box. Java would have been safe with IBM too, that wasn't much of a worry - although I'm sure Ellison hoped they could leverage it against competitors invested in it, like they tried to do with Google.
Enemy #1 for Oracle at the time was SAP, which they couldn't force out because it's another cancer (the lock-in is huge); so they developed a strategy of buying loads of ecosystem apps to "surround" it. At that point, they could sell the database and apps in one package, and then slowly erode SAP away. Hardware was a natural addition to that strategy.
Unfortunately they borked the execution. They didn't invest properly in making solutions ready-made, so after you bought a (very expensive) box, you still had to pay tons to consultants to set it up, making it uncompetitive on the whole. The industry shift to cloud did the rest. Now they're way too busy turning into a "bigger Salesforce" to care about metal.
Exactly. Oracle+Sun allowed 'vertically integrated' solutions with their Engineered Systems. You could literally buy your entire rack, software, storage, InfiniBand, even the network portion (Oracle has copper Ethernet switches, believe it or not) straight from them. It would all be under one contract for support etc.
I meant it more in the sense of how an old friend/rival could respectfully carry their casket to the grave, as opposed to the way Oracle desecrated and devoured the flesh of the corpse.
That Oracle is moving to Linux on SPARC tells you they still think there is an enterprise business that Exadata/x86 couldn't win (after years of destroying Solaris). IBM+Sun would have had most of the E-biz, and those customers would have been more comfortable with Oracle as just the DB provider.
Solaris on PPC existed at various times; IBM could have provided a convergence path for the Fortune 500: AIX/Solaris/Linux/Java, all on PPC.
It was actually a close thing, but someone else will have to write that story.
SPARC died the moment every developer had a good-enough x86 desktop running a Unix flavor (and Sun decided it'd be happy not to build desktops anymore). At that point, we ceased to imagine deploying our stuff on SPARC: we test on x86 and make it run well on x86 before it gets compiled for SPARC.
At that point, unless the SPARC hardware has some definitive cost/performance advantage, we'll buy x86. SPARC is for legacy.
Same applies to POWER, BTW. How many new apps have you seen in the past 10 years that were designed for POWER?
What accelerated it even further was when Linux went 64 bit and hardware improved to the point where it was a suitable replacement for big iron Solaris (the SunFire 6900 for example). That was around 2005 from memory.
And the 6900's features with regard to HA in the field fell well short of advertised.
Now you can get a Linux box with 4TB of RAM, so no-one should buy Solaris over Linux.
SPARC is also effectively dead. Solaris and SPARC exist as ongoing projects only to the extent Oracle is contractually required to maintain them - e.g. to Fujitsu. If you accept that Solaris is dead, then by the same evidence SPARC is dead. If you believe SPARC is still viable, then by the same evidence Solaris is still viable.
Agreed. I think IBM would have fixed the licensing issues that kept ZFS off Linux and would have kept Solaris Open Source, perhaps with a better license. It's tragic that Sun went to Oracle and not IBM.
I don't see why not; they'd have been able to integrate the various bits of Solaris that were ahead of the Linux curve, and since RH has been good about open sourcing stuff, the rest of the ecosystem would have benefited.
In our market, it's at least 1:10 (CentOS is ten times as popular as RHEL). We don't actually record a difference between them in our license system (they both register as generic "redhat" with a major version), but we distinguish between them in our ticket tracker.
We had a bug in our installer recently where it wouldn't work at all on RHEL (I stopped testing on RHEL a few years ago because compatibility was so reliable that it never behaved any differently from CentOS). It took almost 24 hours to get a bug report about it (we see about a hundred new installations a day). So...it may be even less than 1/10th.
That said, we mostly operate in the low-end web hosting market: solo web developers, small design shops, web hosts selling to solo web developers, small businesses, etc. Our software rarely ends up in huge enterprise deployments. Even for our customers that are big businesses that use RHEL on the backend, they might have CentOS on their web server, because it's just a rental and that's what their web host installed for them. So, our numbers are certainly skewed toward CentOS because margins in web hosting are razor thin.
Decisions like that trickle up from what developers and IT guys want. There was a generation that was raised on Solaris, and those folks built the first generation of the web on Solaris-on-SPARC with Oracle databases. But then the next generation came along, and they hadn't learned UNIX in college on Solaris boxes. They'd learned on Linux in their parents' basement.
I came up during that middle era when the shift was happening. I was an early adopter of Linux, but all of the real training I got (my employer at the time paid for it) was Solaris-based. But even with the training and access to Solaris, I preferred Linux. I just had more comfort with it because it was my daily driver. When people I worked for were making decisions about OS, the recommendation they nearly always got from me was "Linux". And I believe that played out millions of times to get us to the world we're in today.
So, yeah, Solaris was competing with free, but not always at the business level...the part that mattered was "how many people with influence are using this OS as their daily driver?" And, Linux was/is a phenomenon. People love Linux. People loved Solaris, too, but it was a much lower number due to lack of access...early days of Linux, you couldn't even get Solaris without a SPARC box to run it on. Later, they made x86 Solaris free, but it was too little too late, and by that time Linux was better than Solaris on a number of extremely important metrics (package management and package selection, for example, but also in terms of just plain fun).
I can easily relate to this. I started with Xenix and moved to DG/UX at university; we moved to GNU/Linux after that server eventually died, so it was mostly used for OS-related classes.
At work, GNU/Linux was just an internal server; all the real work was being done on Solaris, HP-UX, and AIX servers.
So fast-forward to modern times, and even Microsoft implements Linux kernel syscalls in their new POSIX personality subsystem, instead of actually supporting POSIX.
POSIX support is in the API, not the syscalls. musl libc places great emphasis on POSIX conformance. So if you ran a musl-based distro atop WSL, that should give you what you want, or at least something closer.
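To make that distinction concrete, here's a minimal sketch in plain C (assuming a Linux or WSL box with any C compiler; nothing below is musl-specific, and the program is purely illustrative): it calls only POSIX interfaces, and it is the libc you link against - glibc, musl, whatever - that decides how those calls map onto the kernel's actual syscalls.

    /* Minimal sketch: POSIX conformance lives in the C library's API,
     * not in the kernel's syscall table. Only POSIX interfaces are used
     * here; the libc maps them onto real syscalls underneath. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* sysconf() is the POSIX way to query limits at runtime; whether
         * the libc answers via a syscall, /proc, or a compiled-in
         * constant is its own business. */
        long page_size = sysconf(_SC_PAGESIZE);
        long open_max  = sysconf(_SC_OPEN_MAX);

        char host[256];  /* fixed buffer; HOST_NAME_MAX varies by system */
        if (gethostname(host, sizeof host) != 0) {
            perror("gethostname");
            return 1;
        }

        printf("host=%s page_size=%ld open_max=%ld\n",
               host, page_size, open_max);
        return 0;
    }

Build it against glibc with plain gcc, or against musl with musl's musl-gcc wrapper; the POSIX behavior comes from whichever libc the binary links, which is exactly the point about running a musl-based distro atop WSL.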
Not really. Sun would have survived and thrived if it had thrown its weight behind Solaris x86. But they were too worried about cannibalising SPARC and when commodity kit started to beat them on price/performance they had nowhere to go.
I remember when we ripped out our 3 6800s and replaced them with 9 Dells, for way more power at a fraction of the price. Would have loved to recompile on Solaris, but the hardware savings easily covered the cost of a port...
> Not really. Sun would have survived and thrived if it had thrown its weight behind Solaris x86.
Except they did; anyone working in Solaris engineering at that time or even now could tell you that x86 was just as important as SPARC. From a technological perspective, they are completely equivalent.
For example, the ZFS Storage Appliance is x86-based, not SPARC-based.
From the end user point of view, Sun recommitted (and later lessened their commitment) to Solaris x86 support so many times over the years that it's hard to imagine what would convince a user that they were really "throwing their weight behind it." We really mean it this time... wait, where are you going???
But it is true. For a while, AMD-based Acer Ferrari laptops were the primary development hardware for Solaris, SPARC or otherwise. Everything showed up on i86pc first, then it would be built on sparc during the night.
SmartOS for example only supports i86pc; there is no sparc port.
I did my internship at Nortel. A decade later and having worked from small shops to IBM and beyond, nothing comes close to the quality of people/code that I saw at Nortel.
Software gets shittier because speed of development and time to market continue to outweigh any concern for quality. Code written 20-30 years ago almost always looks cleaner to me than the code I'm working on today :(
> Code written 20-30 years ago almost always looks cleaner to me than the code I'm working on today :(
You aren't looking at the average workaday C program from that era. I guarantee you that OpenSSL, for example, does not look cleaner than code of today.
Many people don't have the luxury of writing code and jumping ship. The vast majority of my career has not been greenfield development; it has almost all been maintaining pre-existing software, some of which is more than a decade old.
The challenging part for those of us in the software maintenance job is balancing the need to refactor with the need to add features. It colors your opinion about a lot of things. You start to evaluate methodologies, technologies, frameworks, and even library choices by their impact on long-term maintainability.
I frequently find myself at odds with a primarily greenfield developer in tool choice because I'm looking forward into the future and it doesn't look pretty.
Greenfield developers look at me funny when I say we should just use Spring or .net for new projects. I actually have to maintain my projects for years, and I've been burnt many times before.
Nortel was an unbelievable loss. To this day you still see Nortel phones in every corner of the world. If they hadn't over-extended themselves they'd probably still be on top of that market.
Even more disappointing was that RIM/Blackberry had every opportunity to take that mantle (business/enterprise IP telephony) and didn't even try.
The real answer is for Sun to have remained profitable. Unfortunately, due to their workstation focus, it probably wasn't possible. E.g. Digital was an excellent company, but also went down with (in their case) minicomputers. Christensen's frustration at how such a great company could get killed was part of the inspiration for developing his "disruption" theory.
I'd hate to think Sun's demise, in the alternative, was due to their hippy open-standards approach, which is very appealing to engineers...
That's a lot of very highly skilled staff which they won't be able to reassemble for another product or project for years. Those people will scatter to the winds now. It's a shame they lacked the imagination to make them do something new.
But then Oracle doesn't seem to have the organizational capability to start major new successful product lines anymore. They grow through acquisition.
I think most of the folks able to work on something different left a while ago. I am sure those working on Solaris support at Oracle saw the writing on the wall.
Also, some of those "firings" come with a decent chunk of money; maybe some of the folks who stayed made a rational choice to wait until fired, and will then move to a prearranged job somewhere else.
Anyone still left is likely far too comfortable working on Solaris and not really interested in working on something different, or, like you said, they would have left.
I’ve said it before and I’ll say it again: if Oracle acquires your company GET OUT. Do not wait, bail out immediately.
Oracle is expert at slowly bleeding teams while suppressing pay to milk products for all they’re worth. They are developer-hostile (including to employees). It is career death.
If Oracle acquires a partner you depend on, you have 12-24 months to find an alternative before they cut your legs out from under you and steal every last drop of profit from the relationship you have with your customers.
Don’t believe any promises to the contrary. Oracle promised ours would be different. They gave us pay raises to stick through the transition. It was all a ruse. Once we were in the jaws of the machine stack ranking took over, raises and bonuses were crap, and a lot of architecture astronaut garbage was rained down from above. They increased the price of our product by two orders of magnitude which lead to massive revenue gains. They simultaneously shrunk the team and claimed there was no money for bonuses or equipment. Developers have a 5-year laptop replacement policy.
I call FUD. Oracle acquired Sun in 2010 - that's seven years, and it wasn't until this past January and August that major project changes were made and large numbers of staff were laid off. "Get out now" seems unnecessarily alarming. Also, this may vary from org to org, but our (SPARC/Solaris dev) laptop replacement was every three years.
I'll admit that the way they've handled the recent layoffs is atrocious, with most employees finding out via FedEx notification and a pre-recorded concall message. Rumors of this major cut had been circulating for months. I've lost many good friends with 10, 20, 30+ years in Sun/Oracle. But I think Oracle gave hardware a fair shake.
Full disclosure: I worked in a Solaris dev/sustaining group until this past week.
I was part of the Hyperion acquisition. A relative of mine was dependent on Micros. I am speaking from experience, at least on the software side.
I’m telling people forcefully because Oracle has been doing the acquisition game for a very long time; they’ve figured out how to string people along to get the maximum value out of the acquisition. I personally lost out on thousands in pay by sticking around for too long.
Oracle as a company does not value engineers. A software engineer is scum compared to sales. If you want to be an engineer and make the real money (and get any respect) work in Sales Engineering. You’ll be away from home for 40 weeks a year but you get decent hardware and a small commission from the deals.
For those with career ambitions or self respect my original advice applies: get out.
I think that Solaris and SPARC had different fates in this regard: Solaris was dead the moment they (re)closed it in 2010 -- there was simply no way that Solaris was going to survive as a proprietary operating system (the era for which had passed half a decade before).
As for SPARC, Oracle does seem to have invested heavily, in part because of the elaborate self-delusion that Ellison seemed to have that he could develop magical database hardware that would somehow repeal the laws of physics.
As for the warning, it is indeed apt; Oracle is a mechanized and myopic profit-maximizer -- a remorseless and shameless corporate sociopath that lacks the ability to feel anything at all for its customers. Yes, your products will die of asphyxiation and incompetence and so on, but the much more acute damage will be to one's sense of purpose in the world: working for Oracle is a nonstop trip to either an existential crisis or a mercenary's existence (or both). And as many discovered on Friday, working for such an entity out of a noble (if misplaced) sense of duty or loyalty is pointless; Oracle feels nothing for you, its employees, for the same reason it feels nothing for its customers or its partners or the domain or the industry or society writ large: because it feels nothing at all.
The moment Oracle acquired Sun, early 2010, I called a meeting with my boss and boss's boss and said "Speaking as a Solaris admin, with several years' Solaris on my CV, we need to get off Solaris immediately and move to Linux." Took us a couple of years for most of it, and the last went when we finally got rid of Oracle and hence its SPARC box. Stack was substantially internally-developed Java.
> As for SPARC, Oracle does seem to have invested heavily, in part because of the elaborate self-delusion that Ellison seemed to have that he could develop magical database hardware that would somehow repeal the laws of physics.
Any idea what (if any) the academic or other foundations of this delusion were & how far Oracle got before cutting their losses? Rock seems to have been suffocated before the ink on the acquisition was dry so I'm assuming that's not it.
I love reading about the dead-end roads of computer engineering, especially those that had a few gigadollars driven down them.
OpenOffice was 100% garbage from day 1 until Libre actually showed up. It was never viable under Sun's watch. Source: I used it full time for 6 years, filled with hatred.
Agreed. Oracle does kill products and end teams, but no more so than other large acquirers. Hell, it is NOTHING compared to Yahoo! or HP. HP is the absolute king of ruining products and teams long-term, while Yahoo! has killed more acquired products than any other company I can think of. Oracle has good benefits and takes pretty good care of its employees, but they're old-school, so people don't get to work at home and have to drive to Redwood City every day. Not the worst, but not the best either.
That. I was at PeopleSoft when we got acquired by Oracle.
Went to a bunch of architecture meetings, and saw that nobody had a hint of a clue. "Project Fusion" was supposed to fix everything... As far as I know they're still working on that, some 12 years later.
Then I tried to get myself laid off; there was supposedly a list you could put yourself on to be laid off with severance. After one month of waiting I'd had enough and quit. So for one month I "worked" at Oracle. Best decision in a while.
That said, I do have some engineering friends who work at Oracle, and they generally like it, so your mileage may vary.
My company was using BigMachines CPQ... While I personally think there are better options, the support under the now "Oracle CPQ" has been considerably better, and we have seen some improvements under Oracle.
We are planning to move to another CPQ in the next few months, but that's not because of Oracle at all, nor because the product got worse under Oracle.
But then we also use the Oracle DBMS as part of our product, and we are moving away because we hate Oracle licensing/support costs, and while Oracle's database can do a lot, we are only using a limited subset of its functionality.
I don't know. I left a company in 1992 that I thought at the time was imminently doomed. They had been acquired and layoffs were rolling. However, the location didn't actually close until last year. 24 years later. Not acquired by Oracle I should say. Oracle at the time was this odd thing that somehow implanted consultants into some of our business analysts' offices with better Sun workstations than we had for developing the product.
Roch Bourbonnais managed to write a few detailed technical blog postings about the evolution of closed-source ZFS in the past couple of years. If you read those it will be clear that Oracle's version of ZFS continued to improve after Oracle ceased developing it in the open.
Among the more interesting topics Roch wrote about were some enormous changes to the ARC and L2ARC, the ZIL, encryption, spa_sync, sequential scrub & resilver, which LBAs to choose when writing, and the scalability of rw locks. OpenZFS is still reinventing several of these (e.g. sequential resilver and persistent L2ARC are in GitHub PRs now), albeit in generally very different ways.
If he is able and willing to participate in OpenZFS development, the whole project and its close relatives (e.g. ZFS on Linux, OpenZFS on OS X aka macOS) will benefit from his having explored the invention and development of similar wheels.
I do not know if he is still with Oracle. Either way, if you move quickly, the blog is likely to survive until after Labour Day.
I was just remembering that we discussed this a few days ago. This really does suck, and I hope you can find some work at Nexenta or Joyent/Samsung or one of the other businesses that help develop illumos. There's a page on the illumos wiki with links to job listings: https://wiki.illumos.org/display/illumos/illumos+Jobs.
Sad to see the loss of diversity in the operating system space. Thank you SunOS & Solaris for all the goodies over the years - Zones, ZFS, NFS, AutoFS, DTrace, etc.
All of that lives on with Illumos and SmartOS and companies like Joyent. You can even run Ubuntu inside a zone.
But reactions like this happening now seem to indicate that people missed the attempted re-proprietarization of OpenSolaris, or at least missed the marketing for its successors. Here's what I see as kind of the canonical video detailing everything up to that drama point: https://www.youtube.com/watch?v=-zRN7XLCRhc As far as I understand it, Oracle has been pretty irrelevant to anything to do with Solaris since then.
It is very unlikely that Oracle will do the right thing. Oracle, as a company, is known to be ethically challenged. A company like Oracle that is known for its apathy towards its own developers cannot be expected to make careful consideration towards the software it is killing.
For those who have not worked at Oracle or have little understanding of Oracle's internal culture, I recommend this nice article about why James Gosling, the creator of Java, quit Oracle: http://www.eweek.com/development/java-creator-james-gosling-...
On the topic of Oracle and its corporate culture, you may also want to see part of the talk "Fork Yeah! The Rise and Development of illumos" by Bryan Cantrill[1]. The language he uses to talk about Oracle is rather colorful.
Is open-sourcing a codebase for an abandoned product somehow expensive or complicated? Are there legal or accounting issues? If not, it seems like buying a bit of good-will for the price of a bit of paperwork would be a useful investment.
If there is code in there that turns out to be copy-pasted from somewhere else, open sourcing makes it more likely that people will find it. That could be expensive (e.g. when some BigCo owns the copyright on code they have been selling for decades) and/or have even more serious consequences (e.g. when there's a GPL-licensed code fragment in there, and they linked it with a part they want to keep commercial).
Answering that question conclusively can be very expensive. They may not have a full history of the code, and even if they have, it may not contain all the metadata needed.
Valid point, so here is a similar, but different reason: patents.
They may fear that releasing the source opens them up to patent lawsuits, or they may have patents they aren't willing to give up and fear that open sourcing it without any patent clause will not earn them much goodwill.
That's the problem James Gosling encountered trying to convince Sun to release the NeWS source code for free. Parts of it like the font rendering code were owned by AT&T and other companies.
I was at Intel once upon a time when they open sourced some telecom stuff they wanted to get out of. The issues were crazy because of the size of the code base that had accumulated from multiple acquisitions spanning many different divisions in different countries over many years. We couldn't even find people who could tell us what large chunks of the code did, or whether they contained anything under patent, trademark, etc. It can be quite expensive, and the only reason we did it was that there were customers willing to pay for the effort.
It's entirely possible everything added since it was acquired by Sun is STILL under an open source license. Just because they haven't released the code or provided it to end-users under that license says nothing about how it's structured internally.
Sanitization. Remove all the things that shouldn't be there. I recall the cleaning of the code for the Mozilla open source project - places where developers got a bit... salty. Stuff that people really didn't want others to see.
Copyright. That intern who worked at SGI or NetApp and (un)knowingly reused, at their new place of employment, some of a project they had on their laptop that was actually software owned by their old employer. Scrubbing that, or getting proper (open source) licensing for that bit, along with all the legal headaches that entails. Remember that we're talking about code that has been around since effectively 1982 with Sun UNIX 0.7 (or SunOS 4 as Solaris 1.x in 1991).
Licensed from others. I recall that various parts of non-Sun operating systems had licensing for parts of NFS. It wouldn't be surprising to find that parts of Solaris had licensing from other companies too. Including directly licensed code likely wouldn't be compatible with the license from the other company. Removing the licensed code to make it linkable is an option, but a time-consuming one that diminishes the value of the overall project ("what do you mean I need a license from HP for something that DEC wrote?").
Your patents. Some open source licenses come with patent clauses. Sure, you can do the work to license those patents under the terms of the open source license... or choose a license that doesn't have them. The former isn't at all in the interest of Oracle; the latter is "Here's some BSD code... we don't know what patents are in there, but if you use them we will sue you."
Other patents (part 1). Surprise! In open sourcing the software, it is discovered that some intern reused methods learned at another company in a part of the product that has survived to today. Now you've got the lawyers looking for blood for a decade of royalties.
Other patents (part 2). Recognizing that NetApp didn't want ZFS to be open source, and that there were some cross-licensing aspects with Oracle in the ZFS settlement... they'd probably have something to say about it. There are probably some WAFL patents in it by now.
Competitors. There are things in Solaris that would help competitors to Oracle products. Yes, open sourcing with a copyleft license would mean that those competitors would find it harder to use the software, but it's still out there.
Partners. There are things in Solaris that help partners. Open sourcing Solaris, while providing goodwill to the community, reduces the leverage for good deals with those partners. Some of those partners may also have an interest in parts of Solaris not being open source.
So... nope. There are lots of legal issues and many business reasons to keep it closed source.
Consider also that this covers development from March 2010 (the first closed-source release under Oracle - prior to that it was CDDL) until October 2015 (the last release), and weigh the amount of value that was added against the amount of effort required for all of the above.
I worked at Oracle for 2 years, and I have worked at other Fortune 500 companies. Oracle, by far, is the most soulless and lethargic company among all of them.
I don't understand the negativity towards using consultants and off-shoring.
Using consultants is a business decision which helps the company hire people for the short term of the project and also offload risk. Not all companies have the capability to handle all kinds of risk. For example, software companies don't specialise in financial models and investing. So they don't take on financial risks by investing in derivatives and other instruments to make profit. Whenever possible, risk outside core competencies is outsourced. This is good for the company as then it can focus on the core business and make money.
Off-shoring is bad in the sense that jobs are lost in the local economy. But this is again similar to having a factory in China as opposed to San Francisco. It brings in more expertise at a reduced cost. Off-shoring helps make things cheaper in the end. For example, when your insurance company uses off-shore consultants to build its software, the software costs less, which translates directly into lower insurance premiums. The same goes for many other products.
While I understand that software is a different beast to build than toys or other products, once it is built the normal theories of economics still apply.
They both reduce institutional knowledge. This tends to be extremely detrimental in the long term.
Outsourcing software is like burning all the design documentation for hardware you're having someone else build. Even something as well known as injection molding tends to work vastly better if you have experienced staff as part of the design process. And software is worse than that, because the design process is part of development, so outsourcing means you don't even understand the problem space.
If I wouldn't pay the consultants in the office across the street to work on my core IT (outsourcing), why would I pay someone separated by oceans, timezones, language, culture... (offshoring)?
While working in healthcare, our corporate overlords repeatedly rammed the "blended shore" model down our throats. Which never worked. (Got to know some nice people, though. So there's that.)
The easiest part of our job was the coding. Requirements gathering, analysis, project management, customer relations, QA/test, etc. - even working shoulder to shoulder with our clients, it was hard enough. There was no way our work could be further delegated while still delivering something useful.
One other big concern is the incentive structure. A contractor will do whatever their contract incentivizes, and anything as complex as software has plenty of areas where that can be gamed or will encourage bad outcomes. Charging for defects encourages litigating each bug; not charging encourages skimping on QA; etc.
Not having in-house expertise has the really big problem you mentioned, because the only way to avoid these pitfalls is to have experienced oversight, and that's almost inevitably the first "expensive" staff cut.
Off-shoring takes all of those problems and amplifies them with a big communications latency hit. I've never seen that go well except when an entire product can be handed over, including the management.
I agree with your point. It's just that not all consulting or outsourcing is bad.
A decade ago, at least in my experience, Chinese products were synonymous with bad quality and were not considered reliable. Nowadays, while those kinds of products do exist there also exist very good products designed and made in China (eg: DJI, OnePlus, etc.).
I reckon it's still early days for software, and going forward we will learn how to build higher-quality standard stuff and even high-quality novel products. Generating this stigma against consulting and off-shoring before we have had time to properly analyse the cost-benefit tradeoff doesn't help anyone.
They may well be bad in many cases, and the cases where they did not work should be publicised and studied. But making broad statements and correlating them with a bad work environment is not helpful.
I happen to do enterprise consulting, so I get to live through "us vs them" situations within companies with cultures similar to Oracle quite often, hence my example.
Oracle is lethargic from the perspective of its employees because the company is extraordinarily sluggish and apathetic.
The pace at which Oracle develops software, the flagship Oracle Database for example, is ridiculously slow. The Database team takes about 3 to 6 months to develop tiny changes (say 10 to 20 lines of code). In other Fortune 500 companies I have worked for, I have seen such changes take a few days (5 days max!). I am not kidding! What takes, say, 3 days to develop in a normal software company may take about 3 months to develop at Oracle. And mind you, Oracle Database is one of the premier departments of Oracle; other departments are even worse!
Oracle is also remarkably apathetic towards its employees. The link[1] shared by foo101 has a few anecdotes that highlight this apathy. In fact, when I read James Gosling's account of Oracle in that link, I thought, "Wow! This is so accurate. Even someone of the name and fame of James Gosling had to face the same lowly problems at Oracle that relatively unknown developers there face."
Oracle lives in a market protected by high barriers to entry and low customer expectations. Protecting corporate databases from corruption and unauthorized access is crucial to any IT department. Incentive to change the DBMS vendor is nil compared to the expense and risks of converting. And no one, from CTO to DBA, makes any demand to substantially improve the product, such as by rectifying theoretical problems with SQL that have been recognized for decades.
It's no different with SQL Server or DB2. Each vendor has its captured market, with very little power to enlarge its share and commensurately little need to innovate. Customer demands amount to operational window dressing. Why spend money on engineering when customer and vendor are both satisfied?
> Oracle lives in a market protected by high barriers to entry and low customer expectations.
This is false. Oracle has a very tough competitor called Microsoft SQL Server. In fact, Oracle tries to catch up with existing Microsoft SQL Server features with every release. Like it or not, NoSQL database servers like MongoDB, Elasticsearch, etc. are also competitors of Oracle. That is why Oracle was forced to introduce support for storing, indexing and querying JSON in their database. This was a completely new feature that required Oracle to develop new querying syntax, new querying mechanisms and new constraint-validation syntax. There are many such examples where Oracle is forced to improve due to competition.
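For a flavor of what that feature entails, here is a sketch from memory - the table and column names are invented, but the IS JSON constraint and the JSON_VALUE path syntax are the sort of new pieces being referred to:

    -- JSON stored in an ordinary column, validated by the new constraint syntax
    CREATE TABLE orders (
      id  NUMBER PRIMARY KEY,
      doc CLOB CONSTRAINT doc_is_json CHECK (doc IS JSON)
    );

    -- querying into the document with the new JSON path syntax
    SELECT JSON_VALUE(doc, '$.customer.name') AS customer
    FROM   orders
    WHERE  JSON_VALUE(doc, '$.status') = 'SHIPPED';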
> And no one, from CTO to DBA, makes any demand to substantially improve the product, such as by rectifying theoretical problems with SQL that have been recognized for decades.
This is true. But everyone from CTO to DBA makes plenty of demands to substantially improve the product in other ways, such as new features around scalability, robustness, security and auditing. This is why Oracle Database has seen a lot of enhancements to multitenancy in the last few versions.
> Why spend money on engineering when customer and vendor are both satisfied?
Oracle does spend a lot of money on engineering. Why? Because it has a lot of development to do in its database to remain competitive. If anyone thinks that the field of RDBMS is mostly stagnant and no new development happens in this area, that is a gross misunderstanding of this market. The database market is still very competitive, especially with Microsoft SQL Server leading the game with modern features, and with open source databases and NoSQL databases eating away at the market share.
See the following two URLs for examples of how Oracle has been adding new features in the last two releases:
While all this looks good in release notes, it is only someone like me, who has had the misfortune of being an Oracle developer, who can vouch that the development process and the development pace within Oracle are hopelessly archaic and painfully slow. Oracle still follows the waterfall model of development, for example. There are probably hundreds of reasons and contributing factors for this. A few of those hundreds, off the top of my head:
* Management that does not care about absolutely anything apart from their own promotion.
* A culture that rewards talking out loud rather than actual work.
* Top-down heavy handed management that provides zero autonomy to engineers, thus no motivation in engineers to innovate and improve engineering practices.
You need look no further for Oracle's captured market than the US federal government. When Snowden talks about the realtime interception, tracking, and monitoring of any and all electronic communications, worldwide, he's talking about the NSA using Oracle databases to do it. Ellison made the company successful by selling the as-yet-unproven technology of a relational database to the FBI (IIRC), and it's just continued from there. This is the environment that led Scott McNealy, then CEO of Sun, to famously quip, "You have no privacy. Get over it." He knew that the NSA was collecting everything it could, and storing it in an Oracle database running on Sun hardware. Well, commodity hardware caught up, but Postgres is still struggling to match the features Oracle had 20 years ago, so Oracle DB is still the king of enterprise databases, where cost is no object. Ellison owns government IT, which is what leads him to be so smug about his success. Even if every Fortune 500 company cut Oracle off, Oracle would continue to rake in piles of cash from the government. It is the definition of a captured market.
You buy an Oracle DB because (a) you have a fleet of Oracle DBAs who agitate for you to keep Oracle, (b) plenty of software you want to buy only works with Oracle (or only has Oracle as the top-tier option) and once you've bought Oracle and hired Oracle DBAs it's hard to justify having other things, and (c) Oracle are very good at selling their product up the management chain; their enterprise sales teams speak the language people who run companies do, and IT departments don't.
> Also, asked whether in hindsight he would have preferred Sun having been acquired by IBM (which pursued a deal to acquire Sun and then backed out late in the game) rather than Oracle, Gosling said he and at least Sun Chairman Scott McNealy debated the prospect. And the consensus, led by McNealy, was that although they said they believed "Oracle would be more savage, IBM would make more layoffs."
OpenSolaris is a discontinued, open source computer operating system based on Solaris created by Sun Microsystems. It was also the name of the project initiated by Sun to build a developer and user community around the software. After the acquisition of Sun Microsystems in 2010, Oracle decided to discontinue open development.
Having tried various Illumos-based operating systems, I couldn't care less if they reopened Solaris, with one exception: ZFS. I would like to see them alter its licensing terms so that it could be properly integrated into Linux.
I believe they should consider this because Btrfs, which Oracle itself started, is going nowhere fast, and because Oracle customers who run Linux would benefit as well.
Oracle won't do it because they are Oracle. They actually had zero valid reasons to close OpenSolaris to begin with, so I suspect they'll care even less about their dead project.
Sun also bought Cobalt Networks (for $2 billion), the company behind the Cobalt Qube, soon after their successful IPO. The Qube was a cool Unix server appliance, targeted at ISPs, etc. [1] I worked for a short while at a startup that used one of them.
Ahh yes... the days of the Software Wars. http://mshiltonj.com/software-wars/ (last update 2003) - it might be interesting to reimagine that in today's world. From the 2006 map, Oracle's assimilation of the lion's share of the "south" (MySQL, Sun, Java) and the battle between Apple and Google in the "north west".
That said, consider the flip side of the heterogeneous aspect. You were unlikely to be able to run software on two different platforms that could communicate in a meaningful way. It was duct tape everywhere. There was no "cloud" that one could get significant computing resources on. You could pay (much more) for time on a shell at uunet or another isp... or buy your own for $$$.
A 250MHz Octane MXE with 128MB RAM and a 4GB disk had a US list price of $47,995 in 1998. That's $72k in 2017 money. Consolidating on consistent technology stacks has reduced costs to the point where we think very little about the hardware anymore - and by making those decisions unnecessary it has allowed for improved portability of skills, and for not worrying about the abstraction of the hardware (until it leaks).
Good points, and I'm not arguing with them. Things are now far more convenient than they were. What's missing is the variety of approaches to solving, well, everything. Everything was different from system to system (especially mid-80s to mid-90s). People were still figuring out what to settle on. From the perspective of someone who likes to tinker with stuff, it was a blast. From a business, and maybe usability, perspective - it was a nightmare.
Now, when most stuff is settled-upon, it's like cars. There are differences, but not really. Turn lights to your left, wipers to your right, wheel turns left and right, there's a manual stick or automatic, pedals... it's all there, where you expect them to be. And that's good! Times were a bit more pioneering back then, naturally.
Maybe! :) But when I try to look at it objectively, as far as I can, I see there aren't any big paradigm shifts or explorations in OSes and computer architectures anymore. With reason: the industry has matured and moved from tectonic shifts to iteration.
When I was in undergrad it used to drive me nuts when I sat down at a terminal and the switches to `ls` or `ps` didn't work how I expected, and then I had to look them up. Things didn't seem appreciably different in a way that was interesting or useful, just different for the sake of difference.
Ha, that's funny. I had almost forgotten about the SunOS-vs-Solaris wars. Everybody wanted to hang onto SunOS as long as possible. Seems ironic that (many) people feel this way about Solaris now.
Remember the poster they were giving out at Usenix with a picture of the BSD Tie Fighter blowing up the AT&T Death Star, and the mathematical formulation "4.x > V for all values of x from zero to infinity"?
It just didn't make sense that Sun kicked AT&T's ass with BSD Unix, and then capitulated to them by switching over to SVR4.
Yeah, yeah, I'm sure there was some business reason, but it was a bitter pill to swallow.
Will some illumos-related projects be interested in those people?
And on a related note, I suppose Oracle won't open their diverged Solaris even if they plan to shut it down? In the past, Sun also planned to open their Sun Studio C/C++ compilers. That never happened, because of Oracle.
Yes, absolutely. Based on the number of conversations I have had in the last 72 hours, I can assure you that the illumos community will be gaining some terrific talent over the coming weeks and months!
I remember from my college days having to use various Sun/Unix machines where the delete key was invariably misconfigured, particularly with vi or some other editor. I had to figure out how to fix this myself. I thought it was part of some hazing ritual until I saw the same shit on a machine at my new job. Thankfully, with the advent of free software distributions, these little details started working out of the box.
Ha. The defaults on Sun workstations (at least mid-to-late 90s) weren't that bad. HP-UX seemed to default to making delete the interrupt character, at least in a shell when I telneted into them remotely.
You need billions and billions of dollars to keep such a project alive. And even then, if you can't attract the right amount of interest you are doomed. It is bloody expensive to keep these people employed, and I guess Oracle is not competent enough to manage these resources. It is sad to see Solaris go, but this is what was bound to happen to any proprietary technology with a limited stream of revenue. I do not think customers cared that much about what they ran on as long as their apps and DBs were fine. Since Windows and Linux are way cheaper, it was a matter of time.
> Oh you seriously believe linux is not where it is due to huge sums from intel,samsung,red hat and countless of others ?
With the exception of Red Hat I totally concur that Linux has moved very far because of financial infusions from industry. But at the same time that 'toy operating system' was already quite usable before any of that happened.
As for Red Hat, they exist because of Linux, not the other way around.
We can agree to disagree on that point. Redhat is basically the universally supported platform in the enterprise. You want to call in for a support case on that SAS HBA? You need to be running Redhat. You have a flaky NIC? Redhat. New Fibre Channel HBA? Redhat.
And rightly so, the vendor on the other end needs to know they've got an actual live person to work with on troubleshooting. Without Redhat I have no doubt Linux would still be alive and well, but it would NEVER have gotten the foothold it has in the enterprise today (coming from someone who worked at one of those hardware vendors back in the day and tried to push for support of other distributions).
I lived through these times, trying to port and support major server applications on Linux. Your thoughts on this are entirely inconsistent with my experience.
Honestly the fact that this can happen seems like a better reason to stay away from proprietary software than any other reason. Even software that is open source but owned by some company.
On a not completely unrelated note, there was something I read in the Kubernetes Steering Committee bootstrapping process that sounds really logical in the context of this news.
In the Kubernetes Steering Committee, there will be no more than 33% membership from any given company. So if Docker, and CoreOS, and Weave, and Google, and Microsoft, and Amazon all come to the table and somehow get equal representation - which seems possible given how I understand the voting process - that's great, and no one company can "silent EOL" the product of Kubernetes.
And even if one of those companies is significantly over-represented within the list of members of standing that will vote for the Steering Committee members, and the second of those companies significantly eclipses any of the remaining nominees, the steering committee will still probably be in the hands of at least 4 companies.
I'm really quite miffed about a few well-liked, community-driven things suddenly getting shut down by their owners lately. Not going to name any names, but in meetings to determine our organization's future direction in software, it's going to have to come to everyone's attention that, in general, overall momentum is a whole lot more important than corporate backing.
So far they have only announced that they are going with a heavily modified ARM ISA for their next-gen supercomputer (I forget the name), while saying yadda yadda remain committed to SPARC yadda yadda. At least as far as I know. It probably still means life-support-only for SPARC.
We shall remember Solaris for all the good things that came out of it!
Highlights
ZFS, one of the best file systems, including copy-on-write snapshot functionality.
Solaris zones. Proper containers before Linux and LXC/Docker existed.
DTrace for application and kernel performance analysis.
And the Sun hardware, workstations and servers, that Solaris powered. I still remember watching 4th of July fireworks being live-streamed remotely on a Sun workstation running Solaris.
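For anyone who never got to use them, a rough taste of the first three - a sketch from memory, so treat the exact invocations with care; the pool and zone names are invented:

    # ZFS: cheap copy-on-write snapshot, and instant rollback
    zfs snapshot tank/home@before-upgrade
    zfs rollback tank/home@before-upgrade

    # Zones: an isolated container, years before LXC/Docker
    zonecfg -z web1 'create; set zonepath=/zones/web1; commit'
    zoneadm -z web1 install && zoneadm -z web1 boot

    # DTrace: count system calls per process, live, on a production box
    dtrace -n 'syscall:::entry { @[execname] = count(); }'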
I feel the same way as a client. Everything I've used that they've purchased has turned out for the worse. Be it neglect or price increases, the promises always exceed what's actually delivered.
Moreover they're transparent about their desire to lock you in and then press that to their advantage.
There are few companies I actively avoid, but they're at the top of the list.
I came here to reminisce about the beauty of Solaris from a long time ago, and your comment struck a nerve.
Sad news all over the industry. Red Hat has been creating a huge mess throughout Linux with systemd for years. Solaris killed. Thankfully ZFS lives on through FreeBSD. MariaDB was forked at the last moment. Some of my work depends on VirtualBox.
Quote from American Gods:
“A single product manufactured by a single company for a single global market. Spicy, medium, or chunky! They get a choice, of course! OF COURSE! But they are buying salsa.”
Every major Linux distro uses systemd. I know its opponents are vocal, but a bunch of us are silently enjoying the simplicity of .service files and systemd timers.
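For the unfamiliar, the simplicity in question looks roughly like this - a sketch of a service plus a timer standing in for a cron job (the name and path are made up):

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/backup.timer
    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

    # enable and start the timer:
    #   systemctl enable --now backup.timer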
In the meantime, a bunch of us developers are desperately trying to figure out what the fsck broke this time, drowning our sorrows about this new life where we can't debug anything that happens at boot, and frantically setting up BSDs on our laptops at home, so that we can at least get a break from this mess when we're at home.
I (thankfully only) used to do Linux BSPs in a former life. In the last year or so of doing that, I think we spent about 15-20% of a project's time debugging systemd problems and working around it being too smart for its own good. 20% for the bloody init system sounds fine until you realize the rest of the time included stuff like writing or expanding device drivers.
"I (thankfully only) used to do Linux BSPs in a former life. In the last year or so of doing that, I think we spent about 15-20% of a project's time debugging systemd problems and working around it being too smart for its own good."
It would be great to see in-depth experience reports for systemd, good and bad. The overwhelming majority of anti-systemd commentary has just been noise for so long, and as somebody who is very much in favor of systemd, I'd love to see some real discussion and actual informed criticism.
I avoid systemd like the plague, but have hit these bugs anyway:
They keep having embarrassing security exploits (like remote code execution in the DNS reimplementation, or handing root to strange usernames by design).
Many people say they broke logging, because the binary files create administrative nightmares and are flaky by design. Ubuntu LTS's systemd logging subsystem definitely broke a bunch of production machines I work with by stealing control from rsyslog during a botched update. We have a bunch of tooling for log processing and shipping, and the systemd binary format is a usability nightmare compared to .gz files.
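For what it's worth, there is an escape hatch for pipelines like that, though you shouldn't have to reach for it: journald can be told not to persist its binary journal and to hand everything back to rsyslog. A sketch - these are real journald.conf options, to the best of my knowledge:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=volatile       # keep the binary journal in RAM only
    ForwardToSyslog=yes    # pass every message to rsyslog, so the existing
                           # .gz-based processing/shipping tooling keeps working

    # one-off export of the binary journal to plain text for old tooling:
    #   journalctl --since yesterday -o short > /var/tmp/yesterday.log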
Being a member of the "video" group is no longer enough to use DRI or the new rootless X11 stuff. One of the crucial system calls has been hardcoded to only work when invoked by UID zero. The kernel maintainer rejected a one line patch to fix it. The argument is that systemd can launder the call through its own authentication subsystem, so the kernel doesn't need to implement workable permissions for /dev/ anymore. I have no idea how far that brain damage has spread. Just "chown root:root /dev/video; chmod og-rwx" if you want systemd! Don't proactively break every non-systemd distro out there by intentionally crippling the kernel API!
I've noticed that systemd Debian-derived desktops age poorly - uninstall a bunch of packages and reinstall, and you will find you can no longer log in correctly. I never managed to root-cause it. It looked like init issues, with multiple repros across multiple OS vendors.
All this talk about BSD and launchd has me thinking Shepherd might be on to something. launchd is XMLed up from here to Sunday, whereas Shepherd gets all the benefits of Scheme's s-expressions being tree structures while also having a great scripting language (Scheme) at your disposal.
Except FreeBSD is considering moving to something like launchd or systemd itself, because honestly something SMF-like isn't a bad idea compared to init scripts, which are rather bare-bones.
No they are not. The former head of Launchd development at Apple was pushing to have FreeBSD adopt Launchd, but got rebuffed. So instead he has forked FreeBSD into NextBSD.
FreeBSD will never, ever adopt systemd. That I am certain of.
Even if the license weren't an issue, and you could easily yank out all the non-portable Linux code and adopt it into BSD, the code quality and engineering of it would not meet BSD standards for adoption.
TrueOS just started using OpenRC and seem happy with it so far.
The TrueOS people still need to add proper service management and a decent service management toolset.
I suggested to them back in January 2017 that since OpenRC has s6 integration, they might do well to add s6 to that to gain full service management. I never received a reply. I haven't heard that Laurent Bercot was contacted, either.
I personally use the nosh system and service managers on FreeBSD and TrueOS, of course. I just wrote up a more detailed account of how I used them on TrueOS to run the PC-BSD desktop login and chooser utility under proper service management, and to improve several parts of that subsystem.
I don't. I just want the core components of Linux to be 1) done by people who have enough experience to know the perils of overengineering and who have done some serious software maintenance and debugging themselves, 2) done by project leads who don't think that changes that break things in a major way can be buried somewhere down in the changelogs, and 3) done by people who don't react in a jaded way to every piece of criticism, regardless of its tone.
I don't think Linux would have gotten this far if its core hadn't been influenced by the design principles of Unix, and if the kernel project weren't run by a person who takes care over incompatible changes. Look at ReactOS or Wine. I'm worried that systemd might prove to be a major headache in the future.
I'd like something better than UNIX System V, and something that I can understand.
In my experience, systemd fails on both points. For example, an understanding of user permissions under systemd is probably beyond 99% of developers' expertise.
> As an application developer and hobby sysadmin, systemd is a godsend over the misconfigured and broken stuff distributions have delivered for years.
And that is roughly the size of the problem. If you're an application developer or a hobby sysadmin then probably systemd is good for you, but if you're an experienced sysadmin it spells 'fixed what wasn't broken' and it re-introduces many issues that were already thought about, taken care of and laid to rest.
Bingo. Way way too much of Linux these days is by devs for devs, and screw everyone else.
Note the rise of "devops", which is basically about creating a straight line from devs to management, so devs can sideline ops and their naysaying of the latest shinies devs want to sprinkle their projects with.
Another thing is that there is less and less interest in maintenance, because maintenance is not fun. The GNU generation is slowly leaving, and is being replaced with the "fun" generation that is hell-bent on rewriting working, if crufty, systems using the latest language fads over a caffeine-fueled weekend...
There are two jobs that were rolled into the developer role that we will sorely miss in the longer term: the first was 'analyst', the second 'sysadmin/sysop'.
Dumbing these down so that these highly skilled and specialized jobs can be run as a part-time sideline without relevant training is one of the main reasons the state of software is what it is.
Exactly this. I have seen hundreds of systems built by experienced graybeard Unix admins that have been steady as rocks, some lasting 15+ years without ever a peep of trouble. However, I have seen many "new" boxes built by "devops" that can't survive a single upgrade cycle and, if left alone for more than a few weeks, usually eat themselves and cause an outage.
On one hand, I am a little thankful for the devops box: it will be crashed long before it gets compromised through neglect.
Devops is doing their job. No one really cares about the uptime of a single box anymore at cloud scale. Servers are cattle, not pets. If one instance goes down, spin up a new one to take the load.
P.S. this is why, contrary to graybeard whinging, boot time matters and sysvinit cannot possibly keep up with systemd in the cloud. The faster your instances boot, the less capacity you lose while they're down.
This is only really true for compute instances that don't have big caches or long warmup times.
Even then, I think you'll find you want the bare metal the instances run on to have high uptimes (on the order of years, not minutes), since the hardware with optimal $/perf can fit more and more workloads per machine (I think this is all Moore's law is doing to help compute these days). That means you need a decreasing number of physical machines to hold your workload. At some point your "cloud" has 10 nodes instead of 1000.
Fun exercise: "cloud scale" code is typically 5-100x slower per node than single-machine scale-up code.
How much money would you save by consolidating smaller workloads to big machines? More importantly, how much developer productivity would you gain by eliminating network latency / marshalling for internal requests?
I think you'll see an increase of developers "coding around" devops over the next few years. I could be wrong, of course.
> if you're an experienced sysadmin it spells 'fixed what wasn't broken' and it re-introduces many issues that were already thought about, taken care of and laid to rest
As an experienced sysadmin that's a really sweeping claim to toss out without details or supporting evidence — the latter being especially important given the amount of hyperbole bandied about.
The next-largest SysV replacement was Upstart, which solved many problems but had curious oversights (e.g. restarting with a delay or backoff, needing many releases before adding stdout/stderr logging or launching as a user other than root), and then SMF and launchd, which weren't compelling enough to overcome their respective platforms' drawbacks. Yes, you can install alternate init systems or run things under something like supervisord, but supporting that was quite tedious compared to a solid standard init.
As a software developer, being able to target one init system which has all of the features I need and no real drawbacks is similarly a very nice change from the past, when I needed to support variants for each major Linux distribution while wishing they'd hit feature parity with Windows NT 3.1 (1993!).
The fact that every major Linux distribution has adopted systemd suggests that those reintroduced issues which aren't gross exaggerations aren't as important as claimed; similarly, the features commonly dismissed as unnecessary inevitably turn out to be useful to part of the larger Linux community, even if a particular detractor doesn't share those needs.
At the risk of a No True Scotsman fallacy creeping in here, it seems to me that anybody who has enough time to be a developer likely isn't a full-time sysadmin. Of course there are some miracle workers out there, but I've met enough sysadmins to know I'm not one of them, even though I can probably hold my own on the UNIX command line and manage to get through a working day without feeling I've wasted my time.
> The fact that every major Linux distribution has adopted systemd suggests that whatever reintroduced issues aren't gross exaggerations aren't as important as claimed
It might simply mean that when Red Hat moves, the crowd follows, because it is impossible to sustain the parallel development of two init systems.
And I'm all for that; I'd rather have one system than yet more fragmentation. But it feels as if, in this particular case, that decision was not arrived at in a way that takes into account all the criticisms leveled against the upstart new init system. (Pun intended.)
I've both been responsible for hundreds of systems as a full-time job and written software as a full-time job. Over a couple decades the ratio has changed from job to job but I'd also describe my style of system administration as automation-heavy and had been encouraging people to think of the job as writing code to manage systems rather than manual work since around the turn of the century so there was definitely non-trivial crossover even before the DevOps coinage became popular. (And, lest that sound vain, that wasn't exactly something I came up with. It took a long time for the automation community to go really mainstream – HPC was years ahead by necessity)
I won't claim that systemd is perfect or that I'm happy with every detail of its development history but in practice I find it's not something I need to think about very often. That was true of later Upstart releases, too, so I mostly don't get the bitterness some people have: flip a coin and either way we have a nice quality of life improvement over SysV. Yes, Red Hat carries a lot of weight but they also employ a ton of open source developers so it's not like that's unearned.
We need a system that people can install at home, and that never needs someone to configure or maintain it.
We need a system that people can throw on a VPS, and that needs no maintenance or configuration.
We need a system that a company can deploy over clusters of tens of thousands of machines, and that just works.
If you need a human to manually configure this stuff, it's broken. The only situation where systemd isn't useful is when you're a small company, but one large enough to afford an ops guy for every issue there is. Generally, if you need ops to configure the base OS, you're doing stuff wrong.
This whole point about devops and systemd is about automating sysadmins away, and this is a very necessary and worthy step.
>The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.
You can make all these quotes, and comments, but they don’t help in the real world.
If you want every child to be able to run Linux, and Linux to run on physical Internet of Things devices that are supposed to run for decades without maintenance (because you cannot access them), then you either have to build something so this can work,
or you end up with Windows 10 IoT and Windows 10 Cloud running everything.
No one's gonna hire a sysadmin so they can manually upgrade every lightbulb and fire alarm on the planet, or so they can manually upgrade all of the servers running your containers.
Do you think Google has sysadmins manually pulling every update for every server, writing every config? Do you think they will just because you eliminate automation?
No, I side with the KDE faction: everything "just works" by default, but if you want, you can configure everything, customize everything, and change everything. And if you don't want to, it still "just works".
And that’s exactly what systemd does. It provides a baseline that just works, but you can always dive in and modify everything.
> So number of opponents is very minor but they make lots of noise. While the 'normal' people, those who really enjoy dictate are great majority, alas, not very vocal. This is communist lingo bro.
You never hear anyone post "oh wow, I love systemd/my Dell laptop/this website" unless specifically asked, or when answering seemingly biased or sometimes incorrect "bad" reviews. I personally enjoy managing my computers much more since systemd.
That's like other technology stacks. For instance, some people were burned by CMake in 2008, didn't try to understand its logic, and you still see them complaining on forums to this day, so you might think it is a shitty, unused build system; but when Microsoft and JetBrains did actual surveys, it came out as very strong (more used than make) and growing:
Then state them. Simply posting that systemd is bad, and appealing to your position as an experienced UNIX user without giving specifics, is just signalling at best.
I probably count as a Veteran System Administrator myself, and personally I am tired of anti-systemd people coming into conversations that have little to do with systemd, and trying to hijack the conversation with content-light posts.
I have used systemd only as one of the data points on the overall state of the industry; please read my initial post. It is an important point for me because I used to use Linux everywhere and rely on its modularity and security. I am looking at the big picture and how trends correlate, not focusing particularly on systemd. The rest of the conversation emerged because the 'silent majority' (lol) focused on systemd only.
Are you aware of the feature creep and the takeover of every sane unix/linux utility, usually with very questionable results - changing su or the DNS resolver for the worse?
Do you know about the number of vulnerabilities introduced by systemd, then dismissed as not-our-bug or wontfix?
Are you aware of systemd tying itself into every other program they can get their hands on? That makes it harder and harder to run a properly secured machine that does not have systemd. Not saying this to you, since you know system administration, but: this is a BAD thing.
You probably know about all the vulnerabilities introduced by systemd? Coupled with the inability to not use systemd, its huge attack surface makes Linux much less secure and more monolithic.
Back to my initial intention: there is a huge trend throughout the industry that kills variety and ties systems into a monolithic mess. Systemd is just one of the data points. I care the same way about Oracle buying Sun, with one of the main reasons being MySQL. Thanks to the MariaDB folks that lock-in didn't happen, but I was aware they would try it as soon as Oracle got its hands on MySQL.
No need, since I can choose what I start and in what order, and I can check every simple script myself. They were short and did one thing well. Good luck doing that with the 300K+ lines of code in systemd.
Arguments like this really don't help systemd and show either ignorance or worse.
I had 1400 init scripts on my systems, each over 800 lines long, before I switched to systemd.
And in a language without linters or typecheckers.
It's literally easier to read the entire systemd code base than to try to debug the interactions between these init scripts, all slightly buggy, interacting in slightly unexpected ways, and mostly working, though they never should have.
> I had 1400 init scripts on my systems, each over 800 lines long, before I switched to systemd
There is something wrong with those numbers. Can you upload them somewhere or let me know what distro it was? I am curious to take a look.
If there are 1400 of them, each over 800 lines long, then you have 1.1M+ lines of init script code, which doesn't seem right.
How many of them did you use daily? All 1400? 5? 10? Because checking the sanity of ten of them, even if they are a whopping 800+ lines of code each (I'll have to see that), can't be compared to over 300K lines of monolithic systemd.
> How many of them did you use daily? All 1400? 5? 10? Because checking the sanity of ten of them, even if they are a whopping 800+ lines of code each (I'll have to see that), can't be compared to over 300K lines of monolithic systemd.
All of them. It was over several servers, running different distros, with similar services, all with similar but slightly different init scripts for the same packages, and all having to interoperate. All scripts came from the distros' packages.
All of them were running daily, and many of them were constantly causing issues.
In fact, I have a single sysvinit script left on my systems, and it’s exactly that one that doesn’t work reliably.
I have been using systemd on Arch since (I think) 2012 - as soon as it was available, anyway. Is that enough systemd experience? And no, sadly, it is still not good.
> Red Hat creating huge mess throughout linux with systemd for years
Care to elaborate on the 'mess' created? I highly doubt you even used it yourself; this sounds like something you've heard and are repeating for some cheap karma. The init scripts used before were the real mess, if you ask me. It's getting really tiring to hear these systemd rants with no good reasons to back them up. I guess ranting against systemd is the cool thing to do, just like calling Apple users sheep once was; there's no need for facts, just hyperbole.
While systemd isn't all bad, and is in some ways surely an improvement over init (not that that is hard), there are also very questionable architectural decisions that have been made in systemd.
Next up, you have the attitude of the two lead developers, Lennart Poettering and Kay Sievers, and the way they handle community interaction and bug reports. If your usage doesn't fit into their rather limited view of how you should use your system and something breaks, it's most likely not a bug they're going to bother fixing.
Last but not least, systemd is being forcefully pushed down our throats - not with well-reasoned technical arguments, but with mostly emotional arguments about what they think is best for everyone. And since more and more independent functionality is being integrated into and replaced by systemd, it becomes ever more tedious to maintain software without also hooking into systemd.
It's probably equally tiring for the people reading this to see you drag up the old false dichotomy of "the init scripts used before". The world was not van Smoorenburg rc before systemd came along. (This is especially ironic given that we are on a page discussing Solaris, for goodness' sake!) If you want other people to avoid tired old fallacies, set a good example and avoid them yourself.
You couldn't care less about karma? If you're downvoted, it is because your post is downright rude and full of fallacies: argumentum ad verecundiam (stating your irrelevant age and experience, stating you lived before the internet existed), argumentum ad hominem (calling a random person on the internet whom you don't know a kid), non sequitur (none of what you wrote directly supports your claim).
My explanation for the systemd hate is that it isn't for technical reasons; it's cultural. Specifically, people dislike change. Older people especially don't want to relearn fundamentals which have been reliably stable throughout the years. Young people, OTOH, lack that connection and are more open to change, provided they agree with the rationale.
> OTOH, lack that connection and are more open to change provided they agree with the rationale.
A nice way of saying that young people need to repeat the errors of the old.
As for systemd - I wouldn't mind if it were executed properly. Boot speed is the worst argument you can make for systemd - with ubiquitous SSDs it doesn't really matter.
At what point did the OP focus on boot speed? Systemd has a bundle of other useful properties, like simple unit files and handling the entire boot process.
If you do want to talk boot speed, compare a sysv install versus a systemd install on an 850 Pro. There are a notable few seconds of difference!
LOL. I'm older and have 35 years of experience in technology.
Doesn't mean for one hot second I think I can just puff my chest up about a topic and blow people off when they ask a perfectly good question.
For me the jury is still out on systemd. My biggest concern is that it seems to be slowly taking over everything, and thus violating the core unix philosophy of "do one thing really well". It feels like systemd was started to "improve startup times" for desktop users so they'd be happy. Meanwhile, servers were sort of forgotten about, and many of the complexities added by systemd make getting things done harder for day-to-day system admins. I've made some units myself, and I like the many options I have, but honestly it's not a daily job. Writing three lines of bash in /etc/init.d was way quicker and easier to reason about in the heat of "getting shit done" - see the comparison sketched below.
All this said, I have been using systemd on my machines for a while now. At first I would back it out the second I created a new install image, but now I'm trying hard to learn it and understand it.
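To make the trade-off concrete, roughly what both sides look like (a sketch; the daemon name is made up, and the init.d version omits the pid-file and restart handling a real script would accrete):

    # the quick old way: /etc/init.d/mydaemon
    #!/bin/sh
    case "$1" in
      start) /usr/sbin/mydaemon & ;;
      stop)  killall mydaemon ;;
    esac

    # the systemd way: /etc/systemd/system/mydaemon.service
    [Unit]
    Description=My daemon

    [Service]
    ExecStart=/usr/sbin/mydaemon
    Restart=on-failure

The unit buys supervision and restart-on-failure for free; the script buys immediacy. Which matters more depends on the day.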
Glad to read there are people with such long experience who still have an open mind and are willing to unlearn and relearn. It isn't easy to keep that spirit alive as you age.
The unix core philosophy of "do one thing really well" is such a cliché, though. "Do one thing really well", yet using a monolithic kernel (Linux, Solaris, *BSD). Microkernels like OpenVMS and GNU Hurd allow one to restart (including hot-patch) a part of the kernel. The same is true of running something like Qubes. Apart from the kernel debate, there's tons of monolithic software. Software statically linked on a commercial UNIX? You bet. Plus you use a full-blown DE, a web browser, Emacs. Vi? Sure, Vi. Yet people use Vim with all kinds of plugins.
> My biggest concern is that it seems to be slowly taking over everything, and thus violating the core unix philosophy of "do one thing really well".
The larger attack surface, the reduced "git 'r done"-ness when you're in the midst of a hot outage, and the increased complexity of tracing what happens give me concerns. Some are touched upon in this StackExchange conversation thread [1], but there are lots of good threads elsewhere on the Net along these and other lines. Personally, I'd rather see the entire idea of "booting" be looked at again.
The reason sysadmins value the "git 'r done" aspect of System V init is that servers are not booted frequently. But init scripts are changed more frequently than servers are booted, and business application teams forbid booting the server more than absolutely, utterly necessary - and the sysadmins wanting to boot to test a modification to an init script doesn't count. Dev/QA/pre-prod change control environments help, devops-based source control discipline helps, but the next time a server is booted is always at least a "sideways-glancing-to-see-what-breaks" moment for many a sysadmin. When the startup sequence breaks somewhere, it becomes a hot outage, especially if correcting it requires application-specific domain expertise outside the OS. In the middle of such a hot outage, the ability to get closer to the problem domain within the shell script is appreciated. Systemd's init compatibility indirection layer helps, and hopefully some thought will be given in the future to streamlining this layer.
The entire notion of "booting" has rubbed me the wrong way for an increasing amount of time, though. Microkernels tried to address this, but they never caught on. Solaris and AIX try to address this, and Linux is exploring it, with their live kernel update features, but those don't really do much to help higher up the stack. The best I can do to mitigate this itch for the time being is highly-available three-node clusters: regularly moving the application to one of the other nodes, booting the inactive node, and testing changes on that boot, plus the aforementioned devops orientation and source control. Having an OS that lets me "re-home" a running application, Tandem-kernel-like or VMware-live-migration-like, to a newly-"booted" state of the OS, though, would be the bees' knees.
> I've been here before internet existed. You better believe I know what I am talking about.
So all you have is using your age as an appeal to authority? Pretty sad. All that tells me is that you're used to doing things a certain way and are now upset that your knowledge is being uprooted and you need to learn something new. Otherwise you'd use facts instead of insults to push your argument.
I think what he is alluding to is basically two things: experience with software design (how to avoid overengineering, to make maintenance and debugging easier) and experience with dealing with dependent projects. The systemd team's track record on both has been quite abysmal.
As AnonymousPlanet said, the scope of systemd and the number of bad decisions (or good decisions with horrible implementations) are way too big to discuss here. Linux is no longer the secure and reliable system it used to be. The engineering results and outcomes are bad from just about any angle you can take.
If you belong to the silent but great majority (lol) of users who enjoy and cherish systemd - just carry on, by all means.
If anything there is A LOT of aggression any time somebody says anything 'unwelcome' about systemd.
Can you point to an article or blog post pointing out the bad decisions?
> If anything there is A LOT of aggression any time somebody says anything 'unwelcome' about systemd.
The downvotes came when you stated your age and "believe I know what I am talking about" instead of listing arguments, and called the parent poster 'kid'.
Downvotes are really interesting, in the sense that they give me the perspective that some people don't even read the main point I am making. The systemd discussion is a data point, not the main point, for me.
The parent poster did clearly state that I am making it all up and that I have zero experience with systemd. Cultural differences? English is not my first language, but I prefer clearly stating what is what; I am not really skilled in the 'polite' underhanded attack. 'Politely' saying you are lying = fine; responding that I may be too ancient or experienced to even be able to explain all the fallacies = bad. Gotcha.
Cliché, but I am pointing at the moon and people are yelling at the finger. So be it.
I have been thinking of a server vendor for startups, like Sun was trying to be with Schwartz as CEO. Ideally it would use local server manufacturing instead of Chinese ODMs. Of course, not every startup is interested, but 1TB+ RAM and fast SSDs might be attractive to some of them, like GitLab.
I was present for training at Informix headquarters in Menlo Park when it was announced that IBM had purchased the company (I believe around 1999 or 2000). I can still recall the deathly pallor of the trainer, as well as the shocked silence of the employees. A couple of months later I could barely locate the new Informix webpage on IBM.com; it advertised DB2.
After evaluating various Linux solutions, then FreeNAS, I went with SmartOS (an Illumos variant) at home because it was the only one with rock-solid secure containers, virtualization and ZFS.
Unfortunately, I never managed to get single-node Docker compatibility to work, and then there was a design flaw in the inexpensive Atom server processors that it runs well on that leads to failure after a year or so.
Faced with a >>$1000 hardware expenditure to get a reliable replacement NAS that's compatible with SmartOS, I jumped ship to Synology and haven't looked back.
My Synology box is way more available than my ISP or Amazon Cloud Drive, and I reproduced months of setup work from SmartOS in an afternoon with the Synology.
There's an argument that Solaris/Illumos has features with no equivalent in Linux (native ZFS, DTrace, some zone/container features). But for most people those are outweighed by the benefits of an operating system that is actively developed by a large community, including many large companies.
I used to love working with E10k/E15k boxes back in the day. x86 just couldn't compete with 128-CPU SPARC systems. It was amazing! Sad to see Solaris go.
Does anybody know how many people were laid off? I am interested to know to figure out how many people you need to make a modern operating system these days.