Hacker News
Joyent Public Cloud EOL (joyent.com)
317 points by rubin55 on June 6, 2019 | 275 comments



Can anyone explain Joyent to me? My understanding always was that they're some special kind of hosting provider. But I never really got it. They sponsor Node.js, they sponsor all kinds of ex-Sun Solaris-related stuff, employing high profile people from both communities. What does supporting (and, at a time in the past, basically running) Node.js get a hosting provider? What about Joyent's products is related to Node at all?

But oh, they're cancelling the public cloud, which I assume was the hosting service? Then what are they now? I went to their site but I totally didn't get it, I'm obviously not the target audience.

So, what does Joyent really do, how do they make money, and how does it make business sense for them to do all that illumos/zfs/node etc etc stuff? Are they an AWS competitor? A Heroku competitor? An Oracle/ex-Sun competitor? I really can't place them.


Technically speaking, they were quite far ahead of the curve. While VMs were all the rage, they were shipping container-based products (their Sun refugees kept developing Solaris Zones and ZFS in this context). The technical abilities of the Joyent folks were well beyond just about anybody else for a long time.

One could argue they were too early. For several years they had the technology to beat Amazon, until Amazon's massive scale washed over the whole market. The founders did well, so it's not like they "lost", but it's kinda sad to see a technically visionary set of products reach a dead end (one could maybe even view this as the final commercial death of what Sun built).

Also worth noting, their stack is open source in ways that nobody else is, I think. Maybe Google is doing right with Kubernetes, but everybody else is consuming OSS software to build proprietary clouds and giving basically nothing back. Joyent were doing most of the work, and giving most of the tech away.


I don't know if it was the immediate cause of their lack of success, but they had also brought with them from Sun the Sun attitude of doing stuff that their engineers thought was cool (and it probably was) without paying attention to how much people were willing to pay for it and whether their focus was correct from a market perspective. For example, Bryan Cantrill, Joyent's CTO, liked blaming Oracle's Larry Ellison for the demise of Solaris, on which Cantrill had worked, but by the time Ellison acquired the failing Sun, Solaris, cool technology and all, had been pretty much dead or dying for a while -- at the hands of Linux, and not much helped by the Sun attitude. Cantrill was furious and accused Ellison of being monomaniacal about making money ("don't anthropomorphize Larry Ellison," he said) but perhaps it was Cantrill who was monomaniacal all along, not keeping his eyes on the market. Then again, luck plays such an important role in these things that nice narratives rarely give the whole picture.


To be clear, I have never blamed Ellison for the demise of Sun or even Solaris -- but I do blame Oracle for re-proprietarizing OpenSolaris, and thereby ripping up the social contract of copyright assignment in an open source project. If it must be said, this wasn't an isolated incident -- and in hindsight, was merely the front of a conga line of bad Oracle behavior.

As for my own putative ignorance of market demands, one of Sun's growth areas at the time of acquisition in terms of revenue was the group that I had co-founded within Sun[1][2] -- in part because we had listened carefully to the market, and built a product that it very much wanted.

Finally, if we want to engage in the perennial musing as to the demise of Sun, I think for my part I still broadly stand by my analysis from 2011.[3]

[1] http://dtrace.org/blogs/bmc/2008/11/10/fishworks-now-it-can-...

[2] http://dtrace.org/blogs/bmc/2010/07/25/good-bye-sun/

[3] https://news.ycombinator.com/item?id=2287033


"perhaps it was Cantrill who was monomaniacal all along, not keeping his eyes on the market"

Joyent made money and was acquired, all while producing metric shit tons of incredibly cool Open Source software. We should all be so "monomaniacal".


We have an art installation at the corporate campus where I work.

It's a parody of 1984, with inspirational quotes flowing over the entire building (300 meters long). But they are not inspirational; rather, they are artsy, sometimes edgy and provocative.

The sentences change every day, and are controlled by an artist in California somewhere.

One of the sentences one day was: "Monomania creates success"

I keep a logbook of the strangest entries.


Would you happen to be working in Germany?


No, but maybe there are several?

There are 430 statements. Holzer's work reflects both a political and a moral perspective, with a focus on interpersonal relations. With her fleeting forms -- texts that sweep past -- she symbolizes aspects of life itself. Holzer represents a melancholic and committed voice in contemporary art.

* Strange section *

In a paradisiac state the basic survival tasks you need to perform all seems clear and your objective is right there in front of you.

There is a period where you know you have gone wrong but continue

Monomania creates success

* Very dark section *

Scream so powerful that it empties all the kindness you could have shown and blackened the entire fabric of your soul

Form a noise so true that your tormentor recognizes it as a sound that can come from his throat

Murder has its sexual side

You are caught thinking about killing anyone you know

Sometimes you have no other choice but to watch something gruesome occur.


Solaris's death at the hands of Linux can be attributed in part not just to "the Sun attitude" but specifically to the attitude of Bryan Cantrill personally. In response to Dave Miller explaining how Linux's networking was technically superior (at that moment) due to more care for things like TLB footprints and system call overhead, and also had more accessible technical support, Cantrill famously decided to respond by making fun of Miller for being a nerd: "Have you ever kissed a girl?"

https://www.osnews.com/story/28261/have-you-ever-kissed-a-gi...

https://web.archive.org/web/20110721130404/https://cryptnet....

Admittedly, this was in 1996, and it's possible Cantrill regretted this remark, or the nerd-hating attitude behind it, at some point in the intervening 23 years. But I've never seen him admit it. I see he's participating in this thread, so if you see him not replying to this message, that's him doubling down.


Ha ha -- wow! I had no idea that I possessed such power over time and space! As has already been pointed out to you (and has been pointed out repeatedly on HN over the years), I do regret the remark. So, there's that.

I would also note that the rise of Linux in the late 1990s and early 2000s corresponds with the performance of x86 surpassing the performance of the RISC microprocessors during that same period -- a time when the only other open source Unix was operating under a legal cloud from AT&T. It seems likely that these forces played a greater role than a twentysomething engineer shooting his mouth off on Usenet...


FWIW when I asked the "what's Joyent all about" question I never expected or hoped this would become a "let's blame Bryan Cantrill for random shit" flamefest. I'm not sure where all of that is coming from and I'm sorry if somehow I triggered all that.

You've been a great example to me and many others, in doing open source, in taking a principled stance (even if I subtly disagree with some of your principles) and also in showing that yes, it's totally possible to found a company with an incomprehensible business model, grow it, have it contribute lots and lots to awesome open source, and somehow despite all the free work and the incomprehensibilities, flip it for a profit. One day I hope to do the same and all of HN will be like "huh?"


Hey, thanks for that! The HN comments here today have been full of surprises, not least the intense personalization of it all! I believe that our impact in the world is much more in the people we inspire than in those we rankle -- so in that regard, it's deeply gratifying to know that I have been of service; thank you!


> full of surprises, not least the intense personalization of it all!

I'm surprised you're surprised by this, considering that you have contributed much to the personalization of such matters. The personal "psychologization" of complex social dynamics is pretty much the trademark of many of your talks. If you find psychological explanations attractive, why are you surprised when others do, too?


I think you're right regarding x86 performance being a huge factor. Also my own experience, admining a number of Solaris systems in those days, I just found it more frustrating to use. The first thing I would do was put GNU userland on there to make a Solaris system more usable.

Later on when I had more say in what systems things ran on, I'd reach for Linux because it was easier. It was good enough. I could easily run it at home too, and it ran on cheap hardware.

I sometimes wonder how much that usability factor played a role...

Though AWS seems to be doing fine as a business, so maybe this theory about usability isn't so great.


When you could run Solaris on the same hardware (x86), it was noticeably slower than Linux.


That was over 15 years ago!

It's now faster than GNU/Linux on the same hardware and has been for the past ten years.


Those were the days!


He has said he regrets it in a Reddit AMA[1]. I find it quite strange that you attribute the death of an entire company to the attitude of a single engineer in an email from 1996 where he made a silly remark. You'd think if Cantrill was that powerful of a king-maker you'd want to be much nicer to him. Not to mention that Cantrill himself is also clearly a nerd, so "nerd-hating" is a bit of a weird statement.

[1]: https://www.reddit.com/r/IAmA/comments/31ny87/comment/cq3e4y...


Consider the remark not as the cause of death of the company, but the remark as an instance of a general attitude, and that attitude fitting in quite well at the company.


I have never in my life dealt with engineers so competent, professional and kind as the Sun Microsystems folks were. If they were technologically arrogant, they earned it: their inventions outlasted the company and live on in many products.

I cannot name a single GNU/Linux "hacker" inventing anything. Everything they implemented is a shoddy copy of what the engineers at hp, SGI and Sun Microsystems did. They might have shoddily implemented it by writing the code from scratch, but they blatantly stole everything. For instance, the manual page for chkconfig(8) will even tell one that the command comes from SGI's IRIX.


While I definitely agree that Sun, HP, and SGI had amazing engineering this section of your comment is particularly ridiculous:

> I cannot name a single GNU/Linux "hacker" inventing anything. Everything they implemented is a shoddy copy of what the engineers at [...] did.

If we ignore the obvious counter-arguments (GCC, Apache, nginx, arguably git, and so on), and just look at your statement at face value -- you're saying that nobody from the largest free software community in the world has invented anything new in the past 10-15 years. What possible evidence do you have that this is true? It's not just wrong, it's offensively so.

Heck, ZFS-on-Linux is now the repo-of-record for ZFS -- that's a specific example of the GNU/Linux community implementing things alongside the rest of the free software community (and currently ZoL has more features than any other ZFS port).

It's also the case that those brilliant engineers did the same thing. Zones are arguably a re-implementation of Jails with some slightly different design goals. Solaris event ports are objectively just a copy of kqueue (to the point where Cantrill said he wished they'd just ported kqueue). And so on. I don't think this is a bad thing at all, but it's quite strange to put them up on a pedestal to the point where you effectively say that any engineer who didn't work at Sun/HP/SGI in 2003 hasn't invented anything.

There are obviously plenty of examples of Linux not learning from others (Jails/Zones vs containers, kqueue/event ports vs epoll, DTrace vs eBPF+bpftrace, and so on). But arguing that nothing innovative has come out of the GNU/Linux community ever is just awful.


> the obvious counter-arguments (GCC, Apache, nginx, arguably git...

...and Wikipedia, BitTorrent, the WWW (open-source, though developed on NeXTStep), the Objective-C compiler that made it possible, Emacs (and its idea of an extensible self-documenting editor), Perl, bash, the dpkg/apt system the app stores ape poorly, the CTAN and CPAN systems it derived from, most aspects of decentralized source control in the form of arch (and then later Mercurial and Git), rsync, Docker, Nix, GNU make, Python 3, gold (the linker), iptables, Enlightenment, the Hurd, MySQL, PNG, Pango, Numpy, IPython/Jupyter, Ruby, the X11 Render extension, LADSPA, Valgrind, asan/ubsan, basically everything on CPAN, all the inventions in x264, ...

I'm not sure Apache belongs in there, though. Rob McCool might take exception to being called a "GNU/Linux hacker", though I don't think Robert Thau would mind.

> There are obviously plenty of examples of Linux not learning from others (Jails/Zones vs containers, kqueue/event ports vs epoll, ...

Oh c'mon, is the problem that Linux doesn't innovate enough or that Linux innovates too much? Epoll is an example of Linux arguably innovating too much.


Not a single one, huh? I'll help: https://git-scm.com/


There is a counter-argument to git -- namely Sun had TeamWare[1] (the prequel to BitKeeper) and thus you could argue that git is still a "shoddy reimplementation". But this of course ignores the fact that TeamWare was almost completely unknown outside Sun until BitKeeper came along (at which point it was just an element of BitKeeper's history), and that git is objectively superior in many metrics to TeamWare.

Personally I think GCC, Apache, and nginx are much better counter-arguments.

[1]: https://en.wikipedia.org/wiki/Sun_WorkShop_TeamWare


Quite unfortunately for all of us condemned to use GNU/Linux:

- GCC exists because Sun Studio compilers weren't free back then;

- Apache is the next-generation NCSA web server;

- nginx is a re-invented Apache wheel.

Do you have any more examples to get corrected on? I'll be glad to set you straight, in the hope that unlike your Linux buddies you are capable of learning.

They are simply amateurs wanting to play engineers, but they never were engineers and never will be, and it shows in just how shitty GNU/Linux based operating systems are. Hacked up together by hackers according to the "hack it 'till it works, man!" motto. And they don't learn from their mistakes or from the mistakes of others, on top of being deeply convinced that they are the best.

Find me one technology invented in GNU/Linux which UNIX did not already invent and I will publicly retract my statement here. Just one.


Okay, here are two examples: rsync and restic[1] (an evolution of borgbackup[2]). You already know what rsync is, but the other two are backup tools that offer encryption and content-based deduplication. There is no distinction between a "full" backup and an "incremental" backup.

Now, I already know that you're going to say that "zfs send" already does these things -- but it really doesn't. First of all, ZoL only recently got encryption support so built-in encryption with ZFS only came around recently (and was developed by someone from the GNU/Linux community, by the way). Also, ZFS's dedup is so expensive that it's very strongly recommended that people don't use it unless they really need it. Deduplication with restic and borgbackup is content-based which means that it's far more resilient to shifting bytes in files and it's effectively free because everything is stored as a CAS (to be fair, it's only mostly free because people don't use it as a filesystem).
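The content-addressed deduplication described above can be sketched in a few lines of Python. This is a toy, not restic's or borgbackup's actual format, and it uses fixed-size chunks as a stand-in for the rolling-hash, content-defined chunking that makes the real tools resilient to shifted bytes; the dedup bookkeeping is the same either way:

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real tools target ~1 MiB


def split(data: bytes):
    """Fixed-size chunking (a simplification: restic/borg cut chunks at
    content-defined boundaries found with a rolling hash)."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]


class ContentStore:
    """Toy content-addressable store: chunks are keyed by their SHA-256
    hash, so a chunk seen in any earlier backup is stored only once."""

    def __init__(self):
        self.chunks = {}

    def backup(self, data: bytes):
        """Store data; return the chunk-id list that names this snapshot.
        Every snapshot is 'full' -- there is no full/incremental split."""
        ids = []
        for c in split(data):
            h = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(h, c)  # dedup happens here
            ids.append(h)
        return ids

    def restore(self, ids):
        return b"".join(self.chunks[h] for h in ids)
```

Backing up two mostly identical payloads stores the shared chunks once, so the second snapshot costs only its changed chunks, which is what makes every backup "full" for free.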

Again, I really am not bagging on Sun here. I just think it's quite ludicrous that you're saying that any engineer who didn't work at Sun pre-2010 never invented anything. I refuse to believe that you honestly believe that, purely based on how ludicrous of a concept it is.

[1]: https://restic.net/ [2]: https://www.borgbackup.org/


Sorry buddy, but https://illumos.org/man/1/filesync predates rsync.

Borgbackup is a third party, unbundled application. I don't see what it has to do with people hacking on GNU/Linux like mad.

"I just think it's quite ludicrous that you're saying that any engineer who didn't work at Sun pre-2010 never invented anything."

You didn't get it quite right: I'm saying that professional engineers worked at Sun Microsystems, hp and SGI. That's statement #1. Statement #2 is that people working on GNU/Linux are amateurs who didn't invent anything. Statement #3 is that because they are amateurs who are incapable of learning, they just keep hacking shit together haphazardly and will never be engineers like the people from Sun, hp and SGI. Especially SGI: SGI had the best engineers. Those people were way ahead of their time in every technical aspect imaginable.

"I refuse to believe that you honestly believe that, purely based on how ludicrous of a concept it is."

Believe it. GNU/Linux people are amateurs. They don't have it in them. They want to be thought of as engineers but they aren't. They just aren't capable of it. That's why GNU and GNU/Linux are garbage and will never be anything more than a pile of haphazard, shoddily slapped together hacks. That's the world we live in now, where IT is shit thanks to their shitty, shoddy work.


Dave Chinner is an ex-SGI engineer who has been working on XFS since the beginning of the project and is currently a Linux kernel maintainer. Brendan Gregg is ex-Sun and worked on DTrace and currently is helping improve Linux's tracing capabilities. Are they "amateurs who are incapable of learning" or "professional engineers"?

I've seen comments from you about how GNU/Linux is awful in every respect ever since you created your account ~3 years ago. I really can't imagine being so stuck in a mindset that you feel the need to spend so much of your time being angry about such a large community of people. Honestly, I just don't get it and I earnestly hope that this is just an online persona you have.


Dave and Brendan are former SGI and Sun engineers. They are refugees. They gave up. So once again we're back to what I said: either one came from SGI, or hp, or Sun Microsystems, but one sure as hell didn't come up through GNU/Linux as an engineer, nor will such a thing ever happen. For 20 years they've been hacking furiously on Linux and still the basic things don't work correctly. That's what happens when one hacks instead of system-engineering solutions.

"I really can't imagine being so stuck in a mindset that you feel the need to spend so much of your time being angry about such a large community of people."

Computers were my life, my passion, my calling. They destroyed all that passion in me with their GNU and their GNU/Linux. I will not forget. I will not forgive. They made my professional life a living hell. I hate working in IT because of those people and their mentality.

"Honestly, I just don't get it and I earnestly hope that this is just an online persona you have."

  You must be the change in the world you wish to see.
And I believe that, deeply. I'm absolutely convinced of it. Even if I must single-handedly teach apprentices, I will continue to fight until the last drop of blood against GNU and against GNU/Linux and teach people, person by person if I must, as I have been, showing them one on one how awful it is and how superior and easy a real UNIX like SmartOS is. Either until I die or manage to get out of IT. Because truly awful things like GNU and GNU/Linux deserve to be relegated to the dark history of IT, and some good and beautiful, well-thought-out things like SmartOS are worth fighting for.

I will continue to fight against ignorance and trend pandering and for enlightenment. I will strain like Buddha strained if I must. Some things are just worth fighting for, and some things one must make a stand against, no matter the cost to oneself. I am forced to work on and with GNU/Linux because people think like you, but I will fight it and resist it every step of the way, and I will continue to teach, because working on and with GNU/Linux is more work, not less work, for me. That alone is more than enough to motivate me.


https://www.youtube.com/watch?v=Rqb4V9GxaBo

If I am not mistaken, Cantrill stole the line from this skit.

"The good old days." I actually went to a Star Trek convention as a kid back in the late 70's/early 80's, don't remember the year.

It was a different time. "Cool" meant something different back then. People really could say "Get a life!" and it was not an insult. It was truth. There was more to life back then than computers.

(There still is, but today this simple truth is being lost on many more people.)


This is patent nonsense. Sun lost because they were not price competitive. They thought they could command outrageous sums and 46% profit margins because they sold UltraSPARC-based servers and in the case of AMD / intel servers because they bore the Sun Microsystems logo. Meanwhile Linux ran on shitty personal computer hardware and was just good enough for enough kids who didn't have a clue about correctness of operation or reliability and still to this day don't, as is often evident from many, many comments on the subject of GNU/Linux here.

Sun Microsystems stubbornly refused to lower prices to undercut the generic shitty personal computer used as a server, and that cost them the company. hp and IBM made the same mistake at the expense of PA-RISC / HP-UX and POWER / AIX respectively. Oracle corporation is in the process of repeating that mistake. They all thought they could get away with outrageous profit margins and pricing. It's like once they started shipping servers to enterprises they all lost their compass. Meanwhile, the whole market commoditized. They all wouldn't accept that. Now they've been reduced to rubble. hp "Enterprise" is missing in action somewhere peddling a server here and there, IBM started whoring itself around as a "re-invented" service company (the next "Computer Associates"?) and other than Exadata I don't know of a single business willingly buying Oracle servers. Yeah, it's very hard to predict how that will end. "Very hard".

At any rate, all of that has nothing to do with Bryan. Zero.


Just as a side note here, Google is doing Kubernetes as a commoditize-your-complement strategy (https://www.gwern.net/Complement) so they can sell other cloud compute services. This makes it easier for people to migrate their on-prem workloads to Kubernetes, and then to the cloud. This commoditized much of what Amazon was attempting with ECS, forcing Amazon to fast-follow with EKS, arguably a worse and much more complex-to-deploy product than Google's GKE.

It makes it hard to make money off container orchestration, but you can still make money off what Google is good at, or at least thinks it is good at: selling compute, proprietary databases, and stuff like TPUs.

My point is that yes, Google is advancing the whole industry by releasing Kubernetes as opensource, but it's also a strategy to sell more of GCP and deny some revenue to competitors.

It remains to be seen if it works. GCP is still a distant 3rd in the cloud race and has a long way to go to improve their enterprise support. (Their usability is much nicer though...)


> it's kinda sad to see a technically visionary set of products reach a dead end (one could maybe even view this as the final commercial death of what Sun built).

I think it's not really the end of the technology, but it is an end to a generally available public commercial offering of it. It sounds like they're still offering services, but only for "single-tenant", which after some careful reading (because I wasn't sure how to interpret that initially), I think means that they'll provide their software and management system for private install for large orgs in their private deployments, maybe along with a support/management contract?


That's basically putting technology out to pasture. When you've got a handful of large corporate customers that depend on a product, you can continue to extract money out of them for years or decades without the product having a future.


The question is whether they are still actively pursuing enterprise clients. If they aren't, then I agree. If they are, then it's not putting it out to pasture, it's just a different business model. There are plenty of companies that do this. My brother works for one that does it for large network storage deployments using proprietary technology, and I imagine they don't want to hear from you unless you are willing to put out hundreds of thousands to millions of dollars on a deployment (and yet they still have new deployments). I definitely wouldn't consider them putting the technology out to pasture because of that.


"but it's kinda sad to see a technically visionary set of products reach a dead end"

It's not a dead end since any company, public or private can use the code and the product (I for instance build my own modified SmartOS from source) to power their infrastructure.

Technically there is nothing to prevent anyone from taking the product and the code and starting the next Amazon. I know of at least one internet service provider in the UK who is leveraging SmartOS to provide commercial, for-profit service. There are probably more such businesses, but this is something which squarely falls under "I'm keeping my mouth shut and making money" category.


What would you state as the top reason(s) to choose SmartOS vs Linux for those use-cases?


1. full-blown virtual UNIX servers running at bare metal speeds (joyent-branded zones).

2. ability to run virtual Linux servers at bare metal speeds (and their applications).

3. simple and easy container management of 1. and 2. with imgadm and vmadm.

4. OpenZFS - easy capacity growth, checksummed data, silent data corruption detection and self healing.

5. lightning fast, faster than GNU/Linux performance on the same hardware.

6. same application portfolio as GNU/Linux (14,000 applications available with a simple pkgin command invocation).

7. ability to create virtual routers and switches out of thin air with dladm, join and partition physical links in terms of percentages, simplifying network topology in virtualized environments.

8. OS paranoid about protecting and self-healing itself and the data it hosts.

9. ability to pick between KVM (slower) or bhyve for hardware virtualization of non-Linux OSes like Microsoft Windows, OpenBSD, FreeBSD et cetera (although this process is poorly documented).

10. NFS v3, v4, Fibre Channel and iSCSI which work correctly.

There is one major downside (I don't consider it as such, but I know others do): one must pick hardware compatible with SmartOS from Joyent's hardware compatibility list, not the other way around: https://eng.joyent.com/manufacturing/bom.html.

But when it does fire up over the network and once one groks it, holy shit, it's like being taken into the future with a time machine.
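To make items 1-3 in the list above concrete, here is roughly what a vmadm manifest for a joyent-branded zone looks like. The field names are from memory and the image UUID is a placeholder, so treat this as a sketch rather than a copy-paste recipe:

```json
{
  "alias": "example-zone",
  "brand": "joyent",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "quota": 10,
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "dhcp"
    }
  ]
}
```

The typical workflow is `imgadm import <uuid>` to fetch the image onto the local zpool, then `vmadm create -f manifest.json` to provision the zone, with `vmadm list` to confirm it is running.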


This. Well put.

When everyone was hooked on VMs and other nonsense, they had 'branded zones' running.

The learning curve was high, but worth it once you had it running.

Sad to see them leave. Wonder what I will do with my SmartOS-based media server now?


For what it's worth, we're still maintaining SmartOS and it's still open source!


> Technically speaking, they were quite far ahead of the curve. While VMs were all the rage, they were shipping container-based products

> The technical abilities of the Joyent folks were well beyond just about anybody else for a long time.

None of that was ever true. Whether they had container-based or VM-based tech was a tiny, unimportant detail for a cloud company. Although over-engineering in this area certainly made it impossible for them to compete with more traditional hosting companies. And to be able to compete with Amazon they needed something they didn't have: a distributed-systems background, not a Sun background, which was anything but.


Sun Microsystems literally invented "the network is the computer". They were the first company ever to even begin to think in terms of distributed systems on a network scale (DEC had done some pioneering work prior to that with VMS, but not the network). The network file system, NFS, exists because Sun Microsystems engineers willed it so -- they wanted to multiply the computational power of smaller systems into a larger, distributed one.

They were a pioneer in distributed systems computing. Not many computer companies can claim that. Even fewer companies can claim engineers who hold patents that now power the Internet, like Radia Perlman does for the Spanning Tree Protocol. If that's not distributed, I don't know what is.


Wow, they invented the slogan. Sun alumni seem awfully thin-skinned about not getting credit for every little thing they did, which is why it's pretty unforgivable to deny DEC credit for anything more than "some work on hardware". I was working on DECnet-based remote file access software in 1989. It wasn't new, but NFSv2 was. DEC already had real clusters doing real work before Sun did. Within the company there was already a globe-spanning LAT network. I had friends at Apollo who were also way ahead of Sun wrt distributed computing. AT&T's RFS was at least contemporary with NFS, as were Netware and Vines. Sun was definitely a player in that space and made some seminal contributions, but to say they were the first even to think about it is unmitigated BS.


I am not a Sun Microsystems alumnus. I wish I were, just as I equally wish I were an SGI alumnus.

Yes DEC did have clusters, and I remember booting my Alpha workstations from a Sun SPARCStation 20 over DECNet as an AutoClient and I know DEC had hardware clustering and VMS working together as a distributed system, but since I came at the end of that era and only worked with Ultrix and DECUnix, I don't know any more about it which is why I couldn't go into detail. I only go into detail when I know about something and usually I've done that something myself.


Prime Computer Inc. had transparent remote disk access across PrimeNet in 1980. You could not distinguish being on a local disk vs remote disk, other than performance. PrimeNet ran over token ring (they also had fiber optic token ring running at Ford's main Dearborn, MI campus), full-duplex synchronous, and half-duplex dial-up synchronous network connections.

Prime had remote procedure calls. Their MIDAS product (index sequential files, basically key-value store with multiple indexes) allowed multiple nodes to access the same file by allocating a "slave" on the node hosting the file, forwarding all calls over the network, and returning call results. OS file access worked the same way, using the same slave for all remote procedure calls. Source code designed for local file access didn't require any changes to be used for remote files, just like NFS.

I'd say Sun had a pretty good example to go by for NFS. In fact, if you want to see this in action, I wrote a Prime minicomputer emulator and have remote disks available between 5 different revs of their operating system, spanning a 10-year period.

Telnet em.prirun.com 8001 and enjoy!


All of this experience is pretty shallow and completely irrelevant to distributed systems in the age of cloud computing and the internet. For some reason Sun ignored the internet and didn't move into distributed systems at all. So, for example, this experience at Sun couldn't help the engineers in any way to build a distributed key-value store for the cloud, while Amazon was publishing things like the Dynamo paper.


You know, whenever I vow to stop explaining Sun, there is some yet-more outrageous claim that I just can't seem to summon the willpower to ignore, for fear that it will join a corpus of alternative facts that poison the wellspring of future generations.

So, in that spirit: Sun emphatically did not "ignore the internet" (?!) -- and was in fact a pioneer of distributed systems in many ways. Instead of rattling off the many important, early distributed systems that Sun built -- or the many pioneers of distributed systems that Sun employed -- I will vector you to a single Sun-authored paper circa 1994, "A Note on Distributed Computing"[1] (and to Chris Meiklejohn's discussion of it at Papers We Love[2]). The paper itself drips with wisdom that still feels current and important a quarter of a century later -- and it very much reflects the zeitgeist of the time at Sun. Please read the paper -- and more generally, take the time to educate yourself as to the history of our domain and the many people and companies who have made seminal contributions to it; the generations whose shoulders your work stands upon will thank you for it!

[1] https://github.com/papers-we-love/papers-we-love/blob/master...

[2] https://www.youtube.com/watch?v=z79mjsLdkpY


Moreover, Sun's legacy in distributed systems goes far beyond theory. People forget about Jini (and JavaSpaces), a distributed computing platform that shipped by 2000 or earlier (I had the pleasure of working with it, and it was the only reason I would develop in Java). SOA? Microservices? In-memory distributed computing? It was all there, at work. Want P2P? Enter JXTA. There you go.


Just to add to the "ignored the internet" bit:

This is the company that had "the dot in .com" as its marketing slogan.

https://www.youtube.com/watch?v=njnNVV5QNaA

https://slashdot.org/story/00/04/20/1542217/sun-no-longer-th...


Ignored the internet in terms of distributed systems, which I assumed was obvious from the wording.

Look at that 1994 paper: nothing in it shows any background for building a distributed key-value store, or really anything for the cloud. I'm not sure why people are trying to claim otherwise. Even ZFS, from a decade later, reflects both a complete lack of experience in distributed systems and ignorance of the internet.


Why does ZFS reflect a lack of experience in the internet and/or distributed systems? Simply because it didn't follow a questionable tech trend of "everything is a DHT?"


Because they didn't seem to know any of the simple and nice ideas from distributed systems for solving the pain points of managing space, since adding and removing nodes and sharding/partitioning are things you have to figure out pretty much from the start.


[flagged]


> Distributed systems and write arbitration on the same file are still terra incognita.

That's far too aggressive a response for someone who's being sloppy with terminology themselves. "Distributed" covers many different scales. You also use "clustering filesystem" as though it's a synonym, which it very much is not. Among those who have actually built both, like me, distributed and cluster filesystems are very different things. It's fine if you think "true" distributed systems means global scale, but if you're not even explicit (let alone correct) about it you should hardly be condemning others for using definitions closer to general usage. And you definitely shouldn't be pretending to read people's minds with claims about what they "gave no thought" to or "have no idea" about.

P.S. Since I can't reply to your comment below which was quite rightly flagged into oblivion, no, I didn't have anything to do with GFS.


Aggressive? Precise would be more like it.


Flagged your comment for this. You should at least glance over my comment history on distributed computing before attacking me on things you don't understand.


I do not care who you are. Your comments in this thread have made it crystal clear that you are not familiar with the subject matter: for you, "distributed" has the very narrow meaning of a sharded key-value store, and you've succeeded in making that clear. "Cloud", except that's not it.

And whether you "flagged" my comment or not, that's irrelevant to me.


It doesn't have that meaning to me. But in the context of a failed cloud company of Sun refugees, the only experience in distributed systems that matters is the experience those Sun refugees didn't and couldn't have. For some reason you are claiming things absolutely irrelevant to any of that. And, of course, resorting to personal attacks.


Would you please not post personally abrasive comments or do flamewar on HN? Not cool.

https://news.ycombinator.com/newsguidelines.html


[flagged]


We've asked you repeatedly to stop posting like this to HN. If you keep posting flamey comments we're going to have to ban you. I don't want to do that, so would you please review https://news.ycombinator.com/newsguidelines.html and follow the rules strictly from now on?


"The network is the computer" doesn't feel like a slogan that a company ignoring distributed systems would use


Ahh yes, knew I was forgetting an important one...

Oh, and SUN was the Stanford University Network.


I guess I'll rephrase: they didn't do any of the distributed systems work necessary for the cloud, if people so desperately want to call all the networking stuff they did "distributed systems". Technically those were distributed systems, sure, just like anything client-server, but irrelevant to the challenges of fault-tolerant, geographically distributed, asynchronously communicating systems.


... Does this statement come from having actually used things like Hadoop on Sun Grid Engine or any of Sun's other products? Or... what? What knowledge are you drawing on?

Edit: I mean, this is the company that gave us the 8 Fallacies of Distributed Computing, as far back as 20 years ago


I completely disagree, see my comment above. I was doing asynchronous distributed data replication and compute over a grid in 2001 using Jini and Javaspaces, and I was the one late to the party.


So: old, broken networking tech (Jini and JavaSpaces) over a fast, reliable local network (a grid) in 2001. How could this knowledge help anyone with fault tolerance, high availability, partitioning/sharding, CAP consistency/availability trade-offs, etc., which internet companies at the time were figuring out?


Sun Microsystems literally invented geoclustering! Wow... just... wow.


> Sun Microsystems literally invented geoclustering!

They didn't. Not that any of it is important anyway. All '80s and early-'90s networking tech was ridiculously broken, which should be obvious, since the distributed systems field had only just been born at that time (the first papers date to around 1978). Things only started to get useful by the end of the nineties. But by that time Sun's tech was nowhere near anything internet companies were doing. So later, when things got to infrastructure, Sun's engineering was really, really behind: they had to start from scratch on everything internet companies had been learning and doing for many years. And this was very different, time-consuming knowledge, not something where someone could apply their OS-level expertise. Which also makes it very hard to compete with companies that are much farther ahead, especially with a silly "our tech is the greatest" attitude making it hard to even realize how far behind you actually were.


Sun Microsystems was the #1 computer company by revenue at the end of the '90s. Everybody ran their hardware and Solaris; that's where the slogan "we're the dot in .com" came from, because it was true. If you had been around for it, you'd know it. Your comments date you...


I believe ZFS represented a missed opportunity by spending all that effort on a local filesystem instead of trying to do something interesting in the distributed space. I even said so at the time, since I was working on distributed filesystems at the time, but that's just a strategic issue (and one that's quite arguable). It's not evidence of ignorance or lack of experience.


Well, on that part I was referring more specifically to the vdev mess and raidz, and how you cannot just add a disk to the file system. Which is pretty silly, considering that at the time there were plenty of ideas about how to do it much better, although for nodes in distributed systems rather than for disks. But apparently no one on the ZFS team had any idea about any of it.


ZFS doesn't work that way: it works on pools of storage. Disks are added to the pool.

Filesystems (or datasets) are carved out of a pool, not out of physical disks directly. There are vdevs on top of physical disks.

That's like people who are stuck back in the '80s because of ext3 arguing that ZFS is crap because it doesn't come with fsck or "some sort of repair tool" -- they just can't wrap their heads around the ZFS concepts.
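To make the model concrete, here's a minimal sketch of the standard zpool/zfs workflow (the device names are hypothetical):

```shell
# Pools are built from vdevs; datasets are carved out of pools.
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0   # pool "tank" with one raidz vdev
zfs create tank/home                           # dataset backed by the whole pool
zpool add tank raidz c0t4d0 c0t5d0 c0t6d0      # grow the pool by adding another vdev
```

Growth happens by adding whole vdevs to the pool; a single disk can't be spliced into an existing raidz vdev, which is the complaint upthread.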


That was a pretty common pattern at Sun. Engineers there did a lot of great work, no denying it. Like many other projects, ZFS solved some pretty difficult and important problems while ignoring others. Which ones get talked about ad nauseam? We all make mistakes, of course, but some of those ignored problems were well known at the time to be worth solving. Did those self-professed "complete engineers" learn anything from those times they missed the target? The fact that Sun doesn't exist any more suggests an answer.


The problem is they cannot all be solved with just one single thing. Jeff Bonwick and his team knew that. They knew what was left on the table and they had to make that call. A completely fault-tolerant, distributed system which can correctly arbitrate a write to a single shared resource (usually a file or even a block) from multiple nodes at the exact same time is a problem which has not been solved correctly yet, to the best of my knowledge and belief. I myself have spent a considerable amount of time (20 years, to be precise) trying to solve that particular problem correctly for all edge cases. That problem is terra incognita in computer science, yet to be thoroughly researched, let alone understood. All attempts at solving that particular problem 100% correctly have been unsuccessful, by everyone, but if you know of someone who has, please do tell; I'd love to be corrected on this and start using such technology immediately.

The ZFS team was well aware of this and that's why they made the trade-offs they did. They picked a subset of problems and they solved them. Even in hindsight, to me at least, it's obvious why.

But to suggest that Sun Microsystems doesn't exist because the company's engineers picked only a subset of problems they knew they could solve is... either terribly ignorant or wilfully malicious, or both, since malice often stems from ignorance. Sun Microsystems lost because they were too expensive, more expensive than garbage intel personal computer servers and because Solaris wasn't open source code in 1993. It's that simple.


You seem to believe a shitty general key-value store with no SQL interface and no ACID guarantees qualifies as a distributed system, which is terribly shallow and misguided.

I worked at a company circa 2005 where we had a platform as a service built on top of Solaris 10 and zones, on-premise infrastructure which you now call cloud. Fully configured and serving Sun clusters would come into existence by merely DHCP booting from the OpenBoot PROM (Sun had to give us patches for the OBP for DHCP booting). I happen to know this, because I was one of the senior system engineers who helped architect it and then built it. We had it powering tens of thousands of servers. We were one of the first users of IP Multipathing in Solaris ("IPMP"), a technology Sun engineers invented to provide redundancy for network connectivity for the clusters. Amazon wasn't even a smear yet. That was in 2005. Which year is it now? How many years ago was that?


[flagged]


"If you compare Sun to their competitors at the time: IBM, DEC, SGI, HP, and even Microsoft. Sun was way ahead on distributed systems."

Careful now, you're on thin ice: Sun Microsystems might have been a pioneer in distributed cloud computing, but nobody did it better than SGI: once they had the NUMALink technology from CRAY, they invented cluster routing in hardware which would linearly scale with added router modules. I had Origin 2000's hooked up and scaled them linearly thanks to NUMALink, the hardware routing bricks and the crossbar switch on the motherboard. And IRIX 6.5 was aware of it all and used it!

I'd hit the power button on one server and the other one would come up; after the systems booted, each having 8 GB of memory and 8 processors, I logged into a single IRIX instance with 16 GB of RAM and 16 processors. The only company to come even close to that was DEC, years prior, but nobody perfected it so thoroughly as SGI's engineers did. What CRAY started, SGI engineers finished. Sun only came out with hardware that had the crossbar switch after SGI, and they never had their own hardware router bricks. By router I don't mean a network router, I mean a cluster scalability router, which had nothing to do with TCP/IP networking.


That's not the distributed computing that the OP was referring to. Explicit reference was made to WAN-style networking that makes the crossbar switch effectively seem like motherboard traces.


Doesn't matter (and I don't care) what they referred to; only the end result counts: SGI designed and implemented cloud computing in hardware. Color me impressed. I haven't seen anyone successfully do it since, hardware- or software-wise (and everyone is sure as hell trying!)


Cloud computing in hardware is an oxymoron.


SGI did in hardware what people are now trying to hack together with shoddy software implementations which can't survive an outage nor protect against data corruption.


Explicit reference was also made to NFS, which has nothing to do with WAN-style networking. The local Sun mafia are trying really hard to claim credit for distributed computing at all scales, and there's nothing off-topic about addressing one part.


WebNFS was explicitly designed to work over WANs, and its predecessors allowed for and worked over WANs.


"Sun mafia"? That's really uncalled for. Why are you angry with those people? What did they do, in your view, that makes them deserve such an insult?


Many many things over the years, going all the way back to 1990 when I was at Encore dealing with all the dirty tricks they played to hobble competing NFS implementations, but that's not the point.

My real issue with them is hubris. There's a certain cadre of Sun alumni, highly visible on forums such as this, who would have us believe that they were absolutely always doing the best work on the most important problems. I think that denies credit to others who were doing work just as good on problems just as important, at companies which weren't driven into Oracle's arms by all the waste. Having known many of those people over the years - it's not for me to judge whether I'm one of them myself - I find that offensive. Sun just wasn't all that and the sooner we shed that particular piece of idol worship the better the industry will be.


You know why Sun engineers are all that? They made all these technologies they invented wildly popular in professional, elite IT circles. There are still many people who have no clue who Bryan Cantrill or Jeff Bonwick or Matt Ahrens or Nicholas Droux are because they're not in that elite or enthusiast circle of IT professionals dealing with those technologies, but for us who understand what they do and their importance to the overall computer industry, yes, we appreciate them for inventing them. Those people came forward, blogged and documented their work over the span of ten years, did you? I didn't read anything from you until yesterday, and it's not for the lack of trying, let me assure you. They made videos, presented at conferences, a lot. So yes they're popular, as they should be and they get credited, as they should.

It's like watching an Amiga demo: one has to understand how hard it was to code the effect to appreciate it -- a normal person off of the street just sees some graphics and hears some music. Sun engineers not only did great things well, but they popularized them too because they wrote intimate diaries in the open about their work over the span of ten years. They raised an entire generation of IT professionals and taught them what it takes and what it means to be an engineer.

I wish I knew who the engineer(s) behind SGI's inst(1M) is (or were), but unfortunately I don't, because they never put names and faces on their products.


If you're going to persist in this kind of "I'm elite, you're no big deal" posturing (and borderline ad hominem) then we're done. This isn't about who's the biggest blog loudmouth. I don't care that you failed to find something to suit your narrow, biased interests in five minutes of stalking through my old blog two years after it was discontinued. Both Bonwick and Cantrill were able to find it when it mattered. They stopped by and we had a couple of interesting conversations. Those kinds of interactions are what matter.

Please, try learning something new instead of getting stuck on the first thing you found when you first entered the industry and spitting all over everything else. Go look through the conference proceedings, even from Sun's glory years, and you'll find plenty of papers from people elsewhere that have had a far more profound effect on software practice today. This fanboi behavior does nobody any good, least of all you.


It would do you well to stop being so hurt that you didn't get credit or recognition for the work you did. Why didn't you blog about it instead of being pissed off Sun engineers get the credit?

Yes I'm a big fan of their work, not least because their products work damn well, especially when solutions in GNU/Linux are compared to theirs. Did they screw up over the years? Yes they did, the USB subsystem in Solaris is an infamous fuck-up of epic proportions. Ironically it has nothing to do with distributed systems. But the things they really put effort into like Sun clustering, NFS and networking work and work damn well.

As for me and learning something new, I do that all of the time. The problem is that most of the new things coming out aren't new at all and certainly aren't better.

And you know what, for a passionate computer enthusiast, I'm sick and tired of IT and all this bullshit.


> There's a certain cadre of Sun alumni, highly visible on forums such as this, who would have us believe that they were absolutely always doing the best work on the most important problems.

Well, I'd be the first to say that Sun was often doing substandard work. The implication that they were doing no work is just insane though.


Whatever... They are history now, lately remembered for handing Java over to Oracle... Everything else will be buried in history or on the 42nd page of Google search results


Because "Linux is the best", or some bullshit like that?


This is just willful ignorance.


[flagged]


There's more: they couldn't even survive the age of the internet and distributed computing.


I think it is very safe to say their failures had little to do with not appreciating the Internet or not understanding distributed computing. You might note that a LOT of companies that didn't survive that era actually understood both very well, and yet failed nonetheless. In a lot of cases you can point to startups these days that essentially retrieved those ideas from that era and reintroduced them to a world that had caught up with where they were at.

Sun moved in to this stuff. They understood it. They just weren't successful at figuring out how to run a business with it.


Technologists certainly tend to seek solutions, and find blame in technology or the application of it. Getting technology right is necessary, but not sufficient.

In the end, the world is much bigger than the tech we build & consume.


"And to be able to compete with Amazon they needed something they didn't have - distributed systems background, not Sun background, which was anything but."

lol. "The network is the computer."


I'm reminded of something a smart engineer said at work today. "Our customers don't care how we implement the system, they just care that it works"


That's how you end up with unmaintainable solutions and code bases that die under the cost of just trying to keep the system running, because they lose the ability to evolve.


I guess I should have provided more context. It was in the context of talking about a rewrite into another language to make something more maintainable and performant. The point was to explain how hard that was to get right, and that if we messed it up, customers would only really care that we messed it up.


Joyent has been a subsidiary of Samsung for a few years (presumably running Samsung's internal private cloud) and I guess they are now focusing almost entirely on Samsung and their public cloud was a distraction that didn't make much money.


Samsung acquired them because of their public cloud infrastructure -- they were Joyent customers for a long time and clearly decided that buying them would be the cheapest way to continue using them. As mentioned in TFA, they are only shutting down their multi-tenant cloud and are keeping their dedicated cloud business (which is what Samsung uses).


This origin story is not entirely accurate[0]. (I spent almost four years helping build datacenters for Samsung _Mobile_, who actually bought Joyent.)

[0] https://github.com/joyent?q=manta


Joyent was a public cloud provider that offered VPS/container-style hosting built on Illuminos (open-source Solaris). They expanded into offering Linux using some clever stuff to support the Linux runtime on Solaris (kind of like WINE, but much simpler, since Linux is much more similar to Solaris and also fully open source). All with DTrace, so you had great transparency into what was going on with the underlying hardware.


> Joyent was a public cloud provider, that offered a VPS/Container style offering built on Illuminos (open source Solaris).

When they first started they used FreeBSD jails IIRC.

The main selling point is/was being closer to the hardware and thus getting better hardware utilization. Virtualizing at the machine level was a lot slower (especially before Intel VT-x and AMD-V came along), so with OS-level virtualization one wouldn't lose (say) 30% right off the top:

* https://en.wikipedia.org/wiki/OS-level_virtualisation


> When they first started they used FreeBSD jails IIRC.

Nope. Solaris Zones.


Incorrect. FreeBSD was in use first, as founder David Young’s own history [1] describes. OpenSolaris and zones came later.

[1]: https://www.google.co.uk/amp/s/davidpaulyoung.com/2016/06/17...


Hrm.. I don't even remember them having a BSD jailed offering, but I'll take your evidence over my lack of evidence.


I thought Joyent was built around SmartOS? Although I never really understood the significance of the different Open Solaris forks. Or are these the same?


Disclaimer: I have a SmartOS setup, because for now it still runs quite stably.

Joyent/SmartOS was the one that added KVM support and a bunch of other features to the kernel, but I think that was merged into illumos. It's quite limited and has a clunky interface. When I tried to contact anyone there about adding UEFI boot support, there wasn't even so much as a "we don't care".

The problem with SmartOS, Illumos, and the rest of the Solaris world is that while for Linux you can find heaps of books, forums, and Stack Overflow entries, there are three books for OpenSolaris. SmartOS has a seemingly great design, but it's completely opaque about how to use it. The documentation is subpar; much of the wiki reads more like an experiment than actual documentation.

If you hit a bug, be prepared to spend hours on GitHub issues to see if someone found a solution to it (might as well skip it, because it's unlikely).

If you use Triton and the API, all is good, but if you want to use a SmartOS cluster directly, everything will be fine until you make a mistake or try to do something non-standard, and then good luck to you.

There's a great/fun talk by Bryan Cantrill about how management killed Solaris. But if you ask me, it's only half the story. The other half is elitism.

If you want people to engage with your open source project, it needs to be reasonably pleasant to work with, and you should be able to get some sort of feedback, even if that feedback is "lol, what you're doing is completely wrong, why don't you RTFM here".

But the attitude I saw was more like: "Our stuff is greater than everything else so why should we bother with you".


As communities go, I think we in the illumos and SmartOS communities pride ourselves on being helpful, so I'm sorry that that wasn't your experience. Speaking personally, I have always tried to meet applications where they were -- and I believe that this is reflected in our work on KVM and LX and bhyve. That said, documentation can always be improved -- and in particular, the wiki was an infamous mess that was recently replaced by a much better docs site[1].

My apologies again for your experience, and I don't think it's indicative of the community. I would encourage anyone who wants to get involved on the mailing lists or hop into #illumos and/or #smartos on freenode![2]

[1] https://illumos.org/docs/

[2] https://illumos.org/docs/community/


Not an illumos/SmartOS user, but can you comment on how good the man pages are? What you describe is a complaint I sometimes hear about OpenBSD, but in my experience using OpenBSD (mostly almost a decade ago now), that's because they actually put a huge amount of effort into making sure the man pages were not just correct but comprehensive (to the point of requiring code patches to include an accompanying man page patch if they added/changed a feature or changed functionality).

When first using OpenBSD this was jarring, because I just wasn't conditioned to expect that system level features would have useful man pages, as that was almost never the case in Linux.

As a simple example of this, here[1] is the OpenBSD man page for ifconfig. Note that it contains liberal references to other man pages of similar quality and completeness where a protocol or functionality might be better explained. Once I learned this, very little outside help was required to set up OpenBSD firewalls with stateful failover of connection state, as well as VPN tunnels with automatic IP failover via CARP. All from a handful of man pages.

So, that said, does OpenSolaris have good man pages? If it does, it might be that RTFM is the correct initial response to most questions, as they would be the up-to-date and definitive information about the what and how of things. A simple comparison of ifconfig[2] makes it look somewhat good, but that doesn't mean it's kept up to date and accurate. Can anyone comment on whether this is the case?

1: https://man.openbsd.org/netstart.8

2: https://illumos.org/man/1M/ifconfig


Those are very different use cases. Good luck trying to figure out your network design with dladm, and good luck trying to understand the capabilities and limitations of etherstubs from the man pages.

But yeah, the structure of the OpenBSD man pages looks OK. Where you put properly structured documentation doesn't really matter, IMHO.

It's conceptually different. You can easily use the OpenBSD man pages because they map nicely onto every other Unix system concept.

https://smartos.org/man/1M/dladm


Honestly, looking at the dladm man page, it looks pretty good to my eyes. It's technical, and long, but the fact that it goes into the capabilities in great detail, explains how to use it, and includes twenty separate examples at the bottom is something I see as a good sign. This is exactly the kind of reference that, combined with a couple of test systems to play with, I could use to figure out how to implement a network design.

Whether it accurately explains the capabilities of etherstub I don't know, but it seems to do a good job explaining what it is and what it's used for. The basic usage is detailed in one of the examples. I'm not sure what you're comparing it to that you think would be a superior way to document it, though. I honestly can't imagine it being better in Linux in any way. Windows, it would probably be explained better, but you might have to pay a few hundred dollars for the documentation or training, whether from Microsoft or a third party.
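For what it's worth, the etherstub workflow those examples cover comes down to a handful of commands (reproduced from memory as a sketch, so double-check against the man page):

```shell
# An etherstub acts as a virtual switch with no physical link;
# VNICs created over it can talk to each other entirely in software.
dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1   # vnic0 and vnic1 now share a private "wire"
dladm show-vnic                    # list VNICs and the links they sit on
```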

> It's conceptually different. You can easily use the OpenBSD man pages, because they map nicely to every other Unix system concept.

Not any more than SmartOS, I'm sure. IIRC, CARP is an OpenBSD designed protocol to compete with Cisco's proprietary VRRP. pfsync is their tool and protocol to sync PF (their own custom firewall implementation) states across a network link. sasync is their own protocol and implementation to sync IPSEC security associations across the network. These were eventually adopted and ported to FreeBSD, but they are far from "every other Unix system concept".

I view OpenBSD and IllumOS/SmartOS to be even more similar in approach now than I did prior to this discussion.


It really is disorienting when you pop open the man page and it actually is all the documentation you need.


“The world is full of magic things, patiently waiting for our senses to grow sharper.”


No need for a disclaimer. Thanks for the comment.


SmartOS is an Illumos distribution. Notable others include OmniOS, OpenIndiana, and Nexenta. https://en.wikipedia.org/wiki/Comparison_of_OpenSolaris_dist...


...I get voted up and then a day later I realize I made my usual mistake of typing in "Illuminos" instead of "Illumos". Apologies to everyone who suffered through reading that.


"Can anyone explain Joyent to me? My understanding always was that they're some special kind of hosting provider. But I never really got it."

Here is an old, historical thread that gives some context:

https://news.ycombinator.com/item?id=4391669

I have a hazy recollection of their being part of the "free thumper" startup program that Sun Microsystems offered, briefly, and then later blowing up one of those thumpers and losing a whole cloud's worth of data... here it is:

https://adtmag.com/articles/2008/01/21/strongspaces-10day-cr...

Later, they were briefly (in)famous for voiding everyone's lifetime shared hosting agreements:

https://en.wikipedia.org/wiki/TextDrive

"Customer backlash to the announcement turned out to be fierce."


The lifetime hosting was clearly never going to work. Even when they launched it, anyone with any sense - and me - knew it.

But it got them cash and publicity, and didn't kill them when they cancelled it, so, on balance, it made their business possible.


My understanding is that Joyent is comprised primarily of Sun refugees who've attempted to compete in the same OS/server space that Solaris operated upon way back when. They were acquired by Samsung in 2016-ish iirc. Solaris/illumos has failed to gain traction and the already-tiny community has only dwindled over the last few years with the expiry of OmniTI and other major illumos users.

The death knell really being the abandonment of illumos-gate as the authoritative ZFS upstream. If old-school diehards like Matt Ahrens are throwing up their hands and saying that the momentum in ZFS is in ZoL, the battle is over IMO.

node.js was always a diversion IMO. I think it was a play to maintain relevance, something Joyent has always acknowledged a need to do, but I don't think they've ever had the manpower or the buy-in to go full throttle.

I don't know if this is the end of Joyent as such or not, but IMO it's finally time for Cantrill et al to turn in. They've undoubtedly fought the good fight, but illumos is a farce at this point. Better to accept that their old-school big Unix development model is a thing of the past and put those efforts to something more practically useful, like FreeBSD. At this point, they're torturing the legacy of Solaris more than maintaining it.


While I appreciate your thoughts and prayers for my future endeavors, know that you will likely be disappointed: the beauty of open source is that it endures on the strength of its ideas alone -- and telling a group of people to stop working on a technology simply because you find alternatives scary is unlikely to be convincing. One of the most important lessons I have drawn from the many open source communities that Joyent has participated in is that large communities are not necessarily stronger ones -- and that sometimes the smallest communities are the strongest of all. In this regard, software communities are reflections of their shared values, a notion that I expanded on at length in a Node Summit talk a few years ago.[1] So if by asking me to "turn in" you are asking me to forsake my own engineering values for purposes of your comfort, well, no, I'm afraid that isn't going to happen.

[1] https://vimeo.com/230142234 (slides: https://www.slideshare.net/bcantrill/platform-as-reflection-...)


> So if by asking me to "turn in" you are asking me to forsake my own engineering values for purposes of your comfort, well, no, I'm afraid that isn't going to happen.

Of course I'm not suggesting that you forsake your values. I'm sure anyone who has followed you for any amount of time knows that will never happen. There is no stronger advocate for the principles upon which your work is based, and your fervor and commitment is extremely admirable.

My suggestion is merely that it's perhaps time to acknowledge that the market is not going to adopt illumos in a major way. An outgrowth of that is considering the possibility that the considerable engineering resources at your command may be better deployed in attempting to instill and carry forth the same crucial, badly-needed principles and values into projects that have more realistic prospects, to the extent possible, rather than remaining 1000% committed to a lost cause, where the principles and values will flare out in obscurity. "If a tree falls in the forest..."

illumos is an immense contribution to our collective engineering heritage and I entertain no illusion that it should be totally iced. Personally I would consider it a glorious resurgence to see it more widely adopted, and to see it become the basis of a new generation of systems. SmartOS is simply a joy compared to the train wreck that passes for systems management in today's mainstream, and I sincerely hope that the wheels turn to make it a more viable option. The ongoing open-source contributions of the faithful volunteer keepers of the flame will be invaluable in leaving that possibility open.

While there is no doubt that illumos's open-source heritage ensures it will live on, we must be realistic and admit that for all practical intents and purposes, the sun has set on the project's general commercial viability (at least for the time being -- the thing about sunsets is, they're followed by sunrises).

The question I pose is whether your advocacy may be more effective in bringing the values and contributions of yourself and your team in force to more active venues, than by continuing to toil in the last fleeting fragments of the twilight of Solaris's commercial viability.

I personally know many young engineers who admire your talks but have no exposure to the actual process of working with you directly. What would change if they got that? Would there be a Cantrill Youth? What's the likelihood of passing on that flame within illumos v. expanding your horizons to other groups?


There’s a gap in perspectives here, I think, which is roughly similar to that between Minix and Linux. Software doesn’t always have to compromise to pragmatism, because there’s nothing requiring that software be developed to be used.

Sometimes a reference implementation is intended to be just that, and nothing more—a reference, for other implementors of other systems to crib ideas from, especially as pertains to how the smaller ideas cohere architecturally in practice (which is hard to communicate by just writing papers about the individual ideas, or building them as standalone libraries, or building them into other systems where they must interoperate with components that don’t cohere into the desired architecture.)

Or, to put that another way: sometimes the best textbook is the codebase of a good implementation. And some people are more interested in writing those textbooks, than in personally getting their ideas into production systems. Especially because it’s far easier to refine the ideas in the context of such a coherent presentation of them.

Someone else, who reads such a “textbook” codebase, can put the ideas to work. But the ideas themselves will be better-developed for having had time to stew in the same pot as other novel ideas. This serves the people who put the ideas into practice as well.


Also, software doesn't have to be developed to spread widely.

I'm sure they're fine with it being Their Own OS that's only actually used for the Samsung Internal Cloud Thingy, while the source is public for anyone to take so it won't disappear from history.


whynotboth.gif. illumos can continue in purity while the full-time team interacts with others. That is the best way to see the principles successfully grafted in -- show that they have practical and immediate utility in the real world, even if they can't be implemented wholesale overnight. A pure textbook implementation is great as far as it goes, but it only goes so far, especially in today's highly-competitive internal engineering environments.


Sometimes it's worth fighting for to the end if that something you're fighting for is the right thing. And sometimes what happens is that the right thing does win out in the end, because it's better.

You're asking Rembrandt to stop painting realistic paintings because Picasso nowadays generally passes for art. What you don't consider is that there is nothing stopping Rembrandt from continuing to paint his realistic paintings for his own sake and pleasure.


You are the guy who warned about anthropomorphizing Larry Ellison, right? So true and so fun.


That is something Bryan has done.


+1 for thoughts and prayers!


> My understanding is that Joyent is comprised primarily of Sun refugees

This is by far the best explanation I have heard. Suddenly Joyent makes sense to me after all these years. They have been continuing to do what Sun did.

In a very similar way to how Sun followed the fatal strategy of simultaneously trying to make money selling proprietary Unix hardware, while also making their software open source and porting it to more popular architectures, and developing Java as a portable programming language intended to erase all differentiation between underlying architectures/OSes …

Yes: Joyent is (was) a hosting company trying to compete with AWS, claiming technical superiority in the software stack. Yes they also make an OpenStack competitor. And had their own programming language runtime, node.js, which seemed to be an odd implementation choice for their classic old-school unix tech. All of the above.


Technically SPARC is an open standard, and importantly Fujitsu sells their own hardware. OpenBSD has done some pretty cool mitigations on SPARC. What killed SPARC was that there hasn't been entry-level hardware for over 15 years. ARM became popular because of things like the Beagleboard and later Raspberry Pi. Enthusiasts and developers could experiment and familiarize themselves with the ecosystem, building mindshare and expertise.

Sun would have done better to have made SPARC hardware more accessible than to have open sourced Solaris. Hindsight is 20/20, but now that we've reached the limits of single-threaded performance Sun's emphasis on multi-threading and specialized ISA extensions would have made both Solaris and SPARC competitive today.

Getting there would have required cannibalizing their enterprise income, though, and that's difficult if not impossible for any company to do. They made the gamble on Solaris because it was less risky--major enterprises were always going to stick to Sun's Solaris--but low-end SPARC hardware absolutely would have hurt their bottom line.


Part of the overengineering in J2EE comes from essentially trying to make one app server be multitenant instead of just spinning up one process per tenant.

And this came from a company essentially selling minicomputers.

In fact, we are trying to do multi-tenant all over again. To my mind, this cycle of containerization is still trying to fulfill all the promises we were given in the early 90's about memory protected, pre-emptive multitasking OSes that never came true. Which itself wasn't the first time (it's all copying mainframe ideas, badly). Given all of the problems coming out about speculative execution and data breaches, I'm already curious what the next set of promises will look like.


Didn't AS/400 fulfill the promise? OTOH it may have been considered a 'mini-mainframe'.


Yes. And it was awesome.


The original pitch of Node.js matched well with Unix, it was a simple programming model that exploited async I/O abstractions in Unix.


> My understanding is that Joyent is comprised primarily of Sun refugees who've attempted to compete in the same OS/server space that Solaris operated upon way back when.

I mean, not really. Joyent is a hosting company (cloud & virtualisation). It's built upon an illumos distribution (SmartOS), but SmartOS is not their business.


I guess it needs clarification that I'm not trying to indicate they relied on proprietary hardware platforms like SPARC or a 1990s big-iron licensing model. These days, it's acknowledged that rent-seeking by a million cuts is the only viable way to make money. Even Microsoft is acknowledging that.

SmartOS may've been indirectly implicated in revenue, but let's be real, illumos and Joyent have always been tied at the hip. Joyent was never going to become a Linux-centric alternative public cloud like AWS, GCloud, etc.

If there had been less ideological fervor and with Samsung's backing, maybe they could've offered more serious competition to the incumbents, but the Solaris style oozes out of Joyent's every pore. It's fundamental to their way of thinking. That purity and commitment is certainly admirable, but it's simply not commercially viable anymore.


It's far, far from over as long as SmartOS remains easier to use and more reliable than GNU/Linux. The source code is there to fork, so just like GNU/Linux, there will be people working on it and using it. OpenBSD has 5.71 developers and yet they're kicking ass churning out security features and innovations like there's no tomorrow. I secured my copies of the illumos and SmartOS gates.

Joyent is proof that large scale cloud can be done with SmartOS.


I'm sure SmartOS is very reliable once you get it up and running, but "easier to use" than Linux is one of the most "[citation needed]" comments I've seen on HN in a while. Perhaps for folks who've had long familiarity with Solaris, it's easy-peasy, but I never found an OpenSolaris distribution/derivative that was remotely friendly to Solaris newbies.


I'm relatively new to Solaris, but I've always found SmartOS easier to manage than equivalent infrastructure on Linux: it's a pain figuring out the equivalent commands for free and top, etc., but the general system architecture is a lot easier to learn than the hodgepodge that is a typical Linux distribution (and I've been using Linux since Mandrake 7).


I'm an old Solaris diehard who grew up on Solaris, which was my first UNIX®️. I'm also a Solaris 7 and a Solaris 10 certified system administrator. So for me SmartOS is a walk in the park to use. I even prefer SVR4 tools over GNU.


You have five commands you need to know: zpool, zfs, imgadm, vmadm and man. The rest are the same applications as on GNU/Linux. I don't know of any other system where five commands suffice to run it.
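For illustration only: vmadm drives provisioning from a JSON manifest. Here is a hedged sketch of such a manifest with placeholder values (the alias, image UUID, and sizes are made up), of the kind you'd feed to `vmadm create -f web.json` after importing the image with imgadm:

```json
{
  "alias": "web01",
  "brand": "joyent",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "quota": 10
}
```

The zone's ZFS dataset then shows up under `zfs list`, which is why those five commands cover the whole loop: images in, zones up, storage visible.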


"like FreeBSD". I was hoping Bryan would turn to unikernels. :)


You know he hates unikernels, so this must be some sort of an internal jab.


We were hosting customers of Joyent in 2006-2008. They were fantastic. I remember having a technical issue and talking directly to the CTO, who was most helpful.

Before AWS launched, Joyent (in my opinion) had one of the best hosting services. We ran everything on them, and especially liked Zones as a feature.

Separately, no one has mentioned that Joyent also had a full web application framework at one time (mail, calendar, etc.) that competed with the nascent Google GMail offerings. Joyent EOL'd it back around 2010 if I recall correctly, but technically it was an impressive accomplishment.


From the announcement, they still offer "... Fully-Managed Private Cloud, Managed Hardware Private Cloud, and On-Premise Private Cloud".

If you look into their docs, it's a layer above components such as Docker, VMs, cloud hardware, even Openstack.


> My understanding always was that they're some special kind of hosting provider. But I never really got it.

The main selling point is/was being closer to the hardware and thus getting better utilization. Virtualizing at the machine level was a lot slower (especially before Intel VT-x and AMD-V came along) and so one wouldn't lose (say) 30% right off the top:

* https://en.wikipedia.org/wiki/OS-level_virtualisation


You will also hear a lot of bragging about how much more robust the partitioning is on Solaris versus Linux. Which is why they did the work to get Linux containers running on top of Solaris instead of just doing Linux.


I haven't been paying attention recently, but last time I checked, there were no known escapes out of Solaris zones, while there are quite a few CVEs about escaping Linux containers and even QEMU/KVM/Xen.

The one escape out of FreeBSD jails that I'm aware of (CVE-2005-2218) involved not a bug in the jails code, but burrowing out via devfs.

So it seems that OS-level virtualization/security has a better track record.


Yes, for a very long time the Linux devs were emphatic that cgroups should not be treated as an actual isolation/security barrier. This was flagrantly ignored and, like most things in the Linux ecosystem, the functionality was eventually kinda-sorta backfilled based on the community's refusal to accept anything else. Meanwhile, FreeBSD and Solaris had quietly provided coherent tools to accomplish the same ends securely for many years.

The story of the BSDs and illumos is really a tragedy about the complete and total disconnect between technical quality+value and technology adoption. People utilize technology products as fashion statements first and foremost, and the Linux fashion was easily the most desirable -- indeed, aside from PC v. Mac, it's the only one generally known by average technophiles.

As a technical entrepreneur, I've discovered the sad reality that it's really only the most superficial 10% of any functionality that drives virtually all adoption. Everything else is marketing, even in supposedly heavily-technical endeavors like running an open-source kernel. The more you invest in engineering beyond that initial superficial component, the less you have to invest in brand promotion.

One can choose to gratify their own sense of righteousness and ignore this, and that is certainly a valid option to take, but then they shouldn't be surprised to see their lunch eaten by someone who took the other side of the bargain.


I disagree.

Linux has 18 million lines of code in drivers. It can run on almost any chip/architecture. Distributions are fully featured, and most of the things you can imagine have first-class support.

I do not care how beautiful the architecture of the BSD kernel is if I cannot run my code on it.


Write the drivers for your hardware, bootstrap it, and it will be able to run. Wait a minute, wasn't that THE one and only answer to "my hardware doesn't run Linux!" from the early '90s all the way to 2005?


does Linux run on a toaster? ;)


> People utilize technology products as fashion statements first and foremost,

It's true that tech has a popularity-breeds-popularity effect, but I don't think it's mainly fashion (at least, for industrial use of tech like PLs used commercially), but network effects around available knowledge and skilled users.

Abstract technical superiority often isn't a commercial win without reliable availability of people able to use it effectively at costs that make it efficient. There are situations where buying the limited knowledge pool, or paying to grow in-house skill, is worthwhile, but often the "worse" solution that is well known is better practically.


Are you counting kernel exploits not directly related to jails/containers as an escape?


Generally, if someone was in a guest jail/container, there was no way for them to either (a) get to another guest jail/container or (b) get to the parent context.

That was true at one point in time (i.e., no known breakouts). I don't know if that is still true, and if it is not, when that breakout CVE was published.


In this case, do clouds with physical bare-metal provisioning capabilities like EC2 / GCP / Azure / OCI kill the Joyent value proposition?


Docker killed the value proposition. "Worse is better". FreeBSD's jails and illumos's zones have always been far, far superior technically, providing actual kernel encapsulation and coherent administration utilities, but Docker had the triple combination of the familiarity of Linux, the cute whale, and the bonfire of VC dollars to drive adoption.

There is still a great deal of administrative/technical value in using anything-but-Docker if you can tolerate the social stigma, but who can? The CxOs need that Docker/Kubernetes feather in their cap so they can look cool at conferences.

Suggesting SmartOS/illumos gets you sideways looks at the very bare minimum. When I stated that I was thinking about using it here over a year ago, my inbox was immediately flooded with accusations that I was part of a Joyent-bankrolled cabal, because clearly no sane ordinary person would consider such a dead platform. In the interim, OmniTI has gone belly-up and OpenZFS has deemed ZFS on Linux the rightful heir to the ZFS throne, and now the Joyent Public Cloud is shuttering. If the impression that you had to be part of a conspiracy to use SmartOS for a greenfield project was reasonable then, it's only more reasonable now.

The sun may yet rise again on illumos some day, but it's time to accept that it's well into the night now.


Joyent's value proposition was killed (for the most part) by the experience of using their public interface. It would've taken a great deal of bravery to try that and decide a local install would be better. The node thing also did a lot of damage - Joyent wrote a lot of the SmartOS/Triton command line tools in node so they were slow as hell. Triton itself is a very non-trivial install although quite probably less so than a complete k8s rig.

There are also some really subtle Linux compatibility problems. For instance: IIRC a SmartOS malloc will not fail. Ever. But it might not return either. There's a really complicated mark/sweep-type virtual memory thing, and the documentation is good for the nasty node stuff and kinda terrible otherwise.

Finally, and with no lack of irony, you just can't run SmartOS on the cloud. For a start it needs to do its first boot off a CD and subsequent boots off a USB drive-type-arrangement. The hard part is that most cloud providers run a butchered Linux kernel and not a fully generalised VM - and on vultr at least, while they run a proper VM, for some reason the processors don't "come up".

I've not looked at all this stuff for at least a year so it may have changed. Either which way I believe the majority of the community is now on OmniOS - which I always got on better with anyway.


> Triton itself is a very non-trivial install

Actually I'd say the Triton install is not only trivial, but should be a model for other similar systems. During my testing with Intel NUCs, I had a usable system deploying containers in under 20 minutes. Most managed Kubernetes systems cannot even provision in that time frame.

> you just can't run SmartOS on the cloud.

The installer may lead you to the USB and install path, but it is not necessary. I built Vagrant boxes that had SmartOS on a regular disk, and I believe that more recently SmartOS has been running well in GCP. There is no reason it could not run in AWS or Azure, either.


> Most managed Kubernetes systems cannot even provision in that time frame.

When was the last time you used k8s?! As of now there is even the experimental k3os, which boots a fully fledged k8s in seconds. You can even build a cluster with it. Sadly an HA backplane is still not available, since k3s is still early in its development.


I'm talking here about the managed Kubernetes offerings from AWS and Azure. As for when I last used them: waiting for provisioning right now (18 mins and counting) is affording me the time to comment on this article.


Maybe you should use GCloud GKE, or even DigitalOcean; they provision in under ten minutes as well. Not sure why AWS and Azure take so long.


I work on provisioning tools, I use all of them. I've definitely seen GKE take just as long as the others...


How long did it take you to install Triton?

How many systems did you need to install Triton?

How much memory did(do) the individual systems running Triton have?

Which documentation did you follow to install Triton?


The total time to install across 4 hosts was probably 15 mins or so. There used to be a post on the Joyent blog that explained the setup for small scale, I'm not sure if it is still there.


There used to be but I cannot find it any more, that is exactly why I asked...


"For a start it needs to do its first boot off a CD and subsequent boots off a USB drive-type-arrangement."

No it does not. I boot my SmartOS instances with PXE, DHCP and TFTP from the network, served by a Solaris 10 Intel server, no less. No CDs or USB images anywhere in sight.

In fact, I came up with a trifecta scheme where three SmartOS servers all run TFTP and DHCP servers, thus being able to boot each other and all the other SmartOS servers over the network. In the event that all three lose power simultaneously, THEN I reach for a USB stick with the emergency boot image. And I'll have two of those. But barring such a catastrophic simultaneous loss of power, there won't be any CDs or USB sticks needed.
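For readers wanting to replicate something like this, here is a hedged sketch of the DHCP/TFTP side expressed in dnsmasq configuration syntax (the poster used a Solaris 10 server, not dnsmasq; the interface name, address range, and paths are all placeholders):

```
# dnsmasq as combined DHCP + TFTP server for netbooting SmartOS.
# Interface name, address range, and paths below are made up.
interface=igb0
dhcp-range=10.0.0.100,10.0.0.200,12h
enable-tftp
tftp-root=/tftpboot
# Hand PXE clients a boot loader, which then fetches the SmartOS
# platform image (kernel + boot archive) over TFTP.
dhcp-boot=smartos/pxegrub
```

Since the SmartOS platform image is stateless and loaded fresh into RAM on every boot, any box that can answer DHCP/TFTP can revive the others.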

The network is the computer.


So not on AWS, then? Hence "on the cloud", I never said it was impossible in your garage.


Not really - since the bare metal provisioning tends only to be large, expensive instance types. The benefit of secure OS-level virtualisation is that you can co-locate the smaller workloads of multiple tenants on the same bare metal, while not incurring the overhead of hardware virtualisation.

When you compared the utility of a JCP "tiny" instance (1 CPU's worth of shares, 256MB of RAM) to a t2.micro or similar instance in AWS, the difference was astounding. Part of this is the CPU credit mechanism on T2 instances, but I'd wager that a good part of it was OS-level virtualisation.


> But oh, they're cancelling the public cloud, which I assume was the hosting service?

Public cloud != Private cloud

It looks like at the very least they still offer private "cloud" hosting.


Yes I'm interested to know if this affects what they were doing with VPC control plane for FreeBSD and the subsequent integration with Vagrant and VMWare.


The FreeBSD/VPC hypervisor efforts ran into a few “corporate speed bumps.” A few folks, myself included, are working to merge this code and make it generally available. The FreeBSD hypervisor effort certainly was the best systems work I’ve seen in my life, especially in such a short amount of time and given the resources at our disposal. My favorite anecdote was that the internal first-pass review of the benchmark data was that we had cheated because we had “exceeded the laws of computer science physics” when we saturated 100GE with VXLAN encap’ed guest traffic with no hardware acceleration, and we were only using a half a CPU core. If someone asked me to do this type of work again I’d use FreeBSD as the base again in a heartbeat. Good systems engineers are invaluable. Working from a solid technical foundation is equally critical, so is the community.

The control plane work likely won’t be made available (it was Linux specific, anyway). :-/

As for both the FreeBSD work and control plane work, it was good times, good memories, and a great team while it lasted.


"Corporate speed bump" is not how I would characterize any of what you did. Quite the spin!

I mean, if you want to publicly comment on your time at Joyent, please! I'm still pretty angry about it, and I left the better part of a year ago.


Wow pushing 100gbps on half a core without hardware acceleration is an incredible feat. Can you provide more information on how this was done?


Well you see, Sean is a known liar and conman. So it wasn't actually done. It is one of the reasons him and his team were fired by Samsung when their house of lies came tumbling down.

Or uh, so I heard.


Are there any blog posts/email threads you'd point people to if they want to read more about the work that was done?

I'd love to see and read about some of the things you ran into, and how you accomplished some of those high speed transfers.


The best talk I gave on this wasn’t recorded (and had another section on the details of the new MBUF struct that mmacy came up with), but here are a few pointers for the time being:

https://youtu.be/La4ekkKbM5o

https://github.com/joyent/freebsd/tree/projects/VPC

https://www.slideshare.net/mobile/SeanChittenden/freebsd-vpc...


It looks like that FreeBSD GitHub repo is private.


Thanks for the info! Yeah, I'd been lurking and reviewing what if any of it I could grok since I heard about it over a live session of BSDCan. Looking forward to it becoming GA you've got my blessings for sure.


Looks like they copied Sun's business plan. Nobody including Sun knew how Sun was going to make money after the decline of their hardware business.


They were a company that was around when Node took off, so they offered products and services around it.

Last I heard Samsung had acquired them.


Joyent created Node.js


Ryan Dahl created Node.js prior to being hired at Joyent, and then sold them the rights to it.


The io.js debacle happened because Joyent had too much governance over the project, and it was subsequently handed over to the board that currently governs Node.js. While the first lines of what became Node.js weren't written by Joyent, it can quite reliably be credited as the company that made it happen originally.


> they can be quite reliably be credited to being the company that made it happen originally

This is false, as someone who was actually there can attest. Node.js couldn't even run on Solaris when Joyent finally got involved, and Node.js technically and in terms of growth had already taken off thanks to Ryan Dahl.

Source: I'm the guy that ported Node.js to Solaris, and I have nothing to do with Joyent, then or now.


They recruited Dahl (and certainly contributed much through people like isaacs (npm; iirc he was at Joyent already) and more before he officially moved to Joyent), but if my memory of that time isn't too blurry, it had certainly gained enough steam by that point to credit him as the one who kicked it off. It surely was the Node we know today at that point (granted, with a much less enjoyable API surface). Not that this belittles their role; they certainly grew and provided a viable ecosystem for the large-scale adoption that followed.


I can say without a doubt, if it weren’t for Joyent, my fantastic coworkers, and the experience and opportunity they granted me I wouldn’t be where I am today. I fondly remember debugging issues and learning so much with my talented coworkers. Congratulations Joyeurs on everything you’ve done to forward the Public Cloud and Systems Engineering.


The people at and the community around textdrive & joyent was a key part for me being able to learn a ton of stuff online about servers, writing software, good design etc.

Random kind inspiring people on the interwebs (most of whom I never met in person!). Incredible.

edit: also had one of their early "VC" deals for lifetime hosting. Very interesting feeling to have something you don't have to pay for anymore on a monthly basis, inspires to make the most of it. Got a ton of mileage out of it and wasn't upset about it being sunset, by that time hosting was quite cheap already.

edit2: When they started to adopt OpenSolaris I took a closer look and eventually ended up with it as my main desktop system back then & learned about ZFS and dtrace. Magical, fascinating times.


> Joyeurs

Please tell me this is a play on the word Voyeurs


TextDrive (bought by Joyent) was one of the first hosting services to support Ruby On Rails as a first-class service -- in the days when static hosting, PHP, Perl, and Java/Tomcat were the only options on Linux infrastructure. They were well-noted for their "lifetime hosting" membership deal, which was incredibly exciting at the time (this was in a day where flat-fee and free hosting outside the likes of MySpace and Geocities was unheard-of). TextDrive was where all the cool kids hosted their projects.

RIP Joyent public cloud (and the last remnant of TextDrive's legacy).


Also I just remembered Strongspace (which was one of their products back then), very compelling and well designed online storage (e.g. for backup) back then. In a way this could have grown into Dropbox or Box when you think of it.

The Joyent email + calendar + notes web app was also quite impressive at the time. So many memories.



Hey David! Good memories. Glad to hear you've sorted your demons out! If you are ever in Cape Town, drop by to say hi.


Strongspace still exists and is still hosting the VC storage accounts which Joyent shed (or at least, I think so... parts of their site seem to be dead).

ExpanDrive's "Cloud drive client software" is also well worth looking into.


They'll forever remain on my mind for their rip-off "lifetime hosting". I paid a lot of money for it, and it was supposed to last while the company lasted. But then they were folded into Joyent and the lifetime deals discontinued (with just a little bit limited time hosting offered in compensation).

I have stayed away from them forever since, and warned everybody I could about their terrible business practices.


Sad to hear. I selfishly hope the company does well for the sole purpose of keeping Bryan Cantrill employed so he can continue to give fantastic talks that I watch on YouTube.


Wow, 5 months is an insanely short timetable to ask people to migrate entire systems to another vendor. I’m surprised their contracts don’t have better terms.


Joyent really doesn't have a great track record on shutting down services... That said, my view of the company is very much tainted by what happened many years ago, when they were a different type of company altogether. I personally purchased a few of their 'VC' plans before they went all enterprise.


If anyone's curious, some of that history was documented here: http://billturner.github.io/2014/02/28/whats-happened-to-tex...


I once had an idea to research and write something about the history of Textdrive. Seems like there's something in there about how the internet changed from the '00s to the '10s and it'd be neat to interview some of the players and see where they are now.



If anyone reading this has fewer than 100 servers running with them, I can handle a turnkey, documented migration to a provider of your choice in under 60 days.


ex-cloud ops from Joyent here. I've done the literal migration of TextDrive and Joyent's shared hosting off them (300+ Solaris Zones and loads of legacy FreeBSD Servers) so ping me if you need help migrating. jmarneweck at gmail dot com.

I also have experience running SmartOS on Leaseweb (they purchased Uniquity Hosting, where I set up SmartOS instances on numerous servers in numerous datacentres).


I would assume they've been discussing this move for longer with any customers who would need more time.

Decisions like this are long in the making, this is just the public announcement for the rest of us.


As someone whose previous company has been on their cloud for five years, they didn't reach out to us until 3 weeks ago.


I work at a shop (SADA) that can help with migrations if you feel the pinch: @milesward on twitter is a good way to reach out.



That doesn't explain anything really. What I want to know is what actually went on behind closed doors. Everything else is corporate public relations. In these cases hearsay is often far more accurate and reliable than these carefully worded statements.

The statement implies that Samsung's hunger for resources has grown so large that there is no more room for the public hosting side of the business, but considering how high-performance SmartOS zones are (bare metal performance), I find that hard to believe. How is that even possible?

But do let us, for the purpose of this mental exercise, presume that Samsung has indeed such great hunger for resources that public side of the business has to go: surely a chaebol as large and as financially powerful as Samsung has the funding to keep both its internal infrastructure hunger and the public side of the business sated, and easily at that?

Another option could be that the public side of the business was losing money, but considering just how trivially easy and fast SmartOS zones are to provision, and how little it costs financially to run them (it costs me literally peanuts, and I run my entire datacenter at my own premises and expense), it is hard for me to fathom what could possibly have gone wrong there: this technology is so advanced and ahead of its time that even a single customer running on it would be profitable, a bonus, and Joyent has more than one customer...


Let me give you some hearsay then from working in both megacorps and hosting.

Running a public cloud requires a massive amount of people, unlike what you seem to think. It takes one or two orders of magnitude more work to do anything public rather than internal. I bet there is no headcount and no budget left to keep running their public cloud. It's a small company; they don't have unlimited resources.

Second, there are no revenues, no future and no clients on their public cloud. Solaris for all intents and purposes is dead (it pains me to say that, but that's a fact). What companies run Solaris in this decade? None. Plus hosting competition is at an all-time high: DigitalOcean and others on the cheap side, AWS and Google on the enterprise side.

Samsung is a massive company and all massive companies have massive internal systems and need for internal infrastructure. I bet there is plenty of work that needs to be done internally that is much more valuable than public cloud. There they go.

If you've worked in any F50, you would know that the sheer number of servers and desktops is larger than an AWS datacenter's. They are all doing cloud hosting, internal or external, whether that's their main business or not.


Amount of servers? Not sure. Reasonable speculation suggests us-east-1 to be over one million physical servers at this point, based on Hamilton’s talk about AZ design and scale at re:Invent a couple years ago and their IPv4 consumption rate across AWS.

I don’t imagine your average Fortune 50 has millions of servers.


The talks a few years ago put a datacenter around 60k servers. us-east-1 region has multiple availability zones that each has multiple datacenters (both more so than the typical region because it's the largest one).

Consider that a Fortune 50 has hundreds of thousands of employees. With one desktop per employee that's already way more hardware to manage than an AWS datacenter.

Now think of how many servers, mobile devices, network gear, access points there are to run the business. 60k servers is not uncommon. A million devices is nothing special.


Perhaps this has more to do with goals and focus than cost/efficiency. In many tech companies the scarcest resource is man-hours for development/maintenance.

The order of complexity for public cloud/multi-tenant is much higher than for private/single-tenant, especially given the recent onslaught of CPU vulnerabilities.

Still a big loss. I imagine it must feel like a defeat, especially to Bryan, given the corporate culture at Joyent. Open sourcing triton is great, but it still needs hosting for people to use it. Amazon/Google/Microsoft could certainly cut costs if they leveraged bare metal so who knows.


I don't know that I had personalized it quite that deeply, but it's certainly an end of an era for us and for our public cloud customers. The technology is alive and well, and being used more broadly than ever for on-premises infrastructure. In that regard, the costs of the public cloud are less in the technology, and more in the surround of support, billing, etc. Certainly, the (significant) work that we had to do for CPU vulnerability mitigation needed to be done for our on-prem customers as well...


Will this affect Triton SmartOS?


Yep, I could easily imagine Bryan being crushed by this. I know I would be, knowing I have superior technology to anyone else and yet being told I have to withdraw from the competition.

It's like coming to a race track with a car and being disqualified because one's car is too light and powerful.


I don't know Bryan personally -I wish I did-, but I am pretty sure he's hard to crush. There's always something else to do.

Many of us Solaris Engineering refugees have gone on to do other things. And trust me, for me it was hard and still is because I don't yet get to use the new Linux eBPF stuff daily and w/o privilege for the equivalent of DTrace USDT, and I also don't get to use ZoL, and because systemd is a very poor not-SMF, and because there is no FMA for Linux, and because epoll makes me sad, and so on and on. But I don't dwell on all the mistakes made by Sun's executives from 2000-2010 (and they made so many terrible mistakes), not often. Since that seems popular in this thread, I'll tell you what I think those mistakes were (some of which Oracle is still making):

  - clinging to dying business (SPARC, J2ME) to the
    point that other, better business was forgone to
    avoid cannibalizing / competing with the dying
    business
    
  - killing Sun PS (2002)
  
  - insisting on unacceptable-to-Google licensing
    terms for Solaris (2002)
  
  - suspending Solaris x86 development (2002)
  
  - selling Sun to Oracle
There are more bad decisions, including buying MySQL (which was instantly forked, and that US$1bn price tag hurt).

Oddly, the torch of OS innovation has passed to MSFT. Not so odd, really, when you realize that MSFT learned the lessons of stack ranking (don't do it). And MSFT already had learned the lesson that to survive disruptions you must pivot. Bryan clearly learned those lessons early, so I think he'll be fine.


Apart from this entry I don't see any discussion on what Sun did wrong, but since you broached the subject:

- I'm furious about all the attention RISC V undeservedly gets while OpenSPARC is vastly superior in terms of mnemonics and virtual register windows and a free instruction every branch not taken and is GPL licensed;

- not devoting enough attention to i86pc Solaris was a major mistake;

- not selling cheap fully Solaris integrated desktops was the death knell: people knew Solaris from their student days, and universities had it because Sun made and sold workstations; once Sun decided to stop that, talent acquisition was stopped: desktop is needed for OS adoption;

- UltraSPARC T3 and T5 are blazingly fast, I know because I ran them, but now it seems too late;

- it wouldn't be too late if the RISC V nonsense pandering stopped and some OpenSPARC designed servers and workstations and even tinkertoys showed up by enthusiasts, like they did with Raspberry Pi;

- I'm glad Sun bought MySQL because Oracle subsequently destroyed it; MySQL is nowhere to be found now, and that is a good thing, because it was a shitty database which silently corrupts data; it deserved never to have been made in the first place; that is one good deed in the world of damage which Oracle caused;

- killing OpenSolaris destroyed all that good will; had the project continued, it would have directly competed with GNU/Linux because we saw a constant monthly influx of GNU/Linux refugees on opensolaris.org forum, people who actually needed to get things done with their hardware;

- biggest mistake was that UltraSPARC based servers weren't cheaper than an average intel personal computer tin bucket server. That really killed them. And they would argue and refer me, a customer, to their internal "pricing committee" instead of listening.

What happens when a business doesn't listen to its customers?

"Bryan clearly learned those lessons early, so I think he'll be fine."

Indeed, and I wish him well and much success. He truly deserves by his deeds let alone his thoughts to continue being successful.


Eh, killing OpenSolaris was done by Oracle. While that was a side-effect of Oracle acquiring Sun, it's still on Oracle, so I won't apportion blame for that one on Sun execs.

The death of MySQL is certainly a good thing, but it's not really dead. People use it still.

SPARC was a dead end.

I too wish Bryan well. And the rest of the Sun diaspora. I miss them!


Might be better to ask this on Quora. Maybe someone not bound by an NDA will answer.

On the other hand, the single-tenant customer is plausible. I know Samsung invested in a lot of technology in the past five years. The Viv team is still around (acquired around the same time as Joyent), and there is quite a bit of IoT Samsung is building into their appliances.


End of an era. Lots of good stuff to come out of Joyent and I hope we don't see the end of their OSS contributions. Manatee (the state machine behind Manta PostgreSQL HA), the SmartOS KVM port, Triton itself - huge body of great stuff in addition to Illumos contributions that they are perhaps better known for.


Manta was one of the most exciting technologies I touched five years ago. Color me impressed when an inline `sed` replaced 3TB of logs (2.5 years' worth) in 30 seconds on our Manta storage.


Used it as the backing store for the npm registry when tarballs were moved out of CouchDB. It managed to cope with us flushing the CDN cache (Fastly) and didn't fall over with users downloading packages.


> For our existing public cloud and private data center customers, adding scale, financial muscle, and Samsung as both a partner for innovation and as a large anchor tenant customer for Triton and Manta, will pay big dividends.

The tenant is now too large, please move out.


Fun fact, in 2008 Joyent was Twitter’s host and asked them to leave.


Interesting. TextDrive (which turned into Joyent) was one of the first Ruby on Rails hosting options, and Twitter was first built as a RoR app.


The truth is that Joyent bought TextDrive. At Joyent we were developing a big RoR app. TextDrive did great hosting. Jason and I decided to merge.


another fun fact: a lot of the early HackerNews posts were the daily drama of Twitter's RoR struggles.


I kind of miss seeing the Fail Whale.


same!

i also miss their xmpp bot for some weird reason.


Source? I've not heard that take and would be curious to read more about it.



This article makes it sound more like Twitter left the relationship because Joyent wasn't delivering on performance and stability.


It's true.


I even wrote a little poem about it at the time: https://thelocalyarn.com/article/birds


That's correct. :)


Oh no! I was just researching them and about to buy services! I suppose that's the problem. Too many "about to buy" and not enough actual buying.

But I mean, in honesty it's long overdue. Technical superiority loses out again.


One nice thing about having more cloud providers is having more options for where your computation was housed. I have had a machine at Joyent for a long time now (with the goal of building a service that I never got around to, so sadly for them only ever the one computer) because their US South West datacenter is located not just "in Vegas" but specifically at the SwitchNAP SuperNAP site, which co-located me with other resources I was using: I had a <1ms ping to my upstream telecom provider, and thereby could sit inside of real-time audio without adding noticeable latency (as well as have the option of getting extra-cheap bandwidth: I wasn't going to be costing Joyent anything for bandwidth but was still intending to use lots of CPU, so they had indicated a willingness to let me one-off this billing if I ever actually scaled up, though I was still quite happy to pay the full price for the epic latency). AWS is great (and I honestly used them for most of my less latency sensitive projects... hence the problem for Joyent, I appreciate), but their lack of a South West location means that the lowest ping I can get from them for this purpose is almost 20ms :(.


I have been a fan of Joyent's tech for several years now. This change seems like a scary one. Wishing them luck.


Sad day. I never used them but I always enjoyed Bryan Cantrill's postmortems and deep dives on Joyent all over the internet. Would love to see his take on this EOL announcement.

Bryan, are you around?


In terms of my take, much of it is in the blog entry: it was a difficult decision, especially because the public cloud is a big part of who we are. It's very important to us to do right by our customers, and the team has been very dedicated to doing everything we can to get affected customers onto new infrastructure. Beyond the EOL of the JPC, we continue to serve our on-prem customers -- and (to answer a question that came up elsewhere) our technologies are and remain entirely open source.


That sucks really badly... Sad customer here.


Luckily the tech is still going strong; Joyent and others are still there to push it forward. There are also EOL partners (https://docs.joyent.com/joyent-public-cloud-eol) that you can migrate to quite easily. (Hint: we're one of them.)


Do any of the EOL partners run a public Manta install? I haven't found one yet. To me that's one of the harder to replace parts of JPC.


What was their win over AWS or GCE?


Not a customer, but their main differentiator was co-locality of data and compute resources, leading to better utilization/lower communication overhead. The use of Solaris, their port of KVM etc. were implementation details. Sorry to see them go, one of the few "cloud" providers that actually seemed to do some creative thinking on the architecture side.


Yeah, not sure how you can compete against AWS / x86 / Linux in that space.


With technology which is an order of magnitude more concerned with reliability and easier to use than any competition out there, that's how.


By using Solaris / re-implementing KVM on a different arch / platform, of course it's the wrong approach: on one side you have 100s of engineers at Joyent working on that, while the rest of the world uses native KVM, x86, Linux etc ... it's just not possible to compete on a tech level.


SmartOS beats GNU/Linux feature for feature and it has features like fault management architecture that are science fiction for Linux. For example, it will migrate affected memory pages from portions of the chip with too many parity errors and it will off-line the chip if necessary, but the server will continue to function. All of that will be uniformly reported on top of that and is visible with fmadm faulty. I can off-line faulty processors and replace them on hardware which supports that while the system continues to serve.

Then there is COMSTAR, a reference fiberchannel and iSCSI implementation. illumos inherited the reference NFS V3 and V4 implementations from Solaris. The runtime linker is so advanced in respect to versioned interfaces that it looks like it's magical. The list could go on and on and on... Linux is nowhere near any of that. Linux barely works somehow but when it comes to running serious infrastructure and data integrity, that's not enough. Just because most people think bash completion and GNU commands are what makes GNU/Linux superior doesn't mean that it actually is: it's very crude and unreliable underneath, especially since you have armies of people constantly hacking on it and breaking kernel interfaces, especially in networking. Try running a firewall / VPN company with it to see what a nightmare it is. And that's just one instance.

And then there is cloud: nothing that GNU/Linux has comes even remotely close to how simple, secure and fast Solaris zones in SmartOS are. Literally nothing: I can pull an image with imgadm and instantiate a bare-metal-performance UNIX®️ or GNU/Linux virtual server in under 25 seconds with vmadm. No Kubernetes or Docker or OpenVZ or Terraform can do that, and those are all afterthought bolt-ons, while imgadm and vmadm are the core of SmartOS and its reason for existence.
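For anyone who hasn't seen SmartOS: the workflow is roughly `imgadm import <uuid>` to pull the image locally, then `vmadm create -f manifest.json` to instantiate the zone. A minimal zone manifest might look something like the sketch below (all values here, including the zeroed UUID and the alias, are illustrative placeholders, not from a real deployment):

```json
{
  "brand": "joyent",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "alias": "example-zone",
  "max_physical_memory": 512,
  "quota": 20,
  "nics": [
    { "nic_tag": "admin", "ip": "dhcp" }
  ]
}
```

The "brand" field selects the virtualization type (a native zone here), and memory/disk caps are enforced by the OS rather than a hypervisor, which is where the near-bare-metal performance comes from.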


I see multiple problems with that way of thinking. First, HW will fail, and AWS, Google etc. showed us that you should not invest in very sophisticated things for those kinds of problems.

It feels like you're building something to avoid HW issues, but they will happen no matter what, so it's better to invest in HA with a mix of standard servers + SW.

And Solaris zones... all the modern tooling is built around Linux / Docker / Kubernetes etc ... so why should I use something that no one uses and that lags behind in terms of tooling? Solaris zones do 1% of what a modern cloud (like Kubernetes) is capable of.

Linux is the present and the future. If you think a small company can change that, you are very wrong: by the time SmartOS gets something, you have Intel, IBM, RH, Google, AWS all working on Linux, with thousands of people working on many more things, not to mention HW implementation, and that's not even talking about HW integration, pricing, security, support etc ...


Because with Triton or just imgadm and vmadm one doesn't need Kubernetes. Solaris zones, imgadm and vmadm make Docker and Kubernetes obsolete.

"if you think with a small company you can change that you are very wrong"

  You must be the change you wish to see in the world.

  Mahatma Gandhi


You don't know what you're talking about if you think those are equivalent. And well their public cloud is pretty much dead so good innovation!


Joyent has had nowhere near hundreds of engineers, I'm sorry to say. Small tribe with amazing results.


It used to amaze Joyent's clients how small Joyent's Cloud Operations team was.


Apparently not.


There is nothing that can be an order of magnitude better than DO/AWS/GCE on multiple aspects.


prove it :)


Prove it.


As one of the migration partners (mnx.io), please reach out. We can help you move to anywhere, or keep you running SmartOS with us!

Nick at mnx io


I fondly remember Joyent as one of the first companies to do Ruby on Rails hosting right. They were using a combo of FreeBSD, Virtualmin, Apache (proxying to mongrel etc).

Fun times!


They were probably our (Virtualmin) biggest user for a few years. They used the Open Source version so we didn't see much revenue from it (occasionally their users would upgrade to the commercial version), but they sponsored a few Solaris-related features in Virtualmin/Webmin and sent us patches now and then. The shared hosting stuff that used Virtualmin was TextDrive, and once they moved to container-based products they stopped including a control panel at some point (and their container management stuff didn't use our tools), so we lost touch with them.

Edit: It's also how Twitter used Virtualmin in their early days, as they were hosted at Joyent. (I didn't know this until Evan Williams told me at a YC event. That was pretty cool.)


Evan was a lifetimer. Twitter's mail was routed via one.textdrive.com, if memory serves me correctly, using a truckload of aliases to redirect to offsite email accounts.

It was a pleasure dealing with the Virtualmin team on issues we had on Virtualmin on Joyent and TextDrive's shared hosting services.

Replying to the "wins over GCE/AWS" question above that I can't reply to directly:

They had various:

* The ability to run workloads at near bare metal speeds (they're using Solaris Containers).

* Manta (ability to perform compute operations on their storage service without having to download files and reupload files).

* Their work on getting a better user experience in zones with pkgsrc (thanks to all the hard work of Filip and Jonathan).

* Better performance and having the ability to spend less money on resources and reducing the instance count for folks moving off AWS to Joyent. I was involved in a number of these migrations.


That's a nice bit of trivia! Had no idea Twitter started on Virtualmin at Joyent.


Oop. I know a former employer will now be scrambling to migrate their legacy servers...


Did anyone actually even use that thing?


Wow - brings back memories!


not sure anyone really even used this thing.


Could have been predicted seven years ago. Kudos for making it longer than expected.


Hindsight is 20/20.

Linode (founded 16 years ago), and Digital Ocean (founded 8 years ago), are still alive and kicking, not to mention more traditional hosting companies like HostGator.


DigitalOcean have no serverless product and just spent a bunch of engineering resources on a new customer-managed Kubernetes setup.

The number of people who are unsatisfied with DO's VM product and would be satisfied with a k8s product is tiny. The number of people who would like somewhere to run functions and not care at all about infrastructure is growing rapidly.

If they're not skating to where the puck is moving they won't be around much longer.


I won't necessarily agree. There is space for "simple" hosting of something that is a bit more than just a (static/WordPress) website but not a large application, where cloud tooling setup is beneficial.


Sure but how much is that market growing?


As long as it is sustainable business it should be fine for companies in this domain.


Hmm...you can fairly easily (haha) layer functions/serverless over K8s, e.g. with projects like OpenFaas.


Sure but why manage your own OpenFaaS when you're already paying a cloud provider and the billing will still be 24x7?



