If you factor in that not much work gets done in many western countries until early January, the actual migration window is effectively less than two weeks.
Now I don't know how easy it is to move from hyper.sh to alternatives, but that is unprofessionally short regardless.
I would have expected at least three months. Is it possible that they are so short on money that they need to shut it down immediately?
Seems like it.
Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.
My service is entirely dependent on Hyper.sh and there seems to be no trivial migration path. I think I shall have to drop everything I was planning to do after the holidays and rearchitect for Kubernetes. :(
A few years ago, I ran a similar company that I also shut down, so it would be hypocritical of me to complain much. Competing with AWS is hard. (We did give users 3 months to migrate though.)
"Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic."
Not disagreeing per se, but I believe it's much more important to design for migration between providers instead of designing for abstraction over providers. The difference is subtle, but trying to make predictions about what needs abstraction will most likely lead to the same (or bigger) costs when a migration actually happens.
It's a bit like the general advice to design to avoid getting stuck in a hole, as opposed to designing to get out of future holes.
The thing that drew me to Hyper.sh in the first place was the fact that it used the Docker Remote API. That gave me some confidence it would be provider agnostic. Unfortunately no other providers have done this, so that idea didn't pan out.
> I believe it's much more important to design for migration between providers instead of designing for abstraction over providers
Right. If you design to be cloud agnostic, that means using only the lowest common denominator of features, and while it's doable, it's not cost-effective. The price structure is designed to shepherd you into the managed services.
Hm, "professionalism", to me, is a weird criticism to levy against a product or business that is presumably shutting down due to failure or lack of funding. Is it common, or expected, that startups allocate funds for clean shutdowns? I assume it's just burning through cash trying to make it work, until it becomes so unavoidable that you have to shutter suddenly.
>Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.
Yeah, but no one that matters cares - whether it's writing blog posts or putting your entire company brand/presence/blog on Medium (which harasses all of your readers), or relying on low-margin infra startups that often amount to a thin docs/marketing layer around Docker.
It's just like the people who get hosed by hosting/DNS providers with a history of screwing users, because they didn't think it would happen to them -- people are lazy, people think they're exceptional, and I think a bunch of people don't like learning infra and don't see it as important.
If they are actually out of money, they don't have any other option than to shut it down. You can't blame anyone for that. A lot of startups wouldn't have the option to budget for an orderly shutdown.
But their email doesn't say anything about being out of money, just that they are focusing on a different technology.
That's why I brought it up. If you have to shut down, own up to the real reason. If not, you are screwing your users to focus on something else.
I'd bet that they just can't cover the bills anymore, though.
Well, they're not out of cash - at least not right now. It appears to be a pivot. I'd therefore have to agree with your assessment that it's a dick move.
Could this be due to a fatal security issue? Running untrusted code side by side on native hardware doesn't give you a whole lot of room to patch vulnerabilities ...
And the situation has gotten a lot worse in the last year.
But Hyper uses lightweight virtual machines for multi-tenancy - they were actually one of the providers who got it right. It was one of their unique selling points.
Weird, no statement or anything that I can find, just a banner at the top of the page. As well, they're shutting off service in less than a month, during a time of year when many western countries, at least, have limited staff/availability due to holidays. While I understand that once they decide they will no longer be operating, they don't have a legitimate business reason to help (former) customers ease through the transition, it really seems like they went out of their way to be as customer-hostile as they could.
EDIT: Looks like an email went out (copied in a comment below) and that they are not shutting down as an organization, just this product. Even more curious as to why they'd offer such a short migration window to former customers. I would, especially in an organization, be truly hesitant to rely on any of their other technologies if this is the pattern being set.
We are writing to let you know that we decided to pursue a new direction in 2019, and will be closing down the Hyper.sh cloud platform on January 15, 2019.
Over three years ago, we set out to create an open secure container-native platform. We believed that containers represented a sea change in how software would be developed, deployed, and maintained.
Along the way, we created one of the first container-native cloud offerings, the Hyper.sh platform, which utilized our open source technology, called runV, which last year was merged with Intel’s Clear Containers project to become Kata Containers. We’re proud of the platform we built, and the influence we have had on the overall container industry. We are even more grateful to you, our customers, who have deployed hundreds of thousands of containers and built out new business on our platform.
The Hyper.sh platform, while trailblazing, is not where Hyper’s future efforts lie. Moving forward, Hyper is focusing all our attention and efforts towards the upstream Kata Containers project and in developing our Enterprise Kata stack for deployment in the major public clouds.
As of today, it is no longer possible to create a new account on Hyper.sh, and on January 15, 2019, the Hyper.sh cloud service will be shut down. Per section 11 of our terms of service, we wanted to provide you time to migrate off the platform and for the next month, our priority is to help your transition to other cloud services. If you need assistance, please feel free to reach out to us via Slack or your account dashboard. On January 15, 2019 any remaining user data and accounts will be deleted from the platform.
Please start now migrating your containers and data volumes off the platform. Directions on how to migrate your container volumes can be followed here. Please note, you will not be charged for either the container or the FIP in performing the migration.
Thank you for your business and support of our platform. It has been a privilege to serve you.
This is somewhat common. They've probably emailed all their customers with more details. One of them can post a screenshot of their email and link it here.
Wow! Just checked my email and got this. I was considering using this, but KNEW I couldn't build parts of my infra around it, because of one thing: the pricing. I even sent a message on their website asking what their roadmap was. After not getting a response for days, I figured the end was nigh.
It was far too low to build a sustainable business around, and so I decided to write my own stack. What I needed they served very well (the ability to run one-shot docker containers as an HTTP RPC) -- similar to AWS Fargate but much, much simpler.
Fargate's RunTask is close, but requires way too much configuration of roles and permissions and "provisioning" just to run a container. And even at the scale AWS is providing Fargate, their pricing was still higher! I remember the AWS pricing being something like 5c/hr priced out in seconds, but with a minimum charge of 1 minute, while hyper.sh was 1c/hr with a minimum charge of only 10s! So, roughly, here's why they're going out of business: the business requires very scaley things (think dinosaur, not lizard) in order to make very small amounts of money. And it was charging 1/5th of what the market-dominant service does.
Micro pricing only works when your customer is running high-frequency transactions. I would have been happy to pay $25, or even $50-100/month, for a starter service that does this well.
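Back-of-the-envelope, using only the prices quoted above (so treat the numbers as illustrative):

    # Revenue from one minimum-length job at hyper.sh's quoted 1c/hr, 10s minimum:
    echo "scale=8; 0.01 * 10 / 3600" | bc    # => .00002777 USD per job
    # Minimum-length jobs per month needed just to gross $100:
    echo "100 / 0.00002777" | bc             # => 3600288, i.e. ~3.6 million jobs

That's the dinosaur-scale problem in a single number.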
Whoa... I was evaluating this service a couple of months ago. The conclusion was: even if it looks nice, the major downside was that the company hadn't been around long enough and we didn't know if they were profitable. These kinds of moves, with sub-30-day notice during major holidays, are really scary. I feel for everyone out there having to work during the holidays. I hope the CEO did this out of financial necessity. Otherwise he should be put on some type of do-not-purchase-from-these-founders-blacklist.
> Otherwise he should be put on some type of do-not-purchase-from-these-founders-blacklist.
Does such a blacklist exist? Given the short notice, I expected that Hyper.sh would be gone (and thus free of commercial backlash), but it appears the company will continue in new areas, which makes this decision even more surprising.
I'd definitely be interested in having such a list and an easy way to determine if they are involved in new companies, which doesn't sound trivial. "Foo Co was terrible to their customers" doesn't tell me Bob made the decisions, nor does that tell me that Bar Co is also run by Bob.
The usual HN feedback for trying to track bad CEOs so you don't get bit in the future is that we shouldn't because they're humans who make mistakes and always deserve second chances. Not saying I agree but it's what I've noticed.
> The Hyper.sh platform, while trailblazing, is not where Hyper’s future efforts lie. Moving forward, Hyper is focusing all our attention and efforts towards the upstream Kata Containers project and in developing our Enterprise Kata stack for deployment in the major public clouds.
Kata (Intel Clear Containers + Hyper's runV) is big in the nested virtualization space but is still a tiny project (contributor-wise), consisting mostly of Red Hat. It's unlikely you'd run into these projects if you're not at an IaaS or dealing with containers accessing custom hardware (FPGAs, GPUs, etc). They're really cool, along with gVisor, KubeVirt, NEMU, etc. The really exciting part is that everyone is using these projects for extremely different reasons: IaaS, rendering farms, Android emulators. It's a really fun project to watch.
I think in the future there will be a big shift off of runc (docker) as the k8s default runtime now that CRI-O has made them pluggable.
> I think in the future there will be a big shift off of runc (docker) as the k8s default runtime now that CRI-O has made them pluggable.
I don't think there's going to be a big shift away from runc (though I'm biased, I'm one of the runc maintainers -- and runc is quite separate from Docker) for a couple of reasons:
1. Containers are still more than decent, and will always handle certain use cases and setups better than VMs can (due to the pliability of namespaces and shared kernels -- VMs have the fixed DRAM problem just like they always have).
2. At the moment, plain containers are arguably more secure than Kata containers (though this can be fixed "fairly" easily with some minor memory penalties) because Kata disables a bunch of security features in its VM kernels -- so you don't get seccomp or AppArmor protections for your containers. Now, you do get hypervisor security, but there was a study some time ago which claimed that a well-tuned seccomp profile is about as secure as a hypervisor.
3. Hooks (like the NVIDIA ones) will always work better with plain containers, because the whole idea behind hooking into a container runtime is that you can attach things to the containers' namespaces (with NVIDIA this would be vGPUs). Kata is trying (and succeeding in most cases) to emulate these sorts of pluggable components with their agent, but fundamentally they're trying to pretend to be a container (which is going to cause problems).
I think Kata is a really good project (and I'm happy that Intel and Hyper.sh joined forces), but I don't think it will replace ordinary containers entirely (even under Kubernetes). But hey, I could be proven wrong -- at which point I'll switch to working on LXC. :P
I noticed that Kata has started integrating Firecracker, which will be interesting and should help their performance and security stories going forward.
I'd agree though that Kata (or other VM-based containerization solutions) won't completely replace runc-based solutions.
One of the things I like about standard linux containers is the ease with which you can remove part of the isolation without turning it all off or on. Being able to easily do `--net=host` or add a capability is very handy in some circumstances.
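For example, with stock docker (nothing Kata-specific here):

    # Share the host's network namespace but keep every other isolation layer:
    docker run --rm --net=host alpine ip addr
    # Keep the default isolation but grant one extra capability:
    docker run --rm --cap-add=NET_ADMIN alpine ip link set eth0 mtu 1400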
Also the security story definitely isn't as clear as VMs > containers. Every isolation layer has had breakouts in the last year: VMs, gVisor, Linux containers.
> What problems do you see/think arise from kata pretending to be a container?
There are a few.
One of the most obvious is that anything that requires fd passing simply cannot work, because file descriptors can't be transferred through the hypervisor (obviously). This means that certain kinds of console handling are completely off the table (in fact this was a pretty big argument between us and the Hyper.sh folks about 2 years ago now -- in runc we added this whole --console-socket semantic to allow for container-originated PTYs to be passed around, and you cannot do that with VM-based runtimes without some pretty awful emulation). But it turns out that most layers above us now just have high-level PTY operations like resizing (which I think is uglier and less flexible, but that's just my personal opinion).
Another one is that runtime hooks (such as LXC or OCI hooks) are now a bit more difficult to use. There's nothing stopping you from doing CNI with Kata, but it's one of those things where either the hook knows that it's working with a VM (which requires hook work) or the hook is tricked into thinking it's dealing with a container (which requires lots of forwarding work, or running the hook in the VM). I'm really not sure how Kata handles this problem -- but the last time I spoke to the Kata folks the answer was "well, we're OCI compliant", which isn't really an answer IMHO (they also cannot be OCI compliant, because OCI compliance testing still doesn't exist -- but that's a different topic). I imagine their point was "we copy runc", which is unfortunately what most people think when they say "OCI compliance".
There was a recent issue a colleague of mine (who works on Kata) mentioned, which is that currently "docker top" operates by getting the list of PIDs from the runtime and then fetching /proc information about them. Obviously this won't work with Kata and will require some pretty big changes to containerd and Docker to handle this (though I would argue this would be a good thing overall -- the current way people handle getting host PIDs for container processes is quite dodgy). There is currently some kernel work being done by Christian Brauner to add a new concept called procfds, and all of this work will be completely useless for Kata (even though it'll fix many PID races that exist).
But as I said, Kata is quite an interesting project (the work done for the agent is quite interesting) and it fulfills a very important need -- people are still worried about container security, and adding a lightweight hypervisor will help allay those fears.
I don't think it's accurate to say it's mostly Red Hat, they are a sponsor and sometimes contributor but most of the work still seems to come out of the Intel and Hyper.sh folks who co-founded the project out of Clear Containers and runV.
Yeah, most Red Hatters in this space work on CRI-O, or Kubernetes, or runc, or KubeVirt, or the pieces on or around Kata from the VM side. There are a lot of other things that need investment.
Over the past 5 years I've seen too many PaaS providers shut down. Every time, those affected ask why, and my answer is the same: PaaS is a great tool but an awful business. We studied many PaaS providers and their business models, and I believe there are fundamental business issues with a PaaS provider that provides the infrastructure as well as the platform. The only way to survive the market is to have a rich owner (like Heroku / Salesforce), be a cloud provider (Google App Engine), run it yourself (DIY Kubernetes-like solutions), or use a service that manages a PaaS on your own servers (Cloud 66).
Wow, a two-week notice for an infrastructure provider is borderline criminal. I'm out for the holiday during this period... Guess it's going to be a scramble to get back on reliable ol' Heroku for me. What a nightmare.
The promise of "Speed of containers and security of VMs" is enticing, but is there a simple 101/quickstart for somebody that just wants to run one container in this way?
as in no Kubernetes, OpenStack, Multi-tenancy, nothing.....
Just one bare-metal server, configured as KVM host, and how can I [run/start/stop] one Kata container?
I feel like everyone just assumes you're a Kubernetes Pro and runs infrastructure at the scale of Google/FB/Amazon these days... :-(
k8s is really just the scheduler and gives you a uniform way to deploy the "vm" containers in the usual scenario. With k8s you can have workloads run on different runtimes (trusted=runc, untrusted=kata, etc.), and this is even easier now with RuntimeClass, which you can reference right inside a regular k8s deployment yaml.
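A sketch of what that looks like (assuming a cluster with the RuntimeClass feature enabled and a kata handler configured on its nodes; the names are illustrative, and the API group has shifted between alpha/beta versions):

    # Define a RuntimeClass for the kata handler, then opt a pod into it:
    kubectl apply -f - <<EOF
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: untrusted-workload
    spec:
      runtimeClassName: kata
      containers:
      - name: app
        image: nginx
    EOF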
Kata is actually just several binaries that talk via gRPC (kata-agent, shim, proxy, runtime) and interface with QEMU/NEMU. For instance, kata-proxy proxies commands over a virtio serial interface that's exposed via QEMU.
You could install the binaries and qemu-lite and have a similar system, but I'm not really sure how you'd benefit, as it's the management through k8s that really won me over. I think in your scenario you'd just be making very complicated QEMU VMs. I've linked this to the contribs, maybe they have some thoughts.
The documentation for Kata seems fairly straightforward for a single-host install. Install the kata packages, modify the Docker daemon config to change the runtime, then use Docker the usual way.
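Roughly like this (package names and paths vary by distro, so treat it as a sketch):

    # Register kata-runtime as an additional Docker runtime:
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "runtimes": {
        "kata-runtime": { "path": "/usr/bin/kata-runtime" }
      }
    }
    EOF
    sudo systemctl restart docker

    # Each container now runs inside its own lightweight VM:
    docker run --runtime kata-runtime --rm -it busybox sh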
Sad to see this. What happened to "serverless containers", especially the startups working on the idea? Months ago, Zeit.co gave up on hosting containers and changed direction to FaaS. Wondering if there is a technical reason (e.g. cost effectiveness or scalability) behind both changes. On the other hand, the big cloud vendors all provide serverless containers, though the experience may not be as smooth as what the startups provide.
AWS employee here. Serverless containers are hard, and there are a lot of technical challenges along the way.
For context, when we launched ECS years ago the goal was always to build a serverless container platform. But first we had to build out our own container orchestration platform capable of keeping track of all the containers at the scale that AWS requires, and build out a lot of underlying tech that didn't exist yet. Recently we open sourced Firecracker (https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...), which is one of the pieces we built along the way to enable the data plane. It took us a few years to get to this point, which is a really long time in startup lifetimes.
The major cloud providers have the resources to spend a lot of time building the technical depth required to offer full featured serverless container platforms that can truly scale up and out. In my view the smaller providers like hyper.sh really nailed the initial developer experience but they are missing the depth behind the scenes that allows serverless containers to be scaled out cost effectively.
In time these two ends of the spectrum will hopefully converge. By open sourcing tech like Firecracker, AWS enables small providers to have access to tech that we built because we needed it, and now it's available for them to use. Conversely, AWS learns how to improve our developer experience by seeing how these small startups create a great developer experience.
The best developer experience I have ever had was Zeit Now v1 - their serverless containers product. I would absolutely love for AWS to offer something similar (serverless Docker containers where I only pay for the periods during which they are serving traffic), ideally with a similar developer experience.
Nothing puts me off faster than having to click through dozens of web UI screens (or read a hundred pages of API docs) just to get a container launched and accessible via a URL. The Zeit Now v1 model, with everything configured from a single (optional) JSON config file and deployed with a single "now" CLI command, was absolutely ideal for me.
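For anyone who never saw it, the whole thing was roughly this (reconstructed from memory, so the exact field names may be off):

    $ cat now.json
    {
      "name": "my-app",
      "type": "docker"
    }
    $ now    # builds the Dockerfile and returns a unique deployment URL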
AWS Fargate, Azure Container Instances, GCP Serverless Containers (private beta). All the clouds have something to offer now, but you can also look at Kubernetes which can also run a single container very easily and has managed offerings everywhere.
I used Hyper.sh for my side project 2 years ago during the initial development phase. It was really easy to start with, but I wouldn't say it was stable. Every month or so there would be downtime, or the container would run but, for example, networking wouldn't work. Or storage volumes wouldn't mount/unmount.. :)
After several downtimes I switched to Google's GKE and never looked back. Hyper is easy to start with but impossible to finish. There was no managed database at that time (not sure whether they have one now), and with Postgres on their persistent volumes (which are backed by Ceph, I think) performance was really sad.
All in all, it was a great service for trying out ideas or running small & less important applications, but if you really want it to always be available, then probably it's not the right tool.
In fact, I had felt this coming for quite a long time. One day, they abruptly shut down their Pi (serverless K8s) product without any notification. Their forum was deserted. A little activity in Slack. Nothing but these signs:
- The forums quieted down.
- Direct product and support questions in the forums went unanswered.
- Frequent changes, almost pivots, in product development.
- No updates in the GitHub repos for key product-related software.
I note that their Twitter account has had no activity for a few months. This seems to be a common thing with services that shut down or products that get discontinued.
Makes me wonder... maybe it's worth building a tool that monitors the Twitter accounts of common services and products and raises a bat signal if they don't tweet for a month or two. That seems to be where the budget gets tightened first.
I've been looking for a service like this for a while, and the first I find out about it is that it's shutting down. Does anyone know of similar offerings that are as simple? I know AWS, GCP and the likes offer container clusters, but I would like to just run a single container (for cheap) and not worry about it.
I'm going to be honest: AWS Fargate isn't as easy to use as hyper.sh, if that's the alternative you're looking at. That said, our increased complexity is because Fargate is orders of magnitude more full-featured for applications that outgrow small platforms. If you want to give Fargate a try, I'd recommend this open source tool (http://somanymachines.com/fargate/) as it provides an easy, opinionated starter command-line experience similar to what you might have seen with hyper.sh.
Azure App Service's 'Web App for Containers' [1] allows you to run a container, but might not meet your definition of cheap. Also consider Azure Container Instances [2].
If you want something more similar to Hyper.sh, serverless containers are coming out publicly soon, and you can sign up for the alpha here:
https://g.co/serverlesscontainers
Zeit is dropping support for containers in their v2, and now Hyper is shutting down their service. Any other one-line deploys ("now" or "hyper run") for containers available out there?
App Engine is really nice: it's just `gcloud app deploy`, and they have support for multistage deploys.
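For a container, that's App Engine flexible with a custom runtime -- a minimal sketch:

    $ cat app.yaml
    runtime: custom   # build the Dockerfile in this directory
    env: flex
    $ gcloud app deploy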
I had to drop them because it's hard to start a HIPAA-compliant product on their platform as a solo dev. AWS, on the other hand, signs a BAA with you with the click of a button.
This forced me to learn AWS, and to learn that all I really need for the MVP is a single EC2 instance, RDS, a Redis subscription from Redis Labs, and CloudWatch.
Deploying is just pulling from GitHub, shutting down the service and restarting it. It doesn't really need to be simpler than that.
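e.g. the entire deploy can be a script like this ("myapp" is a hypothetical systemd unit, not something from the comment above):

    #!/bin/sh
    # deploy.sh -- run on the EC2 instance
    set -e
    git pull origin master
    sudo systemctl restart myapp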
I'm pretty sure that I could scale to 10k users per day on one EC2 instance, and by that point I would hopefully have venture capital and could hire an expert to handle all this for me.
They had a cron feature which was really good. I've switched to running a Kubernetes cluster for my side projects but Hyper.sh cron was really easy to get started with.
I was just considering using this service for a side project that needed isolated containers to run jobs in. Anyone have suggestions for similar alternative services?
No, for short-lived (a minute or two per run) batch jobs. But, it looks like most of the ones you mentioned can also be used for that sort of task. Thanks!
Me too (see thread for my comment). I decided in the end that using my own EC2 Ubuntu t2.micro was simpler and far cheaper than using and configuring Fargate. I assume you want to execute untrusted code (e.g. run CI on behalf of customers or provide a repl.it-like service), like I'm doing? I couldn't find a good service that did this well (well, I did find hyper.sh).
Well, in theory the code that's executing is in a sandbox already but I'd rather have that extra layer of protection. I was really looking forward to just having a little bit of glue code to hit a hyper.sh endpoint and not have to worry about any server administration. manigandham suggested a few alternatives that I'm going to look into.
Right now I believe the only major players with isolated vms are Oracle Cloud and Alibaba. There's a lot of movement in the space right now, though, so you'll likely find a number of options next year.
"Each Fargate task has it's own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task."
I think "builds" was the wrong term. (I've edited it to "jobs" above.) GitLab CI is great, but I don't know if it fits my use case, which is running intermittent batch conversions. I was planning on using hyper.sh to spin up containers on demand, instead of having to maintain a job server and queue of my own.