Knative – Kubernetes-based platform to manage modern serverless workloads (cloud.google.com)
423 points by dankohn1 on July 24, 2018 | hide | past | favorite | 163 comments



Joining in, Markus from IBM and OpenWhisk. Been involved with knative for quite some time now as well.

Happy to answer any questions that might arise as well. Answers might be biased and opinions are my own.

See IBM's statement: https://www.ibm.com/blogs/cloud-computing/2018/07/24/ibm-clo...


Can we see a design document/theory of operations, please? It’s difficult to evaluate this effort without understanding exactly what it is - namely, the constituent components, their respective functions, how they work and relate to one another, etc.


The docs repo gives a high level overview and a deeper dive into each of the components with samples:

https://github.com/knative/docs/blob/master/README.md


A bit off topic: I think this 'serverless' thing will certainly create a lot of jobs. What a messy architecture. Good for IBM, : )


@markusthoemmes could you kindly comment on how OpenWhisk and Knative interact, live together, or run side by side?


This of course needs to be discussed in the broader OpenWhisk community first. But taking dewitt's answer here into perspective (https://news.ycombinator.com/item?id=17604185), and based on what IBM has put out in their statement, a possible interaction could be to layer OpenWhisk's higher-level abstractions and multi-tenancy features over the knative stack and use the latter as an "execution engine".

In technical terms, OpenWhisk's invoker system could be replaced by knative, keeping the API/Controller bits stable to still support the notion of actions/triggers/rules/sequences/compositions, you name it.


I have a few questions.

> Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere

Are these middleware components that someone else is supposed to package together into something useful like a platform? The other bits lead me to think it's a platform of sorts in itself. So, why the talk of middleware?

> Knative offers a set of reusable components that focus on solving many mundane but difficult tasks such as orchestrating source-to-container workflows, routing and managing traffic during deployment, auto-scaling your workloads,

PaaS systems already do this. Other serverless things, like kubeless, do this too. Are these reusable components supposed to be packaged in a higher level system? It sounds like that but other parts of the page suggest it's a platform to use now.

> Kubernetes-native APIs

This is found in the docs repo. The examples are long form k8s style objects. Have developers (those being targeted) been interested in these? In my experience they like shorter form ones (there are numerous tools making that possible).

I'm curious about the meat and usefulness behind the hype.


> Are these middleware components that someone else is supposed to package together into something useful like a platform? The other bits lead me to think it's a platform of sorts in itself. So, why the talk of middleware?

It's intended to be usable as a single installation with the option to install individual parts. The textbook case is Build -- you can get things done with or without it.

> Are these reusable components supposed to be packaged in a higher level system? It sounds like that but other parts of the page suggest it's a platform to use now.

I think a bit of both. The joke I've made is that if Kubernetes is IaaS++, Knative is PaaS--. It's usable as-is, it aims to provide a common set of primitives for shared concerns, but it can also serve as a base for higher-level systems. The Project riff team, for example, have pushed some of their efforts down into Knative.

> In my experience they like shorter form ones (there are numerous tools making that possible).

Mine too. My personal view is that Build (and before long Pipelines) will be the main entryway to Knative.


Just to amplify jacques_chester's reply, yes, we do see products being built on Knative components. Google is doing that (serverless container support in GCF, serverless add-on in GKE), and you can see similar announcements being made from partners in this thread.

These are early days of course, but given that the goal is to codify the commonalities (the 80% we all do roughly the same anyway) and to improve customer workload portability overall, I hope to see new products built using Knative, and existing products re-base on Knative as well.

Good questions, btw!


> Knative is PaaS--

Except it bills itself as serverless. In a PaaS your apps need to have a server (e.g., http server). Serverless (e.g., FaaS or Brigade) elsewhere doesn't need this. Stuff isn't long running.

How is long-running server software like this serverless?

> My personal view is that Build (and before long Pipelines) will be the main entryway to Knative.

Pipelines? That doesn't appear to be in the docs. What is Pipelines?


> Except it bills itself as serverless. In a PaaS your apps need to have a server (e.g., http server). Serverless (e.g., FaaS or Brigade) elsewhere doesn't need this. Stuff isn't long running.

I think this comes down to the difficulties of terminology. I think of FaaS as a PaaS with some extra features (scale-to-zero being the most-noticeable). "Serverless" is a catch-all term for a variety of workloads, of which FaaS is the most visible.

> Pipelines? That doesn't appear to be in the docs. What is Pipelines?

It's a proposal[0] to evolve Concourse into being a Knative component, picking up from Build for complex workflows. I've been working on parts of this with various folks over the past few months.

I should emphasise that it is a proposal. For what you do with Knative today, use Build. It's a simple abstraction and it works right now.

[0] https://docs.google.com/document/d/1PicF7UhvSBpZLwichuY5hdhT... (to view, join the knative-dev group: https://groups.google.com/forum/#!forum/knative-dev)


I just had a call with Josh, a GitLab product manager, that maybe helps with some of your questions https://www.youtube.com/watch?v=k1jK4F4NoBw


Thanks for sharing.

Noticed you talked a bit about kaniko. It has some security issues that have been talked about but last I checked not addressed by the kaniko team. How are you dealing with those?


Thanks for watching mfer! Kaniko certainly doesn't address all of the challenges with building containers securely, but I think it's the best solution available _today_ that supports the common Dockerfile workflow.

There are a number of other promising tools like img, but they aren't readily usable yet because of dependencies on some upstream PRs.

At GitLab, we're trying to think of ways to help developers understand the challenges, as well as provide easy to adopt solutions as these tools become available. Would love your thoughts and feedback: https://gitlab.com/gitlab-org/gitlab-ce/issues/48913


I kinda have a hard time understanding what it is... is it kind of a 'glue' layer between Kubernetes and Istio? Can somebody give us a two-sentence explanation? Please and thank you.


This is a way to deploy and manage a stateless web application ("function") without needing to get into a bunch of low-level details, and get closer to pay-for-what-you-use.
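To make that concrete, a deploy is roughly a single manifest (a sketch based on the current v1alpha1 samples; the image and names are placeholders, and the schema is still alpha):

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: my-app
    spec:
      runLatest:
        configuration:
          revisionTemplate:
            spec:
              container:
                # any stateless container that serves HTTP on $PORT
                image: gcr.io/my-project/my-app

You kubectl-apply that, and the serving components take care of routes, revisions, and scaling (including to zero).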


Sorry, I'm still pretty new to Kubernetes. I am exploring using Serverless Framework + AWS Lambda/GCP Functions to build out a full CRUD-like REST API backend. If I used Knative, would I achieve similar functionality? Would each "function" (i.e., endpoint) be deployed in a separate container? If it's all in one single (autoscaling) container, you still don't achieve full pay-for-what-you-use, right?


Each distinct container/app/function scales independently.

Billing models do in fact affect our design discussions and we're still kicking the questions around[0][1].

[0] https://github.com/knative/serving/issues/895

[1] https://github.com/knative/serving/issues/848


Very interesting! thanks!


> get closer to pay-for-what-you-use.

How though, won't you need a Kubernetes cluster running?


On the small scale, no, it's not pay-for-what-you-use. Imagine though you're an enterprise, running a large K8S cluster with lots of workloads inside. By using Knative and scale-to-zero, now you can pack more lightweight workloads into the same cluster resources, because the pods scale down when they're not actively being used. It gives you (as the cluster operator in your company) the ability to run your cluster the same way serverless works in the cloud.


The exact economics depend on where the billing boundary falls. For on-prem, the economics will probably follow one of central IT budget, showback, or chargeback, and I expect the case for hugging the curve as tightly as possible is weaker.

For cloud providers the boundary is likely to become execution-seconds. As Kubernetes worker nodes become abstracted away this will be the remaining way to track utilisation.


Can someone also give us a 20-sentence explanation of how it works, or an example installation? "It does magic" still leaves me wondering if this is worth investigating.


You basically have a git repo that contains your business logic implemented as 'functions', plus some configuration: knative will build, containerize, deploy, and glue/route those functions for you to make a full-fledged app. It will also take care of load balancing, scaling, etc. So all you really need to do is focus on your business logic code and write some config for knative... well, this is at least the vision.
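In manifest form, that vision looks roughly like a Service pointing at your repo and a build template (a sketch; the v1alpha1 schema is in flux, and the "buildpack" template name here is hypothetical):

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: my-function
    spec:
      runLatest:
        configuration:
          build:
            source:
              git:
                url: https://github.com/example/my-function.git
                revision: master
            template:
              name: buildpack   # a BuildTemplate installed on the cluster
              arguments:
              - name: IMAGE
                value: gcr.io/my-project/my-function
          revisionTemplate:
            spec:
              container:
                image: gcr.io/my-project/my-function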


This is just a summary of what it is. A car is a machine that moves you around the roads. But how does the car work?

Are you using it? Because the docs seem to show that what you describe is not the case. The devs seem to have to write integrations for everything, so almost nothing is done for them. Saying Knative will "do it for you" is like saying your car will "drive itself". You only have to steer it and work the pedals ...


If you check out the section "CPU and Memory Usage" in [0] they have a listing of containers and sidecars that run.

[0] https://github.com/knative/docs/blob/master/serving/debuggin...


I am giving you the higher-level view of what I think it does... read the docs! Or even better, the source code: it's open. Explaining how the car works can go as far as explaining how physics works, because you need to understand combustion, gravity, etc. EDIT: and that will exceed 20 sentences by a few very heavy books.


According to the docs, the high level overview is misleading.

You don't have to explain physics to explain how a car works. You only have to explain it in terms of something your audience is familiar with. Magnetization would be hard to explain to someone without physics, because there's no other parallel to quantum mechanics for the average person.

But a car is based on general principles which most of us understand at a simplistic level. We understand "stickiness", so "stickiness" can stand in for "coefficient of friction". We understand "weight", so we don't have to explain how gravity works. We understand that fuel, oxygen and a spark creates a fire (or explosion), so we don't have to explain chemical reactions. We also understand that explosions exert a force on things around them.

So when I say that an explosion in your car's engine exerts a force pushing against a piston, and that piston is connected to a rod, and that rod pushes on and turns a crank, which (skipping the transmission for brevity) turns an axle, which is connected to a wheel whose tire is stuck to the road, and that the weight of the car on the wheel forces either the road or the tire to move, we understand that even though the car is heavy and the turning force on the tire is pretty strong, the road is probably stronger and isn't going to move, so instead the tire moves, and the car is attached to the tire. So you can understand at a basic level how a car works without having to know physics.

What I was asking for was what combination of components in what order are required to make the thing run, and how these things are accomplished without the developer seeming to have to do anything. If they've built a car for developers, that's great, but it seems more like they've created nuts and bolts.


One of the items that's not present at the moment (and dewitt can probably provide additional color) is the top-level developer shell which puts the pieces together with a minimal amount of client-side work. If you watch Oren's GCP Next talk (https://cloud.withgoogle.com/next18/sf/sessions/session/2204... -- which doesn't show build or events yet), you can see a deployment using `gcloud serverless` to both containers on GCF and a Knative cluster, differing only in a single flag telling gcloud which endpoint to use.

The short summary if you had a nice client shell would be:

1) Run a command to deploy. That command:

a) determines what build templates are available on your cluster and what language/tools you're using, and finds a match between the two.

b) creates a YAML definition of your application on the Knative cluster (and stages your source if needed).

2) On the server side:

a) The build component will (optionally) trigger to take staged source and convert it to a container.

b) The serving component will create Istio routes and various pieces to schedule your app into a k8s Deployment.

   i) This Deployment will scale to zero if there's no activity, and scale back up if needed.

   ii) Scale-to-zero is accomplished via a (shared) "actuator" which stalls the incoming HTTP request until a Pod is live.

c) Additionally, the serving component loads various observability tools like Prometheus and ELK (by default; the no-mon or lite installs skip this) so that you can see what's happening even as your pods appear and disappear.
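In object terms, step 2b boils down to the controller stamping out resources like the following (a sketch; the shape follows the serving spec docs, and the revision name is hypothetical):

    # a Route created from your Service; the Istio rules hang off of this
    apiVersion: serving.knative.dev/v1alpha1
    kind: Route
    metadata:
      name: my-app
    spec:
      traffic:
      - revisionName: my-app-00001   # each Revision is backed by a k8s Deployment
        percent: 100

The Configuration half of the pair stamps out a new immutable Revision on every update, which is what makes the traffic-splitting in the Route possible.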


Sorry, I was just trying to help. You're obviously perfectly capable of clicking on links, reading, and figuring out things for yourself... My intent was to help figure out if this was worth investigating, as you were wondering, at which I clearly failed. Your car analogy still falls short, but it doesn't matter. Best of luck ;)


The car analogy is just right, IMO. It's a functional description of how a car works at a high level, so that we can understand how it's put together; and if we're reasonably expert in mechanics, we can fill in a lot of missing details. Whereas "a box that you sit in that lets you get to where you want to go quickly" leaves out far too much detail.

Too many descriptions of tools for devs sell benefits instead of functions. This is normally good sales practice, but devs are experts in the mechanisms behind the tools they use, and they generally prefer more functional descriptions so that they can more quickly evaluate whether any given tool is a good fit for them, and what its strengths and weaknesses will be.


Yeah, on the other hand I'm looking closely at this one just because of the people that showed up in this thread. I know some of these guys and their past work is exceptional, so if they're claiming to be involved in this, I am sure I'll be sorry if I don't check it out.


OK, they're obviously not just involved; they're actually driving...


> ...But a car is based on general principles which most of us understand... (etc)

This is bar none the best explanation I have ever read of how a car works, without hand waving, to clarify for non-believers in car technology. I want to reward you, but I am kind of an outsider who honestly just strives to use Kubernetes, unsuccessfully, and I haven't got it off the ground at my organization yet (and I may never...)

So let me do my best, as an outsider who you must understand is absolutely hand-waving based on a quick read-over of the high-level documentation, and an understanding of how these systems go wrong; but I have no honest understanding of this particular stack (but then again, I've seen a lot of what some of the contributors are doing, so perhaps your grain of salt need not be too big... but I digress, in under 20 sentences...)

Serving ("Scale to zero, request-driven compute model"):

You're aiming to build out your environment inside of a small footprint. If all of your customers go away, and stay gone for a day, you'd really like for your stack to approximately stop the cash bonfire altogether. This is a goal of the stack, too.

Build ("Cloud-native source to container orchestration"):

So your footprint is a program, and you programmed it in a language... great, an event... to be treated like other events, like a new customer that visits your website, or a new commit from one of your devs... whatever build is necessary for your stack to come into being, it's handled inside of this stack. Not to spoil it, but: events like this are the key driver for the entire system, which the system architecture actually reflects in a way...

Events ("Universal subscription, delivery and management of events"):

A minimal gateway serves as a router that intercepts customers, and as an infrastructure stander-upper, standing up the infrastructure on demand while the greater parts of your stack are basically disposable and automatically self-destructive, so that every time a new customer comes along, the request actually starts the whole response stack anew. Then, upon finding no further traffic to answer, the newly provisioned stack rapidly disposes of itself to save on the cost.

The response stack tears itself down entirely on the way out after the response is served. Unless there's another customer, if the capacity remains unutilized for long enough to leave a mark... it's gone. But obviously this goes both ways. We don't want anyone kept waiting in periods of increased load, so if ever there's no capacity available, we want to increase the capacity as demand spikes, in response to the demand, to keep it satisfied. Again, this is baked into the platform.

Serverless add-on on GKE:

* The fine print: you must at least have GKE, or another Kubernetes cluster or provider at equivalent service levels, to enjoy the benefits described above. This runs on GKE, or to be more precise, Kubernetes. That infrastructure stander-upper actually lives in the footprint of a GKE cluster. If you've paid for a cluster before GKE, don't worry, what I just said is still potentially much smaller and cheaper than you think. (GKE can scale to about a $5/mo baseline footprint if you are small-time like me.) If you know how resource scheduling on Kubernetes works and you know how autoscaling of Kubernetes cluster nodes works, you're about 90% of the way there toward knowing how this scaling situation works too.

There is a function gateway, that minimal gateway I spoke of under "Events", and it is a persistent process that can't be stowed away for cheap when it is not answering requests. But it drives the whole cluster. It translates events into demand: requests on topics spawn Pods to respond, and Kubernetes will react to Pending pods with new Nodes to allow extra capacity to schedule those pods. tl;dr You need not keep extra capacity around when it's not actually needed. Don't even worry about it. The cluster will autoscale in response to rising and falling demand, and the bill will definitely come at the end of the month.

---

I've been trying to wrap my head around the whole Serverless Function thing for a couple of weeks now, as a Rails dev who hasn't had very much exposure to it and a Kubernetes enthusiast, well I think I get it now. (No one serverless stack is going to win, but obviously there will be a winner. Scaling to zero is the big win here. It's not the first platform to purport to scale in response to events, and even scaling down to zero, but not many have done this from what I can tell.)

Riff is one that advertised this "scale to zero" capability in their project before, and they are apparently involved with this project too, so that's neat. But if it's a car, back to where you started... and the commits from your developers are the feet on the pedals, ...oh hell we don't really need another car analogy do we? I seriously can't write anymore of this kind of garbage now, at least until I get my keyboard on the terminal for a while and try the thing out.

Supported on Minikube. I can tell you I tried Riff out this weekend (Riff team is represented here in this thread, they are apparently deeply involved in Knative), and I went through the experience of adding support for a new language runtime for Riff, and it was a lot like "not really having to do anything" other than put my feet on the pedals and keep control of the wheel, in terms of how the stack let me do what I know about, and getting out of my way for the most part.

I think I'll learn how to use gRPC now. I think I get the idea of what a "sidecar" container is really meant to be used for, now. I think I should stop writing though, and try compiling the source and see how this new runtime environment on Kubernetes behaves. I hope it's better than Riff (because Riff was impressive from the demo to the trial, but I don't think the Riff devs will be working and focused on this instead, unless it's actually going to be even better than that. They have no lack of vision in this space, in my humble opinion.)


https://www.youtube.com/watch?v=AEf4DhoyF00

This is about 20 sentences, ...


Why isn't a polished version of this the blurb?


They need to hire me :)


Email me. : ) We're hiring. I'm dewitt@ all the obvious places.


How would this compare to something like serverless.com?


The serverless.com framework is built around existing serverless/FaaS platforms and allows you to write vendor-agnostic functions and deploy them interchangeably. It does not provide the resources to run your code; it abstracts away the different APIs of different vendors to make them seem like one.

Knative on the other hand provides building blocks to eventually build a serverless platform. It provides the resources necessary to run and scale your application.

In short: You could use the serverless.com framework to deploy applications/functions on knative. But you still need some layer actually running your workload, like knative.


Not sure, they seem quite similar (I am not familiar with either, to be honest), but the main difference in my opinion is that this is built on open source and backed by the big boys: Google, IBM, SAP, etc.


So it's like Serverless framework for k8s?

https://serverless.com/


I think it's more like what you would use to build serverless.com. A fairer comparison, IMO, would be https://cloud.google.com/functions/

But I've used neither, so ...


serverless is a config/deploy thing that deploys "functions" (to Lambda etc.), ties them to HTTP routes or other events, provisions resources (storage etc.), ties them to your applications, and so on. The main thing it does not do is scaling, because you're deploying to a platform that handles that.



Thank you for the link to the samples, they are much more informative. But it still doesn't quite explain it.

The samples show that you can make an app, make a container, and make a service config file, and deploy your app to K8s. Yes, we've been able to do that for some time now.

This thing is supposed to provide a bunch of advanced features for devs to not have to think about. However, the build repo says this:

"While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that: + Retrieve source code from repositories. + Run multiple sequential jobs against a shared filesystem (for example, Install dependencies, Run unit and integration tests). + Build container images. + Push container images to an image registry, or deploy them to a cluster."

"While today, a Knative build does not provide a complete standalone CI/CD solution, it does however, provide a lower-level building block that was purposefully designed to enable integration and utilization in larger systems."

So as a developer you still have to have all the things you had before, but with extra layers of abstraction now, apparently just to support hybrid cloud installations.
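To make that concrete, a Build is essentially a list of step containers that you supply yourself (a sketch patterned on the knative/build samples; the repo and step image are placeholders):

    apiVersion: build.knative.dev/v1alpha1
    kind: Build
    metadata:
      name: example-build
    spec:
      source:
        git:
          url: https://github.com/example/app.git
          revision: master
      steps:
      # you choose and maintain these images; nothing is provided for you
      - name: build-and-push
        image: gcr.io/kaniko-project/executor
        args: ["--dockerfile=/workspace/Dockerfile",
               "--destination=gcr.io/example/app"]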

The marketing lingo appeals to developers as if it makes all this simple, when in fact it may be more complicated.


I don't get it either. It seems like a lot of clapping for hot air. Yeah, it's great that there's a proper build/serve/events separation of responsibilities, but now what? What's the "killer app" here, "the use case"?

This seems nice though: https://github.com/knative/docs/tree/master/serving#more-sam...

It seems to have OpenTracing (Zipkin) integration: https://github.com/knative/docs/blob/master/serving/debuggin... (you need to install elasticsearch and stuff for it of course: https://github.com/knative/docs/blob/master/serving/installi... )

And assigning a custom domain: https://github.com/knative/docs/blob/master/serving/using-a-... ... okay, I was hoping I could specify a whole URL at which to mount the "app" (something like https://my.fancy.pants.tld/api/app2/).

It seems to me that the weakest part of this is currently build. Mostly because build is pretty linear and one-off, and well explored by other projects (GitLab CI/CD can easily run on and deploy to k8s), whereas knative is mostly about serving and eventing, meaning all the interaction between the lifecycles of stuff on k8s.


It'll be interesting to see the differences from other systems in this space (OpenShift, Deis Workflow). From the samples it appears to be more of a "pull code from github and push to Dockerhub" model than a "push code to the platform and push to an internal registry" that other PaaSes target.

https://docs.openshift.com/container-platform/3.3/install_co...

https://deis.com/docs/workflow/understanding-workflow/archit...

disclaimer: I was one of the core maintainers of Deis Workflow.


Oh, I miss Deis, but congrats on the MS acquisition.

I think looking at build is not interesting, because it seems that knative currently focuses on the serving and events part (some thoughts on this you might find interesting: https://news.ycombinator.com/item?id=17607401 )

The repository is just an example. Using an internal repo seems just as easy.


Interesting.

I had to learn Kubernetes very recently and this seems to simplify a lot of the boilerplate needed to have an app running.

One of the most grating parts for me was getting the ingress to run with a proper SSL certificate with the right handshakes, though (had to install an nginx controller just for that).

That's the kind of thing that everyone will go through, and it's solved in one click on Heroku. Yet it seems to be left out of all the samples; if it's out of scope, that greatly diminishes the appeal.


I think there's a Kong ingress controller, which gets you a very powerful nginx setup.

https://github.com/Kong/kubernetes-ingress-controller

Knative uses parts of Istio for serving and TLS/SSL setup: https://github.com/knative/docs/blob/master/serving/using-an...
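The TLS part of that guide amounts to handing a standard Kubernetes TLS secret to the shared Istio ingress gateway, roughly like this (a sketch; the secret name is the one the gateway mounts, per the Istio docs of this vintage):

    apiVersion: v1
    kind: Secret
    metadata:
      name: istio-ingressgateway-certs   # mounted by the ingress gateway pod
      namespace: istio-system
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate>
      tls.key: <base64-encoded private key>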

It's a bit funny that knative serving is so complicated that I still have no idea what they use under the many layers of abstraction (probably something hacked together in Go), and I don't understand why they don't use a generic configurable ingress component.


You have source code. You want a simple way to turn it into running functions and apps, with eventing support, with clever routing and rollout capabilities, without lifting a finger.

It's an abstraction layer being developed above both of Kubernetes and Istio.


Ahh, so you just worry about your business logic, which will be a bunch of 'functions', and knative will take care of routing, deployment, load, etc.?


It isn't limited to functions. Knative can run containers, or with a build system take full app code and deploy it as a container.


so it can be anything really ... hmm


I think Knative (and the scale to zero in particular) are best suited for request/response or event-delivery workloads. So you wouldn't want to run (for example) memcache or mysql under knative. FaaS and (HTTP application) PaaS are both good matches.

I think it's an open question whether a PaaS like Google Dataflow is a good match or not. It will certainly require more planning, but I think it's doable.


Yeah, but don't forget this is a higher abstraction on top of microservices. I think the big takeaway here is that if your app falls into the pattern of 80% of modern apps (can't remember where I saw that number, don't quote me :) and is split properly, you can have all your 'ops' part as configuration.


I would say that yes, that is a good statement of the vision that the contributors share.


Got it! thank you! It's actually very interesting ...


You still need a Kubernetes cluster running though, right?


Yes... from the install guide: "To get started with Knative, you need a Kubernetes cluster."

https://github.com/knative/docs/tree/master/install


This is what I am trying to figure out too. Is Knative standalone, or is it running inside your K8s cluster, or do you build a K8s cluster and run this in it exclusively?


Knative installs into a Kubernetes cluster. Knative as it stands is not multi-tenant, so if you need to isolate workloads, you will need distinct Kubernetes clusters.


I think it's mostly pods on your k8s cluster ... see the link to the install guide above.


It has software, yes, that of course runs in pods, but ..

It's a whole lot of infrastructure to support a PaaS / FaaS workflow. From building and routing/serving/tracing to complex event driven stuff.

It's fully k8s-native (hence the name, I reckon), and it uses CRDs, which are basically k8s's DSL for describing anything: a k8s standard for k8s plugins, with schema, validation, schema change management, kubectl support, etc.
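For anyone who hasn't run into CRDs, a minimal one looks like this (plain Kubernetes, nothing Knative-specific; "Widget" is made up):

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com   # must be <plural>.<group>
    spec:
      group: example.com
      version: v1alpha1
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget

Knative's Services, Routes, Configurations, Revisions, Builds, and so on are all defined this way, which is why plain kubectl can manage them.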


Based on the linked GitHub readme, it seems that this allows you to deploy autoscaling apps ready for public consumption without having to worry about defining load balancers, ingress routes, etc. There's a lot of documentation and it's difficult (for me) to completely grok.

GitHub readme: https://github.com/knative/docs/blob/master/install/getting-...

Example Ruby app: https://github.com/knative/docs/blob/master/serving/samples/...

While I can see how this greatly simplifies deploying an app onto a Kubernetes cluster, I'm failing to see how this helps for serverless workloads.


Serverless means that you as a developer don't have to manage servers. There's still real computers behind the "serverless".

Knative calls the role of the person providing the servers the "operator". Kubernetes is great for operators because it has a lot of common low-level primitives, and lots of choice if you want to pay someone else to be your operator.

Knative aims to give a similarly great experience to developers, if you can convince your operator to install it on top of the Kubernetes they already have. In particular, if an operator charges you for container runtime minutes on knative, you're getting close to the pay-per-use model of Lambda or App Engine. Also, as you noted, developers should have fewer concepts they need to grok in order to deploy a knative app compared with Kubernetes.


Looks like Beanstalk for Kube by Google.


I've been around Knative for a few months on behalf of Pivotal (our blogpost here[0] and TheNewStack here[1]). Other leading contributors are IBM, SAP and Red Hat.

Subject to my biases, what would people like to know?

[0] https://content.pivotal.io/blog/knative-powerful-building-bl...

[1] https://thenewstack.io/knative-enables-portable-serverless-p...




Hey, glad to see you again!

It looks like I have some more catch-up to play; I see a Rails logo on this page...

I emailed you a couple of days ago or yesterday about the Ruby support in Riff. There's not a PR yet but we got Ruby into a working state again! It looks like not a lot of people are interested in Ruby on serverless platforms.

Can you shed any light on why? And what kind of Ruby support can I expect from knative, given that it's not apparently coming from the Riff project now... as a Rails dev who wants to use this stuff today, how do I get started?

(It has a Rails logo on the page linked by the post, so I assume I can use it now, but I haven't gone deeply enough to see if there are limitations... I'm used to Ruby support being neglected in the serverless areas, I don't know if it's because we Ruby devs are slacking, or there's a fundamental flaw I haven't seen yet, or what...)

So, what can you tell me about that Rails logo on the landing page?


Yes, I'm sorry I didn't reply to you -- I was caught by the problem of knowing this was imminent but not being able to say so (or even hint at it).

> It looks like not a lot of people are interested in Ruby on serverless platforms. Can you shed any light on why?

I imagine there's some mix of market demand and path dependency. A lot of folks who like experimenting moved onwards to the Node ecosystem and a lot of FaaS work has happened there. Meanwhile enterprises are heavily invested in Java and .NET.

The riff team at the moment is pretty small. One thing coming up is to fold buildpacks into the code->running pathway for riff. This should make it easier for the riff team to get out of the business of supporting particular ecosystems and for some of the engineering work for buildpacks to be shared amongst multiple setups.

In terms of starting with Rails today, it should be possible to use the buildpacks BuildTemplate to run the existing Ruby buildpack. I don't know if it's working; it's early days for heavy automation of development on Knative itself.


Awesome! That's totally understandable, your non-response was actually a hint that you were up to something, so it's good to see what it was. So I'm not that surprised to find you here in this discussion.

I'll check it out in more detail when I'm back in front of a computer! Glad to see the rails logo somewhere new, too.


what's happening to riff? https://github.com/projectriff/riff


Project riff members have been deeply involved for months. Basically, we'll move up the stack to use Knative as the base layer for riff.

It's nice because we get to share some of our work with the wider community.

The riff team have a more detailed post: https://projectriff.io/blog/announcing-riff-0-1-0-on-Knative...


Thanks to everyone who played a part in this! The Google team is also watching this thread. Happy to answer any questions people have.


> Can we see a design document/theory of operations, please? It’s difficult to evaluate this effort without understanding exactly what it is - namely, the constituent components, their respective functions, how they work and relate to one another, etc.

Just a comment here about this in general - given all the resources at Google’s disposal, I’m perplexed as to why we don’t see world-class (or even at least halfway decent) documentation being shipped along with the product at first release. As a community I think we need to demand higher standards in this area. If GNU can do it, Google certainly can.


Is this something like what you're looking for?

https://github.com/knative/serving/blob/master/docs/spec/ove...

Or a little higher-level:

https://docs.google.com/presentation/d/1CbwVC7W2JaSxRyltU8CS...

There are a lot of technical docs in the individual repositories. You can get started, for example, with the serving docs under:

https://github.com/knative/serving/tree/master/docs

Same for the other repos, like build and eventing.

Thanks for asking!


Not really, I'm afraid. Resource descriptions don't constitute a theory of operations, and the higher-level document you linked to is not public.

By "theory of operations," I mean a design document, often but not always created before a line of code is written, that describes in plain English what is to be built (or, what was built). It often discusses things like:

* What problems are being solved?

* What attempts have already been made to solve the problem?

* How does it work? How do the components interrelate? How does one operate it, especially at scale?

* How does this solution solve the problems better than the alternatives?

There are lots of great examples out there. I like to point to Consul[1] as a textbook example of fantastic documentation, and it's been there since day 1. Google would do well to follow Hashicorp's and GNU's lead.

[1] https://www.consul.io


I once had an interesting argument about Learning Perl.

The first chapter, when I picked up the book, was a tour of Perl. I loved it. It showed me all of the highlights with no details at all. I never read another chapter, and instead picked up the second book in the series to use as a reference.

My counter in this argument HATED that first chapter. They almost didn't read another one, because it put all of these examples in front of them with no depth. They thought the book would be much improved by removing that chapter.

I would have been bored to tears by the book this person wanted to read.

I'm not going anywhere in particular with this, except to say that the world takes all kinds. Sometimes docs don't exist simply because nobody realized someone else would find that shape of document useful, so they decided not to write it.

The docs I love may well be the docs you hate :)


Have you taken a look at the docs repo? https://github.com/knative/docs/blob/master/README.md

We have the high-level overview and deeper dive into the details for each of the components, install instructions and samples.


Yes, but they don't fit the description I discussed above. Or at least they're not organized in such a fashion.


Your doc is marked invite only, BTW.


So you found the Zipkin UI link in the Microsoft GitHub docs already?


This looks really cool, I'll definitely be looking into it more closely when I get a chance.

Obviously this is a conflict of interest for Google, but just curious if you know of any plans in the works for being able to run this on AWS EKS? It's the obvious omission from the list of supported clouds on the installation page[0].

[0] https://github.com/knative/docs/tree/master/install


If AWS EKS gives you access to kubectl or similar, it is probably trivial to set up. There shouldn't be any obvious roadblocks. It would be a great contribution to add to the guides!


AWS EKS definitely does support kubectl, but requires the aws-iam-authenticator (formerly Heptio) to integrate with AWS IAM. Docs here: https://docs.aws.amazon.com/eks/latest/userguide/configure-k...
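Concretely, the user entry in your kubeconfig ends up looking something like this (per those docs; the cluster name is a placeholder):

    users:
    - name: eks-user
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args: ["token", "-i", "my-eks-cluster"]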


There seems to be a problem with Istio working on EKS, actually -- I'm getting a timeout to the sidecar injector: same issue as this: https://github.com/istio/old_issues_repo/issues/271


I'd love to see a PR that documents Knative on EKS!


I think PRs would be gladly accepted. But, in general, it should work fine with any conforming Kubernetes system.


It's been a really interesting experience. Google's position as a catalyst made this level of cooperation possible.


Huge thanks to Pivotal for being an amazing development partner. Pivotal's deep experience with enterprise customers and developing products like riff brought real-world knowledge and technical leadership to Knative early on. We're super grateful for the opportunity to have worked together on this!


How does serving work? Is it possible to use it with Kong (or other ingress controllers)?


I wrote a little post to explain why Knative is important from the developer perspective https://medium.com/@mchmarny/build-deploy-manage-modern-serv...


Great post, really cleared up the value proposition of this project for me!


Just a quick reminder from the installation documents (emphasis mine):

"If you already have a Kubernetes cluster you're comfortable installing alpha software on, use the following instructions"

This looks cool, and will likely be useful in the future, but this is still alpha-quality software. Don't bet your business on it by installing it to your production clusters. Don't let your development cycle be driven by the latest shiny thing, no matter whose logo is on it.


They say Red Hat is involved.

How does this relate to OpenShift?

I really hope that, for instance, Knative's builder will be merged with Red Hat's source-to-image (S2I) builder.


The current Build design allows for a relatively straightforward extension to use S2I as a BuildTemplate. You can see other examples here: https://github.com/knative/build-templates

As and when Builds becomes Pipelines, I anticipate that it will remain relatively trivial to integrate existing build infrastructure without too much fuss.
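A sketch of how that extension might look (hypothetical; the real templates live in the build-templates repo linked above, and the builder image here is made up):

    apiVersion: build.knative.dev/v1alpha1
    kind: BuildTemplate
    metadata:
      name: s2i
    spec:
      parameters:
      - name: IMAGE
        description: where to push the built image
      steps:
      - name: s2i-build
        image: example/s2i-builder   # hypothetical S2I builder image
        args: ["build", "/workspace", "${IMAGE}"]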


We definitely want to support knative, integrated well with existing concepts (like triggering deployments and builds when images change), to stitch in the new build resource so it works well with existing tools like s2i (as Jacques mentioned), and generally make sure these are good building blocks for applications. There’s a lot of good ideas in knative that will benefit anyone who uses Kubernetes.


The github page is now public. If you want to see the actual code you can find it at https://github.com/knative


I'm a bit confused about the "eventing" application. Is it a message bus like NATS? a persistent event log like Kafka? or is it an adapter that allows applications to interact with both?


"eventing" is building an ecosystem to make it easy to connect events to event consumers (whether they are Knative Services, k8s services, VMs, or even a SaaS).

In order to do this, we've broken the problem down into 3 parts:

Buses provide a k8s-native abstraction over message buses like NATS or Kafka. At this level, the abstraction is basically publish-subscribe; events are published to a Channel, and Subscriptions route that Channel to interested parties.

Sources provide a similar abstraction layer for provisioning data sources from outside Kubernetes and routing them to the cluster, expressed as a Feed (typically, to a Channel on a Bus, but could also be directly to another endpoint). Right now, we only have a few generic Sources, but we plan to add more interesting and specific Sources over time.

Lastly, we have a higher-level abstraction called a Flow which bundles up the specification from the Source to the endpoint, optionally allowing you to choose the Channel and Bus which the event is routed over. (Otherwise, there is a default Bus used to provision a Channel.)

All of this is also very much work-in-progress. You're seeing the workshop as we put down our tools yesterday, not as cleaned up for a tour. :-)
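With that work-in-progress caveat firmly attached, the Bus layer sketches out to something like this (the API group and field names were still moving targets at the time of writing):

    apiVersion: channels.knative.dev/v1alpha1
    kind: Channel
    metadata:
      name: orders
    spec:
      clusterBus: kafka        # whichever Bus is installed (stub, kafka, ...)
    ---
    apiVersion: channels.knative.dev/v1alpha1
    kind: Subscription
    metadata:
      name: orders-handler
    spec:
      channel: orders
      subscriber: order-service   # e.g. a Knative Service that receives events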


Sources can also expose the multiple types of events (EventType) they emit. For example, GitHub has 40 or so events, so you can choose which types of events you want to consume in your handler.


Can you point us to the design document so we can further study it?


This one is harder to point to; eventing is still under fairly active discussion. The Eventing Glossary doc[0] and the working group notes[1] are probably the best references right now.

[0] https://docs.google.com/document/d/1JSPgieg0DiP4uIfo_zDoojo6...

[1] https://docs.google.com/document/d/1uGDehQu493N_XCAT5H4XEw5T...


The eventing repo provides a common pluggable model for managing events. The communication layer is currently HTTP, not NATS. But the broker is pluggable and includes support for Kafka. You can choose to use Kafka throughout your cluster or just for a single feed. You can add other implementations as well.

The three pluggable abstractions are your event sources, the bus over which events are stored/sent, and the actions which should be invoked in response to an event.


Eventing provides primitives that can use messaging systems under the hood (Buses). But it's meant to be a higher level abstraction for connecting functions or applications.


So Kubernetes by itself is not high-level enough, and we need a layer on top of it? Turtles all the way up.


Have you ever used Kubernetes? The abstraction it focuses on is running container images; building them and getting them to a registry is a whole separate abstraction.


This is a great answer. Kubernetes is missing (or said another way, OpenShift has...) buildconfig and imagestream primitives to make building containers a natural extension of your Kubernetes deployments/daemonsets/whatever.

OpenShift has done a really good job of being there wherever Kubernetes is lagging behind. They may not provide the new canonical implementation of the solution to the foreseen problem, but they are expert at predicting where the gaps will need to be filled (and preemptively making an effort to fill them).

I see the Helm project has specific advice for multi-tenant usage now, too, in their best-practices docs! That was one of OpenShift's criticisms of Helm, and now it's evidently solved(-ish) with the advancements in modern k8s RBAC and some documentation.


Thanks for that - we definitely think there are many great ideas floating around in the Kube ecosystem, and projects like knative bring together a critical mass of perspectives and people to take “good ideas” and make them “things we all can depend on”.

knative is a mashup of good ideas and patterns from app engine, openshift, cloud foundry, FaaS, and public cloud providers. I think it will fill the missing space between containerized apps and true FaaS on top of Kube, while still making it easy to break the abstraction glass as necessary.


Isn't that the computer science/software industry in general?

Everyone builds upon other people's work.


Are serverless applications used internally at Google, or is this just for their cloud hosting platform?


I am glad to see attempts at making a high-level platform. However, I feel like picking Istio and solving build and FaaS all at once will be too opinionated for many. Will there be some push-down of some of the functionality into Kubernetes? Will any of this turn into a specification with alternative implementations?


One of the goals is to make the components loosely coupled at the top (so a product can choose among them as needed), and also pluggable at the bottom (so you can swap out implementations as needed, for example logging and monitoring).

I expect that some of this will end up upstreamed into Kubernetes proper if it's broadly useful (autoscaler work, to pick an example), but this is still super early so let's see what people want.

And yes, we're documenting the control plane APIs, the data plane requirements, and the contract for serverless container environments, with the hope that they can be reimplemented consistently wherever needed.

One of the explicit goals is workload portability, so separating spec from implementation is critical.


Re: upstreaming - Kubernetes has tried really hard to make it easy to be extended, and knative also tries to use those mechanisms and be nice, clean, reusable APIs that can be used on any Kubernetes cluster. I think it's increasingly likely that "upstreaming" an API pattern means "everyone installs it to their cluster", but we'll need to see how that plays out.


Some of the attributes of serverless workloads will require changes deeper down in the stack (read: Kubernetes). For instance, constantly getting containers up or throwing them away at high rates stresses parts of Kubernetes more than it's used to. I fully expect lots of requirements to make their way into Kubernetes itself.


There will probably be stuff pushed down based on things Knative needs. For example, we've discussed the effect of what we call "the warmth hierarchy" on autoscaling (tl;dr, bad).

Solving those problems can't be done entirely at the layer in which Knative is developing -- some changes will need to turn up in Kubernetes (node-local caching awareness) and Istio (workload/node-aware routing).


The Google Next keynote is happening now: they announced Istio 1.0, but not this. Interesting.


We have several days of keynotes at GCP Next and way more to share throughout the week. But for the curious, there's a serverless spotlight session at 1:55 today that will cover this, as well as a Knative-specific talk at 4:35.

We decided to announce this first thing with the blog posts just to get the word out as soon as possible and give our amazing partners (Pivotal, SAP, RedHat, IBM, and more) a chance to share their news as well.

And frankly, it's just more fun to run an open source project out in the open, so we were pretty keen to flip the git repo bits to public as early as we could. : )


How is this different than Kubeless on Kubernetes?


knative has "distributed tracing" and supports scaling to zero, and aims to be loosely coupled. that said I have no idea how kubeless works, but I like that they have a CLI for quickly deploying functions.


It strikes me as highly significant that a large portion of the questions here are along the lines of "what the heck is this thing?" I'm certainly in that boat: as far as I can tell Knative is some Kubernetes-based cloud something-something that maybe lets you host your own FaaS? Like OpenWhisk? Or maybe it helps you deploy or something (isn't that already what k8s does?)?

Hopefully the Knative devs (or marketers) take this as a sign that the homepage needs a major overhaul, as it's not at all clear to the average visitor what the product is.


1) It is mentioned that Knative "delivers a serverless experience to developers on any cloud". How does it work with any cloud? Are you referring to the fact that all clouds have k8s? Is it already supported on any cloud, or will it be?

2) If any cloud is supported, does it virtualize services such as storage, queues, API management, and identity?

3) Can Knative seamlessly switch monitoring to what is provided by each cloud?


What are the limits on this? AWS Lambda has a 250 MB code size limit, a 3 GB memory limit, and a 512 MB temp storage limit. How about this?


How big is your Kubernetes cluster?


Can I run functions written on any platform on Knative? Java? .net core? Js?

What about startup times? How do they compare to other solutions?


> Can I run functions written on any platform on Knative? Java? .net core? Js?

Anything that meets the Container Runtime Contract[0] should work.

> What about startup times? How do they compare to other solutions?

Startup times need a lot more work. Knative has a number of moving pieces which are contributing delay to startup time and they're being discussed or attacked by various groups. Some of this work will probably require changes to be contributed upstream to Kubernetes and Istio.

[0] https://github.com/knative/serving/blob/5b8542fb58044834d50f...


Startup time is the hardest (and most valuable) piece for everyone to work together on. By improving startup time of Kube containers, we can benefit (and draw from) a lot of people who are using Kube today, not just knative. This is a great example of trying to find an “everyone wins” scenario in open source and succeeding.


Why would you use something like this over, say, Google App Engine Flex?


I'm the engineering manager at Google responsible for both Knative and App Engine Flex. What we heard from a lot of customers is that they want a serverless / App Engine Flex-like experience but one that works multi-cloud or in hybrid/on-prem setups. The goal for Knative is to fill that need, especially for people that already use kubernetes. That being said, App Engine Flex is a solid GA service that provides a fully managed experience and runs huge services today. Knative is not there yet but is getting better by the day.


I want to love AppEngine Flex, but it consistently takes 10+ minutes to deploy. Is this something that we can ever expect a fix for?


Network/Load-balancer programming is embarrassingly slow on Flex. We have long term projects in place that will improve it but I can't give any dates here.

Short term, we have a few tactical features in the pipeline. In the next few weeks we are rolling out "parallel build and deploy", which moves the Docker build to run in parallel with LB programming. Depending on your build, that saves a few minutes.

When doing development I usually just replace an existing version by deploying with:

gcloud app deploy --version <my-dev-version>

This keeps the same LB and VMs as before and does a gradual container swap. It is not safe for production but definitely helps when iterating.


This is exactly the reason we moved away from App Engine Flex to AWS Beanstalk. As much of a Google fan as I am, this is unacceptably slow, and there seems to be no interest from Google in reducing it. Beanstalk isn't that great either, but it's better than App Engine Flex when it comes to deployment wait times.


I’ve read through this whole thread and everyone is struggling to understand the primary use case for knative. Your comment is the most clear.

Please please please at the forefront of all docs, presentations, and blogs put something like this:

Knative’s primary use case is for you to provide your own cloud-neutral, on-prem, or hybrid-cloud serverless platform built on top of kubernetes.


It's probably harder for us to dig ourselves out of the various deep dives and rabbit holes we've been down over the past few months.

I think there are two parts to the story here. One is what Knative is for, what it can do. That's some version of "source code to event-driven system on any Kubernetes system without the tears". As with previous Big Changes there will be a cottage industry of explanations, and that is fine.

The second part of the story is: who is working on it. And that's the underrated part for me so far. You see Pivotal and Red Hat -- we are fierce competitors -- working on the same project with Googlers, IBMers and SAPers. You find folks who work on riff and OpenWhisk sitting in calls with engineers who've worked on Google Cloud Functions comparing notes on problems and solutions.

I have sat in working groups where experiences have been shared from Cloud Foundry, OpenWhisk and Google App Engine in the space of 5 minutes. I've sat in other calls with teams comparing notes on Buildpacks and S2I, Concourse and OpenShift ImageStreams ... it goes on and on.

The big story here is that Google was able to catalyse a conversation that would be very difficult to start any other way. People from contributing organisations are busily sorting out common ground that will let everyone move past this level of abstraction much quicker than would otherwise be the case.


Thanks! In my mind a key goal for Knative is to: "Free customers from serverless lock-in through industry-wide portability" I think I like your version better so maybe I should start using that :-)

We are still very early in the process so code (and comms) are a bit rough. Really appreciate your feedback and we will work to clarify things over the next few days.


How would you anticipate seeing this work in, say, AWS?


Knative depends on Kubernetes and should run well on any reasonable Kubernetes setup. I haven't tested Knative on AWS since I'm not a customer there :-)

We were thrilled to see Microsoft contribute instructions for Azure (https://github.com/knative/docs/issues/208) but we have not heard much from Amazon yet.


Any language runtime, natively. Open source. Runs anywhere.


I can't give a definitive answer as I'm not very familiar with Flex. But a lot of Google folks with App Engine backgrounds are working on Knative.


Indeed. Much of what we open sourced in Knative came from what we learned from running GAE and GCF. Which is important: this isn't a side project for us, this is based on real-world experience doing it at scale for huge customers. You don't run a serverless stack for 10+ years without taking away a few lessons. : )


When do you run into rate limiting for scaling up? We all call it serverless, but somewhere there are servers you are using, and they don't magically appear from nowhere.


Yep, no free lunch. Scaling up has a few different "cliffs". To outline a few:

1. Existing pod scheduled and ready (fastest).

2. Room on an existing node (so no docker pull needed).

3. Room in the existing cluster (so no VM creation needed).

4. No room; need to increase the cluster size.

Optimizing 1-3 is very much in scope for the Knative project. Optimizing (4) is a provider problem and will vary by provider.

The Knative Scaling working group is discussing this every week on Wednesdays and that meeting is open to the public. We also keep good notes linked from our community page: https://github.com/knative/docs/blob/master/community/WORKIN...


How do you pronounce it?

Native with a silent K? (like "knave" or "knee")

Kay-Native?

Kuh-Native?


From https://github.com/knative/docs: “Knative (pronounced kay-nay-tiv)”


What's the difference between this and kops?

https://github.com/kubernetes/kops


Knative is on top of Kubernetes and Istio.

Kops addresses the bits below Kubernetes.


Why would I choose this over kubeless?


Cold startup time and mitigation strategies like those on AWS?


Even better! Knative will even allow you to scale your containers out-of-band if you know that something BIG will happen. See https://github.com/knative/serving/issues/1656 for the discussion.


That tells me not much... is it pinging to keep instances warm?


No. The intention is that the autoscaler will support both floor and ceiling values for its work. So while scale-to-zero is the default, you can make an economic decision that you want scale-to-1, scale-to-10, whatever makes sense for your case. Pinging won't be necessary to artificially create this behaviour.

This is an example of nesting reactive control (the autoscaler) with predictive control (the min/max values).
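If that lands the way the linked issue discusses, you'd express the floor and ceiling roughly like this (the annotation names are illustrative only; the final spelling was still under debate at the time):

    # set on the revisionTemplate metadata of your Service
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # never go below one pod
        autoscaling.knative.dev/maxScale: "10"   # cap the scale-out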


Interesting... by floor and ceiling, is that like minimum and maximum thresholds for latency?

Here's my pain point. I built a serverless REST API with token authentication on Lambda. However, if many people aren't using it all the time, it will sleep, and then the next sucker who calls the endpoint is stuck waiting for the serverless instance to wake up.

In some cases even getting a token from a simple POST request would take an awful long time. This was a few years ago and I stopped using serverless since then.

But now I'm interested in serverless because I've been hearing that the cold startup problem is being reduced.

I wonder if in the future developers will just be taking core logic from a serverless repository and wiring up the components, sort of like how WordPress does it, without the crazy layers of PHP and bloat.


There are two aspects here: engineering and economic.

Our engineering view is that we want startup to be as fast as possible, which is a surprisingly nuanced problem with lots of moving parts that need to collectively do something smart, even before you get to the startup time of your own code. This will show a lot of improvement as we go, but right now it's early days.

The economic question is about trading off the risk of hitting a slow start vs the cost of maintaining idle instances. It is impossible for Knative's contributors to solve that problem with a black box solution. What we can do is to provide you with some knobs and dials to express your preferences.

Edit: I didn't answer this question --

> interesting...by floor and ceiling is that like the minimum and maximum threshold for latency?

Not for now, this would be bounds on what scale the autoscaler can choose. Latency is an example of an output setpoint that an autoscaler could attempt to control, as opposed to a process input. We have in mind to make autoscaling somewhat pluggable because different people want to target different signals.


I recently tweeted about these abstractions over abstractions here: https://twitter.com/sahrizv/status/1018184792611827712

Not downplaying the team's effort and the immense utility of this to many companies.


Rather than linking to your own tweet, perhaps you could post the comment here and save us all a click?


Fair point, but the medium changes the message sometimes. :-)

The tweet resonated with a sizeable part of the developer community hence linked here.


Digital Ocean is about to ship a similar product.

https://www.digitalocean.com/products/kubernetes/


No, Digital Ocean's Kubernetes product will be similar to the existing Google Kubernetes Engine... this is much more high-level.


Yeah, you are right. My bad.


That looks more like GKE or AKS.



