Hacker News
Kong gateway reaches 1.0 GA, now supports service mesh (konghq.com)
150 points by hisham_hm on Dec 20, 2018 | 49 comments



Hi all,

I was the lead developer on the service mesh implementation. I've just pushed https://github.com/Kong/kubernetes-sidecar-injector public, which should make deploying Kong as a service mesh on Kubernetes simple.
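For anyone who wants to try it, the usual mutating-webhook sidecar flow looks roughly like this. The manifest path and label name below are illustrative assumptions, not the injector's actual names; check the repo README for the real ones.

```shell
# Rough sketch of the standard sidecar-injection flow on Kubernetes.
# The manifest path and label name are assumptions for illustration;
# see the kubernetes-sidecar-injector README for the exact ones.

# 1. Deploy the injector (a mutating admission webhook) into the cluster:
kubectl apply -f deploy/   # from a checkout of Kong/kubernetes-sidecar-injector

# 2. Opt a namespace in to injection:
kubectl label namespace default kong-sidecar-injection=enabled

# 3. Re-create pods in that namespace so the webhook can add the Kong
#    proxy container alongside each application container:
kubectl delete pod -l app=my-app -n default
```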

Let me know if you have any questions; I'll try to check in over the next few days.


I see that your Kong service mesh uses nginx as the underlying proxy. Are there any plans to use Envoy proxy in the future? Currently Istio and Linkerd2 are based on Envoy. Unlike nginx, Envoy doesn't have an enterprise version where some enterprise features are held back. Envoy also has very good observability and traffic management features.


Linkerd2 is not based on Envoy :) It has its own proxy written in Rust: https://github.com/linkerd/linkerd2-proxy


Curious to see if Kong can do true hitless reloads without a proxy restart. Envoy can do this, but I thought only NGINX Plus had this capability?


Yes. Kong's entity configuration (including upstream services) is managed via the runtime, not by reading a config file. This allows proxy configuration to be updated without restarting the process itself, enabling zero-downtime config changes.
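For example, a sketch assuming a Kong node with the Admin API on its default localhost:8001; the service names and hosts are made up for illustration:

```shell
# Register an upstream service and a route for it; the running proxy
# picks this up without any restart or reload.
curl -i -X POST http://localhost:8001/services \
  --data name=example-svc \
  --data url=http://example.internal:8080

curl -i -X POST http://localhost:8001/services/example-svc/routes \
  --data 'paths[]=/example'

# Later, repoint the upstream in place, again with no restart:
curl -i -X PATCH http://localhost:8001/services/example-svc \
  --data url=http://example-v2.internal:8080
```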


Kong indeed can; it's been part of our API gateway for years :)


I used Kong in a project recently. Very bare-bones; you need to build your own client for it. Overall, it felt like a black box where things magically happened. Totally hated it.


Hello, CTO of Kong here. You mean a client to consume the Admin API in order to create entities on Kong? If that's the case, the community has built lots of open source clients in pretty much any language, which make the job of integrating Kong with an existing system much easier. The community around Kong has also built declarative configuration support that you can use instead of the Admin API.

Our official declarative configuration will also be released very soon, and in some environments (like Kubernetes) we already support it.


Excited to hear about an official declarative config. When we were looking at deploying Kong, this was our biggest pain point (we looked at lots of third-party solutions, but none were ideal).


I am surprised you did not try to upsell the Enterprise version.


It is a community product first and foremost with a plugin framework that allows for extensibility with access to the full core API.

You can discuss with the community online [1], or meet a core contributor at one of the community meetups or in the monthly community call that we usually announce on Discuss.

[1] https://github.com/Mashape/kong or https://discuss.konghq.com/


What about performance? Did you use it enough to get a sense of that? To me, nothing beats HAProxy so far.


They are different products; nginx and HAProxy are more comparable. If you like HAProxy as a load balancer, you can easily use it with Kong. A lot of people do just that.


But then what is Kong doing?


Great work guys! I see that "Kong’s core router can now route raw TCP traffic."

Can you elaborate? What kinds of routing parameters and rules can Kong use to route raw TCP traffic?


You can define a route to be either L7 ("http", "https") or L4 ("tcp", "tls"). For L7 routes you can use the usual routing parameters that were already available in Kong, like routing by host, by path (prefix or regex-based) and/or by method. For stream routing, you can route by source IP and port, by destination IP and port, and/or by SNI in the case of TLS connections.


Since we now support Service Mesh, any TCP traffic is supported. The full documentation can be found at https://docs.konghq.com/


Congrats! I worked for Kong's VP of Engineering a couple of years ago at OpenDNS. He's one of the best managers I've seen.


What is the relevance of that to this news?

And could you name a few traits of this person that make him/her stand out?


So I admittedly don't know much about this stuff, but what would be the difference between using Kong and the Nginx ingress controller? What advantages/improvements would I see/be able to use?


Kong is different from the Nginx ingress controller. Without a controller, you have to manually register each endpoint/service, and you'd want to bypass the k8s Service and use pod IPs directly. That's why they built the Kong ingress controller.

The main difference is the plugin API: it's very easy to write a plugin for Kong. The second is where data is persisted. Kong stores data in a database, so it can do things with it. The Nginx ingress controller as of 0.21 has dynamic backends; it basically holds in-memory objects for the API routing rules.

Kong shines when you have complex routing logic or want to leverage its API key authentication. For example, you can easily expose a service with API keys stored in the database, whereas with the Nginx ingress you have to write a lot of `auth access` rules and store keys in a ConfigMap or environment variables.

Nginx ingress configuration is all about watching ConfigMaps/annotations and regenerating config: for example, when a new service is added, the config is regenerated (when pods are added or removed, it uses Lua for routing, so there's no reload for that). In Kong these changes are seamless, with no reload; all data is stored in either Postgres or Cassandra.

That said, Kong is very nice, but it adds more overhead than a simple Nginx ingress.
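The API-key flow described above can be sketched with Kong's Admin API; this assumes a Kong node on its default ports (8001 admin, 8000 proxy) and a service named "my-svc" with a route already configured, all names being illustrative:

```shell
# Enable key-auth on the service:
curl -i -X POST http://localhost:8001/services/my-svc/plugins \
  --data name=key-auth

# Create a consumer and issue it a key (stored in Kong's database):
curl -i -X POST http://localhost:8001/consumers --data username=alice
curl -i -X POST http://localhost:8001/consumers/alice/key-auth \
  --data key=alice-secret-key

# Callers now authenticate with the key; no nginx config files involved:
curl -i http://localhost:8000/my-svc -H 'apikey: alice-secret-key'
```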


Kong's K8s Ingress Controller lets you configure and run plugins (custom code) on your proxy traffic. This gives you a lot of power over how you'd like to route, authenticate, and shape your traffic.

Nginx gives you the ability to tweak functionality, but it's not as dynamic or as easy out of the box.


> Nginx gives you the ability to tweak functionality, but it's not as dynamic or as easy out of the box.

Nginx supports Lua and JavaScript, embedded and out of the box, in many distributions.


That is through a set of additional plugins. Kong uses that base and ships with mature plugins built on it, including a robust RESTful API for managing configuration.


> That is through a set of additional plugins.

Which are compiled and enabled in several distribution packages.


How does Kong compare to Tyk?


(Caveat: I’m the CEO of Tyk)

Tyk offers a more “batteries included” approach than Kong, and so doesn’t rely on external plugin authors to extend the ecosystem. 100% of our dev team works constantly on our open source components, and we like to keep it that way.

Because of that, Tyk isn’t “open core” like Kong is; there’s no lock-in or levers to get you to buy our value-adds like our Management Dashboard GUI or our Multi-Data-Center clustering add-on. You should be able to do all API management without having to pay us a penny.

A simple example is OpenID Connect support: this is a Kong Enterprise plugin, whereas with Tyk it comes as part of the normal gateway.

In terms of performance, Tyk and Kong are pretty close now (Tyk pre-2.6 was slower), but we believe we now have parity, especially when switching on things like analytics, auth and rate limiting.

Tyk works very well in k8s, though we don’t have a Helm chart yet (coming soon).

You can also deploy Tyk as pure SaaS (fully managed), hybrid cloud (we handle the back end and control plane; you install gateways local to your services) and fully on-prem (install anywhere: K8s, AWS, GCP, Azure, even on Arm servers). We’re unique in that regard.

Tyk has always been separated into control-plane and operations-level components (our gateway is very small), so we don’t see that as something new to crow about. If you use our Dashboard, it moves the configuration and data layer out of the gateways and centralizes it. If you use our MDCB system (enterprise), you can extend that capability across clusters in different clouds to get really targeted, distributed API governance.

There’s a bunch of other things that are different too, but they are more functional.


> you should be able to do all API Management without having to pay us a penny.

I contacted Kong sales once about OpenID connect support, they basically dismissed us as too small. Needless to say we took Kong out of our stack and won't consider it again.


That's unfortunate; there is a good open source one that I have used and it works well. The enterprise one is definitely more verbose, but the open source one does the job!


Kong has an OpenID Connect plugin that comes with Enterprise, but there are at least three OpenID Connect plugins made by the community; one of them is from Nokia.

I work for Kong and I am the author of that Enterprise OpenID Connect plugin. Maybe that is the biggest exception for me, as otherwise almost all of my code and related work goes directly to OSS.


For the interested, that Nokia OpenID Connect plugin is here [0]. I had to hack around with it a bit to get some extra functionality and support ADFS [1].

[0] https://github.com/nokia/kong-oidc

[1] https://github.com/philbarr/kong-oidc-adfs


Thanks for this comparison, never heard of Tyk before.

OT: Your frontpage is very slow (Chrome v.71) - it loads but takes ages to scroll down.


Thanks - we’re painfully aware and are looking to change it so it’s more usable :-/


That sky background is a 3MB 1080p video file. Very unnecessary, and removing it will fix the problem. Also, please remove the scroll interference; it just makes it hard to move through the site at my own pace.


If nothing else, your site helped me uncover some kind of bug in Firefox's experimental compositor.

https://bugzilla.mozilla.org/show_bug.cgi?id=1515787


I've used both; Kong is more "pluggable" and easier to extend in my experience. Also, the ecosystem and community around Kong are much stronger than Tyk's, which is amazing.


I'd argue Tyk is easier to extend, as they include many plugins already baked into their OSS gateway. If you want to extend Kong with custom plugins, I think Lua is your only option. With Tyk you can use pretty much any programming language to write your middleware. Gets my vote.


It was messy and bloated; we had many different plugins built in different languages and couldn't share patterns and code.

A single language simplified our architecture, fewer plugins meant less bloat, and ultimately we moved faster with Kong.

Engineers also really enjoyed that they got to learn something new, and with Kong they could get into C easily as well, thanks to LuaJIT.


Hello, Marco CTO of Kong here.

Kong is arguably more popular than Tyk (and other similar gateways) when it comes to adoption (55M+ downloads and more than 70,000 instances of Kong running per day across the world), and faster when it comes to performance. BBVA, a large banking group, wrote a technical blog post a while ago comparing Kong's and Tyk's performance: https://www.bbva.com/en/api-gateways-kong-vs-tyk/

Kong OSS is 100% open source, not limited to non-commercial use.

Kong is basically a programmable runtime that can be extended with plugins [1]. There are more than 500 plugins available on GitHub that we are (slowly) adding to the official Hub, along with over 5,000 contributions. You can talk to the community at https://discuss.konghq.com/

Kong is also lightweight with a lower footprint, which is required to support both traditional API gateway use cases and modern microservices environments (Kubernetes sidecar, for example). Because of that, our users are basically using one runtime for both N-S traffic (traditional API Gateway usage) and E-W traffic within a microservice oriented architecture. You can easily separate data and control planes to grow to thousands of Kong nodes running in a system.

There are users/customers running 1M+ TPS on top of distributed Kong clusters spanning different platforms (containers, multi-cloud, even bare metal) with less than 1ms of processing latency per request. One reason for this is that with Kong you can include or exclude the plugins you don't use, instead of having a heavier all-in-one runtime like many gateways do.

As a result of Kong's adoption, the business is also growing very rapidly, which will allow us to better deliver OSS features moving forward :) [2]

You can ping me at https://twitter.com/subnetmarco

[1] https://docs.konghq.com/hub

[2] https://konghq.com/about-kong-inc/kong-hits-record-growth-20...


How does 1.0 handle database migrations?


Kong runs at a critical point in any infrastructure. Having any sort of downtime is usually unacceptable.

To avoid downtime due to Kong upgrades, Kong now supports a blue-green deployment method where two Kong nodes, of version A and version A+1, can run together while the upgrade is rolled out, before all traffic is switched to A+1.


Kong has a migrations framework that handles them for you. If you're doing a migration because you're upgrading from an older version of Kong, you'll want to do that only once the Kong nodes are upgraded.

For example, here are the instructions for upgrading to 1.0, which walk you through a couple of migration scenarios: https://discuss.konghq.com/t/kong-1-0-is-now-generally-avail...
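My understanding of the 1.0 flow is that it splits migrations into two phases so the blue-green upgrade described above works; this is a sketch, and the linked post has the authoritative steps:

```shell
# Fresh database only: create the schema from scratch.
kong migrations bootstrap

# Upgrading: apply the non-destructive migrations first. Old (A) and
# new (A+1) nodes can now run against the same database.
kong migrations up

# ...roll out the A+1 nodes, shift traffic, retire the A nodes...

# Then apply the remaining, destructive migration steps:
kong migrations finish
```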



What would be the best way to deploy this with my application in Kubernetes?


There are a few options to deploy Kong as your API Gateway:

Kong can work as an Ingress Controller for your Kubernetes cluster and run plugins on your traffic at the Ingress layer. Refer to the repo for more details: https://github.com/kong/kubernetes-ingress-controller

You can simply deploy it as an application using the Helm chart: https://github.com/helm/charts/tree/master/stable/kong
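The Helm route, for example, is a one-liner; a sketch using Helm 2-era syntax (release name is illustrative; see the chart README for values):

```shell
# Refresh the stable repo index, then install the Kong chart:
helm repo update
helm install stable/kong --name my-kong
```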


If you're using Kubernetes, you should check out Ambassador. Declarative syntax, no database (persists to etcd), very high performance, built on Envoy Proxy.



I’ll need to give it a try.


Great news! Thanks for sharing.



