Better late than never: I started this blog post back in late October after GitHub Universe in San Francisco. I stopped by the Google office in Seattle on my way home and was able to meet with some of the Knative team to chat about all this serverless hype. I had been interested in what the Knative community was building and how the project would impact Kubernetes. Knative is being developed by the likes of Google, Pivotal, SAP, Red Hat, IBM, and many others. You can visit the GitHub Community Page, and also check out a recent blog by Mark Chmarny from Google about the growth of the project and some adoption milestones.
When it comes to serverless, you probably heard about the AWS Lambda service first. The idea, for the most part, is “just run my code”: don’t make me think about all the other aspects of running applications. Hint – they are still containers behind the scenes. You will also start to hear (or already have heard) everyone talking about functions. It’s all very confusing when you take in the buzzwords and read various vendors describing their capabilities and vision.
Now that I have been able to read up on Knative, follow along on the community Slack channel, and try it out for myself, I have a better-defined explanation when people ask me what it’s all about.
My take is that Knative is about delivering an enhanced developer experience to the Kubernetes community. You don’t have to declare every configuration option available; declare only the required options, and Knative will make opinionated choices for the rest and deliver what you need to run your application. The first time I saw this approach was with OpenShift and the “oc new-app” command: you could point it at a Docker image and OpenShift would do the rest, even exposing the service and route required to hit your application. Knative takes this approach to the next level. It’s designed to offer the building blocks developers need without having to worry about all the difficult low-level requirements to build, deploy, and manage cloud-native applications. It also supports many of today’s modern programming languages and frameworks “out of the box”.
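To make the “declare only what you need” idea concrete, here is a minimal sketch of a Knative Service manifest. The service name `hello` is hypothetical, the container image is the helloworld-go sample from the Knative docs, and the exact `apiVersion` depends on which Knative release you have installed. Everything else (revisions, routes, autoscaling) is filled in by Knative’s opinionated defaults:

```yaml
apiVersion: serving.knative.dev/v1   # version varies by Knative release
kind: Service
metadata:
  name: hello                        # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample app from the Knative docs
          env:
            - name: TARGET
              value: "Knative"
```

Apply it with `kubectl apply -f service.yaml` and Knative creates the revision, route, and URL for you, much like `oc new-app` does on OpenShift.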
My take is that the serverless aspects are mainly focused on the ability to automatically scale to zero when your applications are not receiving traffic, without having to worry about all the low-level things required when managing apps on Kubernetes. The nice part is that when your service starts to receive traffic, it will spin up and scale as required to respond to the increasing requests. You still need a Kubernetes cluster to run your application workload(s), but as you manage lots of application endpoints you will certainly get more out of your cluster. Google Kubernetes Engine (GKE) also allows worker nodes to scale up and down as required to handle load events, delivering even more cost effectiveness.
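The scale-to-zero behavior described above can be tuned per service with annotations on the revision template. A sketch, assuming the same hypothetical `hello` service and the sample image from the Knative docs (annotation names are the standard `autoscaling.knative.dev` keys; the numbers are illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                        # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # cap replicas during traffic bursts
        autoscaling.knative.dev/target: "100"   # target concurrent requests per replica
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

With `minScale: "0"`, idle services consume no pod resources at all; the first request after an idle period triggers a cold start while the autoscaler brings a replica back up.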
By now you have surely heard about Istio; it’s a critical part of what Knative brings to the table. Have a look at this previous blog post that explains the foundational components. More to come on the Istio topic…
Core Features of Knative

Knative is made up of three core components: Build (turning source code into container images on the cluster), Serving (request-driven deployment, routing, and scale-to-zero), and Eventing (binding event sources to your services).
I am making more time to play with Knative, so keep an eye out for some technical demos coming soon.
Interested in learning more about Knative and taking a “from source code” approach to your cloud-native applications? We would love to hear from you.