Modern best practice for application development has shifted toward running your application in containers. A container bundles all of the application's requirements into one artifact, giving it the ability to run almost anywhere. If it works in a container on the development machine, it will work in production. Gone are the days of “It works on my machine!”
Getting your applications from code to container is not always an easy task. The process typically involves running docker build, docker push, and docker run, but this does not scale well. A typical enterprise solution is to set up a Continuous Integration (CI) pipeline that is triggered when code is pushed to a git repository and then performs the docker build, docker push, and docker run steps. This is a good basic pipeline, but once you introduce code linting, static analysis, code-smell detection, and integration testing, it can get complicated very quickly. You also now need people to maintain the pipeline and the infrastructure it runs on, and that is just for one application. Large enterprises can have hundreds or more.
Kubernetes has become the de facto standard for deploying containers. While Kubernetes is great at container orchestration, learning how to interact with it can present a large barrier to entry for a developer who just wants to deploy containers and test code.
Enter Cloud Build
Cloud Build is a managed solution from Google Cloud that allows you to configure a pipeline to build your application from source code and output to a container image. You can even deploy the resulting container and test your application using Cloud Run, making it easy to test your new code without ever touching docker, kubectl, or the GCP Console.
To use Cloud Build, you simply add a Dockerfile and a cloudbuild.yaml to the application's GitHub, Bitbucket, or GCP Cloud Source repository. Cloud Build's trigger feature starts a build when a new commit arrives that matches a filter you configure, so you can build only certain branches or tags.
The cloudbuild.yaml file contains the ‘steps’ of the pipeline. Each step has two parts: the container to run the command in, and the arguments to pass to it. Every step runs in a shared workspace directory, which persists between all of the steps in the pipeline. These steps can do anything you can run in a container. Common tasks are linting, static analysis, integration testing, compiling, building, and pushing a container image to a registry.
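As a sketch, a minimal cloudbuild.yaml implementing the build-push-deploy flow described above might look like the following. The service name my-app and the region are placeholders; $PROJECT_ID and $COMMIT_SHA are substitutions Cloud Build provides automatically.

```yaml
steps:
# Build the container image from the Dockerfile in the repository root
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
# Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
# Deploy the new image to a fully managed Cloud Run service
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'my-app',
         '--image', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA',
         '--region', 'us-central1', '--platform', 'managed']
# Record the built image in the build results
images:
- 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```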
This pipeline builds a container using the Dockerfile and deploys it to a Cloud Run managed instance. When the code is committed and the build is triggered, all the developer has to do is go to the URL in a browser to see the result. With the image built, we can now deploy it to our cluster.
Cloud Run is the GCP implementation of the open source Knative Serving project, with a few added features such as a console interface and the ability to deploy to a fully managed environment on GCP. With the fully managed product, there are no Google Compute Engine instances or Kubernetes clusters to manage. Your application scales up and down with demand, including down to zero instances when there are no requests, saving on costs.
If you need to have more control over the environment where your container runs, Cloud Run supports deploying to GKE clusters. Installing Cloud Run on your GKE cluster is as easy as clicking a checkbox during cluster creation. If you are an Anthos user, you can even deploy to your on-prem cluster.
Knative aims to abstract away the complexities of deploying an application on Kubernetes. It takes advantage of the Istio service mesh and makes it simple to use the advanced features of Istio. Knative easily enables Blue-Green deployments, canary testing, and scaling down to 0 running containers if there isn’t any traffic.
Cloud Run makes it easy to deploy your container anywhere, but deploying your container everywhere can be a challenge. It is simple to deploy to a single Kubernetes cluster, but what if there are 5, 50, or 500 clusters to deploy to? Applying the same configuration to multiple Kubernetes clusters is time-consuming and subject to human error. Can the deployer guarantee that a yaml was deployed to all clusters simultaneously and without error?
Enter Anthos Config Management.
Anthos Config Management
Love it or hate it, Kubernetes uses YAML or JSON to define the desired state of the resources deployed. This has the advantage that changes to these files can be tracked in a Source Control Management (SCM) system like git. A peer-review process can ensure that only configuration approved by multiple people makes it into the master branch. However, these files still need to be applied to the Kubernetes clusters. This can be done by creating a pipeline to deploy to the clusters, but what prevents configuration drift when configuration is applied outside of the pipeline?
Anthos Config Management (ACM) was developed to fill a void in container management on Kubernetes. ACM monitors a properly structured git repository and applies the changes on the Kubernetes cluster where it is running. It can run on one Kubernetes cluster or a hundred, as long as the git server can handle the load. Any changes to the configured branch are immediately applied to the clusters where it is running, and any configuration drift in the resources defined in the git repository is automatically set back to the desired state defined in git. To learn more about ACM, see this blog post by Christopher Markieta.
Bringing it all together
Cloud Build focuses on the developer experience and workflow, and results in a container image that can be deployed anywhere. Cloud Run configurations use the resulting container image and can be deployed via Anthos Config Management.
To showcase these features, I’ve created a simple repository where we can run canary testing and a Blue-Green deployment. You will need the following:
- A Kubernetes cluster deployed with Istio and Cloud Run (a GKE cluster in our case; check the Istio and Cloud Run boxes on creation)
- Admin access to the cluster
- kubectl installed locally for management
- Git repository for ACM
- A domain where your application will be served
- (Optional) Git credentials stored in a Kubernetes Secret if the repo is private
- (Optional) nomos installed locally for checking ACM status and repository structure creation
ACM requires a specific folder structure in the git repository it is monitoring. The nomos command can be used to set up the structure of the repository. In our case, the structure looks like this.
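A sketch of the repository layout, assuming the standard hierarchy that nomos init generates, with our files placed under a knative-demo namespace directory:

```
.
├── system/
│   └── repo.yaml
├── cluster/
├── clusterregistry/
└── namespaces/
    └── knative-demo/
        ├── namespace.yaml
        └── arctiq_weather.yaml
```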
The namespace.yaml defines the namespace that we’re deploying to and will be created when ACM is applied to the cluster.
The file arctiq_weather.yaml is what we’re interested in. For now, we’ll create it as an empty file. For further explanation of the other files, see Christopher Markieta’s blog post above.
Anthos Config Management Deployment
Download the configuration yaml and apply it to the cluster. This will install the operator to manage Anthos Config Management.
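The download-and-apply step might look like this, assuming the documented ACM release bucket location:

```shell
# Fetch the latest ACM operator manifest from the release bucket
gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml \
  config-management-operator.yaml
# Install the operator on the cluster
kubectl apply -f config-management-operator.yaml
```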
Configure ACM to connect to your git repository by replacing the contents in the <>. In this case, we’ll assume it is public, so we don't need to add credentials for git.
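A sketch of the ConfigManagement resource, with the repository and cluster details left as placeholders to replace:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: <cluster-name>
  git:
    syncRepo: <git-repo-url>    # e.g. https://github.com/<org>/<repo>.git
    syncBranch: master
    secretType: none            # public repo, so no credentials needed
    policyDir: "."
```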
Configure the domain. Use your domain here.
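With Knative Serving, the serving domain is set in the config-domain ConfigMap in the knative-serving namespace; routes then resolve as <name>.<namespace>.<domain>. A sketch, using the domain from this demo:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Replace with your own domain
  cloudrunrocks.com: ""
```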
Add the following to arctiq_weather.yaml. The metadata.name attribute will combine with the domain configured above and the namespace of the application to create our URL.
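A sketch of arctiq_weather.yaml, assuming the Knative Serving v1 API; the container image path is a placeholder for the image built earlier:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: arctiq-weather-and-map   # combines with namespace + domain to form the URL
  namespace: knative-demo
spec:
  template:
    metadata:
      name: arctiq-weather-and-map-blue   # name of this revision
    spec:
      containers:
      - image: gcr.io/<project-id>/arctiq-weather:blue   # placeholder image path
```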
After committing and pushing to git, check the status of the cluster using the nomos command. We can watch the Last Synced Token to see when the configuration has been applied to the cluster.
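Assuming nomos is installed locally and your kubeconfig points at the cluster:

```shell
# Show sync status (including the Last Synced Token) for each cluster
nomos status
```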
Now we can view the result in our browser. Our URL is http://arctiq-weather-and-map.knative-demo.cloudrunrocks.com/
Now that we have the application up, it's time to update it. Let's add a new revision; this one will have a green background. Here we’re deploying a new revision but not sending any traffic to it. It can be accessed by knowing a hidden URL that includes the traffic.tag field. This way, we can test it in our production environment without live traffic being sent to it. Write this new configuration to arctiq_weather.yaml, commit, and push to the repository.
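A sketch of the updated arctiq_weather.yaml, again assuming the Knative Serving v1 API; the green image path is a placeholder. The tag canary gives the new revision its own hidden URL of the form canary-<name>.<namespace>.<domain> while keeping 100% of live traffic on blue:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: arctiq-weather-and-map        # unchanged
  namespace: knative-demo
spec:
  template:
    metadata:
      name: arctiq-weather-and-map-green   # new revision name
    spec:
      containers:
      - image: gcr.io/<project-id>/arctiq-weather:green   # placeholder image path
  traffic:
  # All live traffic stays on the existing blue revision
  - revisionName: arctiq-weather-and-map-blue
    percent: 100
  # The green revision gets a tagged URL but no live traffic yet
  - revisionName: arctiq-weather-and-map-green
    percent: 0
    tag: canary
```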
Important items to note here:
- metadata.name is the same
- spec.template.metadata.name has changed
- spec.template.spec.containers.image has changed
- traffic has been added and uses the spec.template.metadata.name to route traffic to specific revisions. Even though arctiq-weather-and-map-blue was defined in the previous revision and not this one, we can still refer to it to route traffic.
Check the status, notice the new ‘Last Synced Token’.
Take a look at the revision history.
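Knative exposes revisions as a Kubernetes resource, so they can be listed with kubectl:

```shell
# List the revisions of our service in the demo namespace
kubectl get revisions -n knative-demo
```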
The URL for our canary deployment is http://canary-arctiq-weather-and-map.knative-demo.cloudrunrocks.com/
After we’ve verified the functionality, we can start sending traffic to it. Let's update the arctiq_weather.yaml file to send live traffic to the new instance. The typical scenario is to start a low volume (1-2%) and work your way up to a point where you’re comfortable moving all traffic to the new instance. Apply this to the arctiq_weather.yaml file, commit, and push.
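A sketch of the updated traffic section of the Service in arctiq_weather.yaml (the rest of the file is unchanged), shifting 1% of live traffic to green:

```yaml
  traffic:
  - revisionName: arctiq-weather-and-map-blue
    percent: 99
  - revisionName: arctiq-weather-and-map-green
    percent: 1
    tag: canary
```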
At this point, 1 out of 100 requests would go to our new revision. As we increase the traffic to green and reduce the traffic to blue by applying the above yaml with a new ratio, we would see more green backgrounds.
If we've run into some issues with our new application, we can always send all traffic back to the previous release. If we're confident the new revision is working, we can go all the way! Apply this to the arctiq_weather.yaml file, commit, and push.
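To cut over completely, the traffic section of arctiq_weather.yaml would shift all live traffic to the green revision (rolling back is just the reverse ratio):

```yaml
  traffic:
  - revisionName: arctiq-weather-and-map-blue
    percent: 0
  - revisionName: arctiq-weather-and-map-green
    percent: 100
    tag: canary
```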
Now we can see our new green background on the production URL.
Getting your code into a container has never been easier!
If you would like Arctiq to assist with your next Google Cloud Build or Anthos enablement project, take the first step and contact the Arctiq team today.