Welcome to the Kubernetes World

Unlike typical software, which usually evolves piece by piece, Kubernetes got a kick-start: it was designed based on years of experience with Google's internal large-scale cluster management systems, such as Borg and Omega. That is to say, Kubernetes was born equipped with many best practices from the container orchestration and management field. Since day one, the team behind it understood the real pain points and came up with sound designs for tackling them. Concepts such as pods, one IP per pod, declarative APIs, and controller patterns, among others first introduced by Kubernetes, seemed a bit "impractical" at the time, and some people questioned their real value. However, five years later, those design rationales remain unchanged and have proven to be the key differentiators from other software.

Kubernetes resolves all the challenges mentioned in the previous section. Some of the well-known features that Kubernetes provides are:

  • Native support for application life cycle management

    This includes built-in support for application replication, autoscaling, rollouts, and rollbacks. You describe the desired state of your application (for example, how many replicas and which image version), and Kubernetes automatically reconciles the actual state to match the desired state. Moreover, during a rollout or rollback, Kubernetes gradually replaces old replicas with new ones to avoid application downtime.
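    For instance, a Deployment manifest captures this desired state declaratively. The name, image, and replica count below are only illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # desired number of replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21   # illustrative image and version
        ports:
        - containerPort: 80
```

    Changing `replicas` or `image` in this manifest and re-applying it is all that is needed; Kubernetes works out the steps to converge the running state to the new desired state.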

  • Built-in health-checking support

    By implementing "health check" hooks, you can define when containers should be viewed as ready, alive, or failed. Kubernetes only starts directing traffic to a container when it is both healthy and ready, and it automatically restarts unhealthy containers.
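    These hooks are expressed as probes in a container spec. The following fragment of a pod's container definition is a hedged sketch; the /healthz endpoint, port, and timings are illustrative:

```yaml
containers:
- name: web
  image: nginx:1.21        # illustrative image
  readinessProbe:          # gate traffic until the app is ready
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:           # restart the container if this starts failing
    httpGet:
      path: /healthz
      port: 80
    periodSeconds: 15
```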

  • Service discovery and load balancing

    Kubernetes provides internal load balancing between the different replicas of a workload. Since containers can fail and be replaced at any time, Kubernetes doesn't rely on their IPs for direct access. Instead, it uses an internal DNS and exposes each Service with a DNS record for communication within the cluster.
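    As an illustrative sketch, a Service manifest such as the following (the names are placeholders) gives all pods labeled app: web a single stable DNS name, roughly web.default.svc.cluster.local in the default namespace, no matter which pod IPs currently back it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web          # clients resolve this name via cluster DNS
spec:
  selector:
    app: web         # load-balances across all pods with this label
  ports:
  - port: 80         # port clients connect to
    targetPort: 8080 # port the containers actually listen on
```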

  • Configuration management

    Kubernetes uses labels to describe machines and workloads. Kubernetes components respect these labels to manage containers and dependencies in a loosely coupled, flexible fashion. Moreover, these simple but powerful labels underpin advanced scheduling features (for example, taints/tolerations and affinity/anti-affinity).

    In terms of security, Kubernetes provides the Secret API to allow you to store and manage sensitive information. This helps application developers associate credentials with their applications securely. From a system administrator's point of view, Kubernetes also provides varied options for managing authentication and authorization.

    Moreover, options such as ConfigMaps provide fine-grained mechanisms for building a flexible application delivery pipeline.
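    A sketch of how these configuration pieces fit together follows; every name and value below is made up for illustration. A ConfigMap carries non-sensitive settings, a Secret carries credentials, and a pod consumes both as environment variables:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info          # illustrative plain-text setting
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: changeme    # illustrative credential
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myorg/app:1.0   # illustrative image
    envFrom:               # inject both sources as environment variables
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secret
```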

  • Network and storage abstraction

    Kubernetes initiated standards that abstract the network and storage specifications, known as the Container Network Interface (CNI) and the Container Storage Interface (CSI). Each network or storage provider implements these interfaces. This mechanism decouples Kubernetes from heterogeneous providers, so end users can use standard Kubernetes APIs to orchestrate their workloads in a portable manner.
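    For instance, a workload can request storage through a PersistentVolumeClaim without naming any particular vendor; whichever CSI driver backs the (here hypothetical) storage class fulfills the claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce          # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi         # illustrative size
  storageClassName: standard   # hypothetical class; provided by the cluster's CSI driver
```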

Under the hood, there are some key concepts supporting the previously mentioned features, and, more critically, Kubernetes provides different extension mechanisms for end users to build customized clusters or even their own platforms:

  • The Declarative API

    The Declarative API is a way to describe what you want to be done. Under this contract, we just specify the desired final state rather than describing the steps to get there.

    The declarative model is widely used in Kubernetes. It not only enables Kubernetes' core features to function in a fault-tolerant way but also serves as a golden rule to build Kubernetes extension solutions.

  • Concise Kubernetes core

    It is common for a software project to grow bigger over time, especially for famous open source software such as Kubernetes, and more and more companies are getting involved in its development. Fortunately, since day one, the forerunners of Kubernetes set some baselines to keep its core neat and concise. For example, instead of binding to a particular container runtime (for example, Docker or containerd), Kubernetes defines the Container Runtime Interface (CRI) so that it stays technology-agnostic and users can choose which runtime to use. Also, by defining the Container Network Interface (CNI), it delegates the pod and host network routing implementation to different projects, such as Calico and Weave Net. In this way, Kubernetes keeps its core manageable while encouraging more vendors to join, so end users have more choices and can avoid vendor lock-in.

  • Configurable, pluggable, and extensible design

    All Kubernetes components provide configuration files and flags for users to customize their functionality, and each core component is implemented to adhere strictly to the public Kubernetes API. Advanced users can implement part of, or even an entire, component themselves to fulfill a special requirement, as long as it conforms to the API. Moreover, Kubernetes provides a series of extension points for extending its features, as well as for building your own platform.

In the course of this book, we will walk you through the high-level Kubernetes architecture, its core concepts, best practices, and examples to help you master the essentials of Kubernetes, so that you can build your applications on Kubernetes, and also extend Kubernetes to accomplish complex requirements.

Activity 1.01: Creating a Simple Page Count Application

In this activity, we will create a simple web application that counts the number of visitors. We will containerize this application, push it to a Docker image registry, and then run the containerized application.

A PageView Web App

We will first build a simple web application to show the pageviews of a particular web page:

  1. Use your favorite programming language to write an HTTP server to listen on port 8080 at the root path (/). Once it receives a request, it adds 1 to its internal variable and responds with the message Hello, you're visitor #i, where i is the accumulated number. You should be able to run this application on your local development environment.

    Note

    In case you need help with the code, we have provided a sample piece of code written in Go, which is also used for the solution to this activity. You can get this from the following link: https://packt.live/2DcCQUH.

  2. Compose a Dockerfile to build the HTTP server and package it along with its dependencies into a Docker image. Set the startup command in the last line to run the HTTP server.
  3. Build the Dockerfile and push the image to a public Docker image registry (for example, https://hub.docker.com/).
  4. Test your Docker image by launching a Docker container. You should use either Docker port mapping or the internal container IP to access the HTTP server.

You can test whether your application is working by repeatedly accessing it using the curl command as follows:

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #1.

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #2.

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #3.

Bonus Objective

So far, we have put into practice the Docker basics that we learned in this chapter. However, we can demonstrate the need to link different containers by extending this activity.

For an application, usually, we need multiple containers to focus on different functionalities and then connect them together as a fully functional application. Later on, in this book, you will learn how to do this using Kubernetes; however, for now, let's connect the containers directly.

We can enhance this application by attaching a backend datastore to it. This will allow it to persist its state even after the container is terminated, that is, it will retain the number of visitors. If the container is restarted, it will continue the count instead of resetting it. Here are some guidelines for building on top of the application that you have built so far.

A Backend Datastore

We may lose the pageview number when the container dies, so we need to persist it into a backend datastore:

  1. Run one of the three well-known datastores: Redis, MySQL, or MongoDB within a container.

    Note

    The solution to this activity can be found at the following address: https://packt.live/304PEoD. We have implemented Redis for our datastore.

    You can find more details about the usage of the Redis container at this link: https://hub.docker.com/_/redis.

    If you wish to use MySQL, you can find details about its usage at this link: https://hub.docker.com/_/mysql.

    If you wish to use MongoDB, you can find details about its usage at this link: https://hub.docker.com/_/mongo.

  2. You may need to run the container using the --name db flag to make it discoverable. If you are using Redis, the command should look like this:

    docker run --name db -d redis

Modifying the Web App to Connect to a Backend Datastore

  1. Every time a request comes in, you should modify the logic to read the pageview number from the backend datastore, add 1 to it, and respond with a message of Hello, you're visitor #i, where i is the accumulated number. At the same time, store the updated pageview number back in the datastore. You may need to use the datastore's specific Software Development Kit (SDK) to connect to it. You can put the connection URL as db:<db port> for now.

    Note

    You may use the source code from the following link: https://packt.live/3lBwOhJ.

    If you are using the code from this link, ensure that you modify it to map to the exposed port on your datastore.

  2. Rebuild the web app with a new image version.
  3. Run the web app container using the --link db:db flag.
  4. Verify that the pageview number is returned properly.
  5. Kill the web app container and restart it to see whether the pageview number gets restored properly.

Once you have created the application successfully, test it by accessing it repeatedly. You should see it working as follows:

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #1.

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #2.

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #3.

Then, kill the container and restart it. Now, try accessing it. The state of the application should be persisted, that is, the count must continue from where it was before you restarted the container. You should see a result as follows:

root@ubuntu:~# curl localhost:8080

Hello, you're visitor #4.

Note

The solution to this activity can be found at the following address: https://packt.live/304PEoD.