Quick and dirty way to run Pods

Just as we can execute docker run to create containers, kubectl allows us to create Pods with a single command. For example, if we'd like to create a Pod with a Mongo database, the command is as follows.

kubectl run db --image mongo  

You'll notice that the output says that deployment "db" was created. Even though we asked only for a Pod, Kubernetes created more than that. It created a Deployment and a few other things. We won't go into all the details just yet. What matters, for now, is that we created a Pod. We can confirm that by listing all the Pods in the cluster:

kubectl get pods  

The output is as follows:

NAME                READY STATUS            RESTARTS AGE
db-59d5f5b96b-kch6p 0/1   ContainerCreating 0        1m  

We can see the name of the Pod, its readiness, its status, the number of times it restarted, and how long it has existed (its age). If you were fast enough, or your network is slow, the Pod might not be ready yet. We expect to have one Pod, but zero are running at the moment. Since the mongo image is relatively big, it might take a while until it is pulled from Docker Hub.
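If you'd rather not repeat the same command until the status changes, kubectl can watch the resources for you. A minimal sketch (-w is short for --watch; press ctrl+c to stop watching):

kubectl get pods -w

Either way, after a while, we can retrieve the Pods one more time to confirm that the Pod with the Mongo database is running.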

kubectl get pods  

The output is as follows:

NAME                READY STATUS  RESTARTS AGE
db-59d5f5b96b-kch6p 1/1   Running 0        6m  

We can see that, this time, the Pod is ready and we can start using the Mongo database.
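Earlier, we mentioned that kubectl run created a Deployment and a few other things. If you're curious, a single command lists the major resource types in the current Namespace. A quick sketch (the exact output depends on your Kubernetes version):

kubectl get all

Among other things, you should see the db Deployment, the ReplicaSet it created (its hash is part of the Pod's name), and the Pod we just listed.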

We can confirm that a container based on the mongo image is indeed running inside the cluster.

eval $(minikube docker-env)
    
docker container ls -f ancestor=mongo  

We evaluated the minikube variables so that our local Docker client uses the Docker server running inside the VM. Then, we listed all the containers based on the mongo image. The output is as follows (IDs are removed for brevity):

IMAGE COMMAND                  CREATED       STATUS       PORTS NAMES
mongo "docker-entrypoint.s..." 5 minutes ago Up 5 minutes       k8s_db_db-...

As you can see, the container defined in the Pod is running.
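We'll keep using the VM's Docker daemon for now. If you'd like to point your local Docker client back to the host's daemon later on, minikube can print the commands that undo the change. A small sketch (-u is short for --unset):

eval $(minikube docker-env -u)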

Figure 3-1: A Pod with a single container

That was not the best way to run Pods, so we'll delete the Deployment which, in turn, will delete everything it envelops, including the Pod.

kubectl delete deployment db  

The output is as follows:

deployment "db" deleted  
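If we list the Pods one more time, the one created by the Deployment should be gone or, if we were quick, in the Terminating state.

kubectl get pods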

Why did I say that was not the best way to run Pods? We used the imperative way to tell Kubernetes what to do. Even though there are cases when that might be useful, most of the time we want to leverage the declarative approach. We want a way to define what we need in a file and pass that information to Kubernetes. That way, we have a documented and repeatable process that can (and should) be version controlled as well. Moreover, the kubectl run command we executed was reasonably simple. In real life, we need to declare much more than the name of the Deployment and the image. Imperative commands can quickly become too long and, in many cases, very complicated. Instead, we'll write specifications in YAML format. Soon, we'll see how we can accomplish a similar result using declarative syntax.
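As a preview, a declarative Pod definition is nothing more than a YAML file that describes the desired state. A minimal sketch follows (the file name db.yml is arbitrary, and the fields are standard Pod spec fields we'll explore in detail soon):

apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: mongo

If that definition were saved as db.yml, we could pass it to the cluster with kubectl create -f db.yml, and Kubernetes would create the Pod exactly as described.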