Creating Services by exposing ports
Before we dive into services, we should create a ReplicaSet similar to the one we used in the previous chapter. It'll provide the Pods we can use to demonstrate how Services work.
Let's take a quick look at the ReplicaSet definition:
cat svc/go-demo-2-rs.yml
The only significant difference is the db container definition. It is as follows.
...
      - name: db
        image: mongo:3.3
        command: ["mongod"]
        args: ["--rest", "--httpinterface"]
        ports:
        - containerPort: 28017
          protocol: TCP
...
We customized the command and the arguments so that MongoDB exposes the REST interface. We also defined the containerPort. Those additions are needed so that we can test that the database is accessible through the Service.
Let's create the ReplicaSet:
kubectl create -f svc/go-demo-2-rs.yml

kubectl get -f svc/go-demo-2-rs.yml
We created the ReplicaSet and retrieved its state from Kubernetes. The output is as follows:
NAME        DESIRED   CURRENT   READY   AGE
go-demo-2   2         2         2       1m
You might need to wait until both replicas are up-and-running. If, in your case, the READY column does not yet have the value 2, please wait for a while and get the state again. We can proceed after both replicas are running.
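Instead of re-running the get command by hand, one option is to watch the ReplicaSet until the READY column reaches 2 (press Ctrl+C to stop watching):

kubectl get rs go-demo-2 --watch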
We can use the kubectl expose command to expose a resource as a new Kubernetes service. That resource can be a Deployment, another Service, a ReplicaSet, a ReplicationController, or a Pod. We'll expose the ReplicaSet since it is already running in the cluster.
kubectl expose rs go-demo-2 \
    --name=go-demo-2-svc \
    --target-port=28017 \
    --type=NodePort
We specified that we want to expose a ReplicaSet (rs) and that the name of the new Service should be go-demo-2-svc. The port that should be exposed is 28017 (the port the MongoDB interface listens on). Finally, we specified that the type of the Service should be NodePort. As a result, a port will be exposed on every node of the cluster and made available to the outside world, and traffic sent to it will be routed to one of the Pods controlled by the ReplicaSet.
There are other Service types we could have used.
- ClusterIP (the default type) exposes the port only inside the cluster. Such a port would not be accessible from anywhere outside. ClusterIP is useful when we want to enable communication between Pods while preventing any external access. When NodePort is used, a ClusterIP is created automatically.
- LoadBalancer is only useful when combined with a cloud provider's load balancer.
- ExternalName maps a Service to an external address (for example, kubernetes.io).
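For reference, a minimal ClusterIP Service definition might look like the sketch below. The name go-demo-2-internal is made up for illustration; the selector matches the labels of the go-demo-2 Pods.

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-internal # hypothetical name, for illustration only
spec:
  type: ClusterIP # the default; omitting type has the same effect
  selector:
    service: go-demo-2
    type: backend
  ports:
  - port: 28017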
In this chapter, we'll focus on the NodePort and ClusterIP types. LoadBalancer will have to wait until we move our cluster to one of the cloud providers, and ExternalName has very limited usage.
The processes that were initiated with the creation of the Service are as follows:
- The Kubernetes client (kubectl) sent a request to the API server requesting the creation of a Service based on the Pods created through the go-demo-2 ReplicaSet.
- The Endpoints controller is watching the API server for new Service events. It detected that there is a new Service object.
- The Endpoints controller created an Endpoints object with the same name as the Service, using the Service's selector to identify the endpoints (in this case, the IPs and ports of the go-demo-2 Pods).
- kube-proxy is watching for Service and Endpoints objects. It detected that there is a new Service and a new Endpoints object.
- kube-proxy added iptables rules which capture traffic to the Service port and redirect it to the endpoints. For each endpoint, it adds an iptables rule which selects a Pod (we'll peek at those rules in the sketch after this list).
- The kube-dns add-on is watching for Services. It detected that there is a new Service.
- kube-dns added the db container's record to the DNS server (skydns).
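If you're curious, you can inspect those rules yourself. A rough sketch, assuming a Minikube cluster (kube-proxy tags its rules with comments that include the Service name, so we can grep for it):

minikube ssh

sudo iptables-save | grep go-demo-2-svc

exit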
The sequence we described is useful when we want to understand everything that happened in the cluster from the moment we requested the creation of a new Service. However, it might be too confusing, so we'll try to explain the same process through a diagram that more closely represents the cluster.
Let's take a look at our new Service.
kubectl describe svc go-demo-2-svc
The output is as follows:
Name:                     go-demo-2-svc
Namespace:                default
Labels:                   db=mongo
                          language=go
                          service=go-demo-2
                          type=backend
Annotations:              <none>
Selector:                 service=go-demo-2,type=backend
Type:                     NodePort
IP:                       10.0.0.194
Port:                     <unset>  28017/TCP
TargetPort:               28017/TCP
NodePort:                 <unset>  31879/TCP
Endpoints:                172.17.0.4:28017,172.17.0.5:28017
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
We can see the name and the namespace. We have not yet explored namespaces (they're coming up later) and, since we didn't specify one, it is set to default. Since the Service is associated with the Pods created through the ReplicaSet, it inherited all their labels. The selector matches the one from the ReplicaSet. The Service is not directly associated with the ReplicaSet (or any other controller) but with the Pods, through matching labels.
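We can verify that label-based association ourselves by listing the Pods that match the Service's selector, taken from the describe output above:

kubectl get pods -l service=go-demo-2,type=backend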
Next is the NodePort type, which exposes ports on all the nodes. Since NodePort automatically creates a ClusterIP as well, all the Pods in the cluster can access the TargetPort. The Port is set to 28017. That is the port other Pods can use to access the Service. Since we did not specify it explicitly when we executed the command, its value is the same as the value of the TargetPort, which is the port of the associated Pod that will receive all the requests. The NodePort was generated automatically since we did not set it explicitly. It is the port we can use to access the Service, and therefore the Pods, from outside the cluster. In most cases, it should be randomly generated; that way, we avoid clashes.
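To make the relationship between those three ports concrete, here is a sketch of what an equivalent declarative definition might look like. The nodePort shown is the one generated in this run; yours will differ, and in practice it is usually best omitted so that Kubernetes can pick a free one.

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-svc
spec:
  type: NodePort
  selector:
    service: go-demo-2
    type: backend
  ports:
  - port: 28017        # used to access the Service from inside the cluster
    targetPort: 28017  # the Pods' container port that receives the traffic
    nodePort: 31879    # exposed on every node; normally auto-generated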
Let's see whether the Service indeed works:
PORT=$(kubectl get svc go-demo-2-svc \
    -o jsonpath="{.spec.ports[0].nodePort}")

IP=$(minikube ip)

open "http://$IP:$PORT"
Git Bash might not be able to use the open command. If that's the case, replace the open command with echo. As a result, you'll get the full address that should be opened directly in your browser of choice.
We used the filtered output of the kubectl get command to retrieve the nodePort and store it in the environment variable PORT. Next, we retrieved the IP of the minikube VM. Finally, we opened the MongoDB UI in a browser through the Service's node port.
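If you prefer staying in the terminal, a curl request against the same address should return a response from MongoDB's REST interface:

curl "http://$IP:$PORT"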
As I already mentioned in the previous chapters, creating Kubernetes objects using imperative commands is not a good idea unless we're trying some quick hack. The same applies to Services. Even though kubectl expose did the work, we should use a documented approach through YAML files. In that spirit, we'll destroy the Service we created and start over.
kubectl delete svc go-demo-2-svc
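A quick listing confirms that the Service is gone; only the kubernetes Service should remain in the default namespace:

kubectl get svc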