Pods are instantiated across various nodes by the scheduler, and each Pod gets its own virtual IP. Moreover, these instances need to be scaled up or down as per our load requirements and also need to be replaced in case of failures. So, the set of backing Pod IPs for an application may keep changing.
The Service API object in K8s enables us to:
- keep track of the dynamic set of backing Pods for an application
- provide service URLs for load-balanced access to the application
- expose the application to internal and/or external clients
Besides this, it also provides the ability to customize the default mechanism to address custom needs.
K8s supports 4 basic types of Service:
- ClusterIP: only for internal access
- NodePort: builds on top of the ClusterIP service and provides exposure to external clients
- LoadBalancer: a load balancer in front of a NodePort service
- ExternalName: a service for accessing external endpoints
As all the other service types are variations of the ClusterIP service, we will explore it in detail first and then see how the other types differ from it.
1. ClusterIP: Used only for internal access.
The Service API object is shown on the left side of the diagram; when it gets deployed, the following things happen internally:
- K8s assigns a cluster IP to the service.
- The service creates an Endpoints object based on spec.selector and keeps track of the backing Pod IPs.
- kube-proxy creates routing rules from the cluster IP (at the service port) to the endpoint IPs (at the target ports) for load-balancing purposes.
- Service Discovery: with the help of the DNS service, if available, a DNS entry for network access is also created, as shown. This frees clients from worrying about the cluster IP that gets assigned dynamically to the service.
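A minimal ClusterIP service manifest covering the steps above might look like this (the name, labels, and ports are illustrative, not from the original article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service     # hypothetical service name
spec:
  type: ClusterIP          # the default; may be omitted
  selector:
    app: my-app            # must match the labels on the backing Pods
  ports:
    - port: 80             # the port exposed on the cluster IP
      targetPort: 8080     # the container port on the backing Pods
```

Inside the cluster, clients can then reach the application at `my-app-service.<namespace>.svc.cluster.local:80` without knowing the assigned cluster IP.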
Different options for customization:
- Custom Endpoints: say some clustered MySQL instances are running outside the K8s cluster, and you want to access this DB cluster from applications running inside K8s.
- You have the option to create a ClusterIP service with no spec.selector.
- Create an Endpoints object with the same name as the service.
- Specify the targeted DB IPs in the Endpoints object.
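A sketch of this selector-less service plus a manually managed Endpoints object (the name and IPs here are made-up examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-external   # hypothetical name
spec:
  ports:
    - port: 3306         # no spec.selector, so K8s will not auto-manage Endpoints
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-external   # must match the service name exactly
subsets:
  - addresses:
      - ip: 10.1.1.10    # example IPs of the external MySQL instances
      - ip: 10.1.1.11
    ports:
      - port: 3306
```

Applications inside the cluster can now talk to `mysql-external:3306`, and kube-proxy will route the traffic to the external DB IPs listed in the Endpoints object.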
- ClusterIP: None – this creates a Headless Service.
- No cluster IP is assigned and, hence, there are no routing rules in kube-proxy.
- The service adds separate DNS entries for each of the backing Pods, instead of the regular single DNS entry for the service.
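A headless service differs from the regular manifest only in the clusterIP field (again, the name and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # this is what makes the service headless
  selector:
    app: my-app
  ports:
    - port: 80
```

A DNS lookup of `my-headless-service` now returns the individual Pod IPs rather than a single virtual IP.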
- Specify a ClusterIP: you can hardcode a specific IP from the allowed range of cluster IPs. The service will use this as its cluster IP as long as it is not already taken by another service.
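Hardcoding the cluster IP is a one-line addition to the spec; the IP below is a made-up example and must lie within your cluster's configured service CIDR:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-fixed-ip-service   # hypothetical name
spec:
  clusterIP: 10.96.100.50     # example IP; must be inside the service IP range
  selector:
    app: my-app
  ports:
    - port: 80
```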
kube-proxy manages the load balancing (at L4) as part of the ClusterIP service. In case you need more advanced load balancing, you have the option of going with a Headless Service and implementing your own load balancing using the endpoint addresses available in the associated Endpoints object.
2. NodePort: The easiest option to provide external access.
- K8s creates the regular ClusterIP service as discussed earlier.
- K8s assigns a NodePort and creates proxy routing rules to route requests from that port on the K8s nodes to the ClusterIP service.
- This enables traffic arriving at the assigned NodePort on any K8s node to be routed to one of the endpoint Pods.
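A NodePort manifest is the ClusterIP manifest with a different type; the names and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80            # cluster-internal service port
      targetPort: 8080    # container port on the Pods
      nodePort: 30080     # optional; must be in the node-port range (30000-32767 by default)
```

External clients can then reach the application at `<any-node-ip>:30080`.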
Although it is quite an easy approach to expose your service to external clients, it comes with some serious drawbacks:
- You need to open firewall access on the NodePort for a set of your K8s nodes, which is a serious security concern.
- The clients need to be informed about any changes in the node IPs as well as the assigned NodePort, which leads to maintenance issues.
A LoadBalancer service helps solve these two issues by adding a load balancer on top. Let us see how.
3. LoadBalancer: Allows you to add a load balancer on top of a NodePort service.
- The load balancer connects to the NodePort service internally and provides a single point of access to external clients.
- From a security point of view, you no longer have to expose your Kubernetes nodes directly to your clients.
- Keeping track of the available nodes and any changes to the service's NodePort is now the responsibility of the load balancer.
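In manifest terms, this is again just a change of type (names and ports are illustrative); on a supported cloud provider, K8s asks the provider to provision the external load balancer for you:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb        # hypothetical name
spec:
  type: LoadBalancer     # the cloud provider provisions an external LB
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Once provisioned, the externally reachable address appears under `status.loadBalancer.ingress` of the service.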
But, the load balancer solution has its own drawbacks:
- Adding an external load balancer adds to your cost.
- Moreover, if you have to expose multiple services, you cannot expose them all on the same port (e.g. 80) of a single load balancer. So you have to go for multiple load balancers, which multiplies your cost.
Luckily, to overcome this cost issue of using multiple load balancers, K8s provides a nice solution called Ingress. We will look into it in a separate article.
4. ExternalName: A K8s service for external endpoints.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db-service
  namespace: test
spec:
  type: ExternalName
  externalName: my.test.database.com
```
- No cluster IP is assigned, no Endpoints object is used, and no proxying is set up by K8s.
- The DNS entry for the service points to the externalName.
- The redirection happens at the DNS level.
When a K8s application looks up my-db-service, the cluster DNS answers with a CNAME record for my.test.database.com.
This is useful for pointing to a set of external applications or a database cluster not managed by K8s.