K8s – Namespace

A Namespace enables us to divide the physical cluster of servers into multiple virtual clusters as shown below.

Virtual Clusters (Namespaces) sharing the same Physical Cluster of Nodes

When 22 players are running around on the same field, how do we watch them? The first thing we do is to group them using their colorful jerseys.

In a similar way, these virtual clusters group the project-specific objects spread across the cluster of servers using their namespace attribute values.

So, all objects in the cluster with the same namespace value fall into one virtual cluster. They are visible to each other and can access each other. But, objects with different namespace values stay independent of each other.

Thus, the namespace allows us to share the physical cluster with multiple teams, projects and environments by creating virtual clusters.

 

Default K8s Namespaces

$ kubectl get namespace
NAME                STATUS   AGE
default             Active   2m36s
kube-node-lease     Active   2m37s
kube-public         Active   2m37s
kube-system         Active   2m37s

The list above shows the Namespaces that a K8s cluster has by default.

  • default : The namespace K8s assigns to deployed objects when we deploy a namespaced object without specifying a namespace.
  • kube-system : Kubernetes uses this namespace for creating its own objects.
  • kube-public : It is the Namespace readable by all users. We can use it for keeping shared objects that should be available cluster-wide.
  • kube-node-lease : It holds the Lease objects associated with each node, which improve the performance of the node heartbeats as the cluster scales.
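
To get a feel for what lives inside these namespaces, we can peek into them. For example, the Lease objects behind the node heartbeats sit in kube-node-lease (the exact contents vary by installation):

# One Lease object per node backs the node heartbeats
kubectl get leases -n kube-node-lease

# The objects K8s runs for itself (DNS, proxy, control plane, ...)
kubectl get pods -n kube-system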

 

Creating and using a Namespace

Create a Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: demo-project-dev

The .yaml above shows a Namespace API object that we will use for creating a dev environment for our demo-project.
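
On recent kubectl versions, we can also let kubectl generate this definition for us instead of writing it by hand:

# Print the equivalent Namespace object without creating it
kubectl create namespace demo-project-dev --dry-run=client -o yaml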

We can deploy this .yaml using the apply command or, as an alternative, use the create namespace command as shown:

$ kubectl apply -f demo-project-dev.yaml

namespace/demo-project-dev created

# We can use this as an alternative to the above command
$ kubectl create namespace demo-project-dev

As we retrieve all our namespaces, we can now see our newly added demo-project-dev environment, ready for use.

$ kubectl get namespace
NAME                STATUS   AGE
default             Active   2m36s
demo-project-dev    Active   52s
kube-node-lease     Active   2m37s
kube-public         Active   2m37s
kube-system         Active   2m37s

 

Deploy an Application & Verify its Scope

K8s uses the namespace attribute to logically group objects into their namespaces.

Let us deploy the hello-app from the Google samples project into both the default and demo-project-dev namespaces.

When we deploy something without specifying the namespace, K8s deploys it to the default namespace. Hence, for deploying into the ‘demo-project-dev’ namespace, we have to specify the namespace explicitly as shown.

# Deploy into default namespace 
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0

# Deploy into demo-project-dev namespace (-n demo-project-dev)
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0 -n demo-project-dev
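
To see both copies side by side, we can list the deployments across every namespace at once:

# -A is shorthand for --all-namespaces
kubectl get deployments --all-namespaces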

As we retrieve the pods using the get pod command from both the namespaces, we can clearly observe the following:

  • We have two separate Pods, as they have their own names and IPs.
  • As each namespace shows its own pod instance, it confirms that an object belonging to a namespace is not visible to the other namespaces.
  • We can also see how K8s has assigned the Namespace attribute to each of these instances for grouping them into their namespaces.
 
$ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
hello-6649fc6cf7-jlv96   1/1     Running   0          47m   172.18.0.4   minikube   <none>           <none>

$ kubectl describe pod |head -2
Name:         hello-6649fc6cf7-jlv96
Namespace:    default    #This attribute helps in grouping objects in a namespace 
 
 
$ kubectl get pod -n demo-project-dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
hello-6649fc6cf7-tk7bs   1/1     Running   0          2m15s   172.18.0.5   minikube   <none>           <none>

$ kubectl describe pod -n demo-project-dev |head -2
Name:         hello-6649fc6cf7-tk7bs
Namespace:    demo-project-dev
 

As each namespace holds its own set of objects in a separate scope, it enables us to run multiple projects on the shared servers.
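
Note that this separation is a naming and grouping scope, not a network boundary by default. For instance, if we expose the hello deployment as a Service (hello-app listens on port 8080), a Pod in the default namespace could still reach it through its fully qualified DNS name; NetworkPolicy objects are the tool for restricting such cross-namespace traffic:

# Expose the deployment in demo-project-dev as a Service
kubectl expose deployment hello --port=8080 -n demo-project-dev

# From any namespace, the Service resolves via its fully qualified name:
#   hello.demo-project-dev.svc.cluster.local:8080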

 

Namespaced vs Non-Namespaced Objects

Not all API objects belong to a namespace, as some are shared across projects.

K8s categorizes the objects that always belong to a specific project as namespaced objects and assigns them a namespace value when they are deployed. Thus, their visibility is confined to the assigned namespace.

On the other hand, some objects are shared across multiple projects and, hence, they do not have a namespace value. As a result, we can access them from any namespace.

Below are a few examples showing objects of both types.

We can use the following commands to get the complete list of the namespaced and non-namespaced objects.

# To get all namespaced objects
kubectl api-resources --namespaced=true

# To get all non-namespaced objects
kubectl api-resources --namespaced=false

$ kubectl api-resources --namespaced=true
NAME                        SHORTNAMES   APIGROUP                    NAMESPACED   KIND
bindings                                                             true         Binding
configmaps                  cm                                       true         ConfigMap
endpoints                   ep                                       true         Endpoints
events                      ev                                       true         Event
limitranges                 limits                                   true         LimitRange
persistentvolumeclaims      pvc                                      true         PersistentVolumeClaim
pods                        po                                       true         Pod
podtemplates                                                         true         PodTemplate
replicationcontrollers      rc                                       true         ReplicationController
resourcequotas              quota                                    true         ResourceQuota
secrets                                                              true         Secret
serviceaccounts             sa                                       true         ServiceAccount
services                    svc                                      true         Service
controllerrevisions                      apps                        true         ControllerRevision
daemonsets                  ds           apps                        true         DaemonSet
deployments                 deploy       apps                        true         Deployment
replicasets                 rs           apps                        true         ReplicaSet
statefulsets                sts          apps                        true         StatefulSet
localsubjectaccessreviews                authorization.k8s.io        true         LocalSubjectAccessReview
horizontalpodautoscalers    hpa          autoscaling                 true         HorizontalPodAutoscaler
cronjobs                    cj           batch                       true         CronJob
jobs                                     batch                       true         Job
leases                                   coordination.k8s.io         true         Lease
endpointslices                           discovery.k8s.io            true         EndpointSlice
events                      ev           events.k8s.io               true         Event
ingresses                   ing          extensions                  true         Ingress
ingresses                   ing          networking.k8s.io           true         Ingress
networkpolicies             netpol       networking.k8s.io           true         NetworkPolicy
poddisruptionbudgets        pdb          policy                      true         PodDisruptionBudget
rolebindings                             rbac.authorization.k8s.io   true         RoleBinding
roles                                    rbac.authorization.k8s.io   true         Role
$ kubectl api-resources --namespaced=false
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
componentstatuses                 cs                                          false        ComponentStatus
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumes                 pv                                          false        PersistentVolume
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
tokenreviews                                   authentication.k8s.io          false        TokenReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment

One important point to notice here is the Namespace object itself. Even though it forms the boundary of a project, we do not categorize it as a namespaced object.

This is because, if we categorized it as a namespaced object, we could not view it from other namespaces. Thus, we could never list all of the namespaces from within any namespace.

 

Finalizers in Namespace

Finalizers are useful for performing specialized cleanup of resources when we delete a namespace.

We can use the following command to fetch our dev namespace:

kubectl get namespace demo-project-dev -o yaml

As we can see, K8s has added a default task under spec.finalizers. When we delete our namespace, this performs the regular cleanup of the Namespace.

If we need to do any special cleanup, we can add our own finalizers in our Namespace definition files, as sketched after the stored definition below.

The namespace as stored in the K8s cluster:

apiVersion: v1
kind: Namespace
metadata:
  name: demo-project-dev
  # some metadata removed for clarity
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
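
As a sketch of a custom cleanup, we could attach our own finalizer under metadata.finalizers; the name example.com/cleanup-db below is hypothetical, and a controller of our own would have to perform the cleanup and remove the entry before the deletion can complete:

apiVersion: v1
kind: Namespace
metadata:
  name: demo-project-dev
  finalizers:
  - example.com/cleanup-db   # hypothetical finalizer, removed by our own controller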

 

Using Context to Switch the Default Namespace

In kubectl, the current context decides the namespace it points to by default.

While working on kubectl, if we do not specify a namespace, K8s deploys our objects into the default namespace.

This is because, by default, kubectl points to a context that does not specify any namespace and, as a result, it uses the default namespace.

The following command shows this in a minikube installation as an example:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://160.74.10.56:8443
  name: minikube
contexts:
- context:           # This context does not point to any namespace and, hence, points to default namespace
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube

To make any desired namespace our default namespace, we can follow the two steps below.

Step 1 : Add a new context

Add a new context, dev, that points to our namespace, demo-project-dev.

# Create a context using set-context
$ kubectl config set-context dev --namespace=demo-project-dev --cluster=minikube --user=minikube
Context "dev" created.

# Verify the context using get-contexts
$ kubectl config get-contexts dev
CURRENT   NAME   CLUSTER    AUTHINFO   NAMESPACE
          dev    minikube   minikube   demo-project-dev

Step 2 : Switch to the new context

Switch to the ‘dev’ context using ‘use-context’ to make ‘demo-project-dev’ our default namespace.

# The command to switch to a new context
$ kubectl config use-context dev
Switched to context "dev".

# Command to verify the current context
$ kubectl config current-context
dev
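
With the dev context active, plain kubectl commands now target demo-project-dev; listing the pods without any -n flag should show the hello-6649fc6cf7-tk7bs pod we deployed there earlier:

# No -n flag needed anymore for demo-project-dev
kubectl get pod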

 

How to use Namespaces for organizing our projects?

We use namespaces for creating projects and environments. The following shows a typical way to organize our projects:

  • Use separate namespaces for creating separate projects & environments
  • Use separate namespaces for deploying the shared tools & applications
    • e.g., shared databases, auditing, monitoring, and reporting applications
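
As a sketch, assuming hypothetical project names, such a layout could be created with:

# One namespace per project and environment
kubectl create namespace demo-project-dev
kubectl create namespace demo-project-qa
kubectl create namespace demo-project-prod

# A dedicated namespace for the shared tools
kubectl create namespace shared-tools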

 

Conclusion

Namespaces allow us to share our physical clusters with multiple teams, projects and environments.

Shared servers in K8s come with many advantages. First of all, they minimize our provisioning hassles. Being shared, they enhance resource utilization, reduce administration cost and provide better fail-over handling.

Shared Server Space Demands Proper Resource Allocation.

With projects of different priorities and criticality, it is important to have proper resource allocation at the project level. Without such plans, our low-priority environments may consume all our resources and starve the critical production clusters.

Apart from cluster-level allocation, we may need allocations within the cluster as well. We need to manage CPU, memory and storage usage and leaks, inside and outside our clusters.

We can use a mix of options like admission control mechanisms, ResourceQuota, LimitRange etc. to put these checks at appropriate levels, as sketched below.
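
As a minimal sketch, a ResourceQuota like the one below (the values are illustrative) caps what the demo-project-dev namespace can consume in total:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: demo-project-dev
spec:
  hard:
    pods: "10"              # at most 10 Pods in this namespace
    requests.cpu: "2"       # total CPU requested across all Pods
    requests.memory: 2Gi    # total memory requested across all Pods
    limits.cpu: "4"
    limits.memory: 4Gi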

Shared Server Space Demands Proper Access Control.

The other important part is to have a proper access control policy in place. With multiple teams sharing the workspace, there can be many intentional or unintentional activities across projects. We have to make sure we manage this as well.
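
A namespaced Role and RoleBinding are the usual building blocks for this. The sketch below (the user name dev-user is hypothetical) grants read-only access to the Pods in demo-project-dev:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo-project-dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo-project-dev
subjects:
- kind: User
  name: dev-user            # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io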