WTH is the Kube Controller Manager [Kube CM]?
It is the core component of Kubernetes that keeps the cluster's Current State in line with its Desired State (the state itself is stored in the etcd cluster); the controllers it runs continuously monitor resources to match the desired state.
The Controller Manager and the Scheduler are responsible for detecting drift between the Current and the Desired State.
The Controller Manager is what gives Kubernetes its self-healing behaviour.
Desired vs Current State
The Desired State is the state you define for your application in Kubernetes, i.e. the configuration that describes how your workloads should behave.
It is generally defined declaratively, using either a YAML or a JSON file.
Let us consider an example of the deployment file [deployment.yaml]
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3 # Desired count of replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
So from the above YAML, we can deduce that the Desired State defined in deployment.yaml is 3 replicas of the my-app Pod.
The Current State is the actual state of the resources running in the K8s cluster right now.
It may include the number of running pods, their status, and node health.
K8s continuously monitors the cluster to detect drift between the Current State and the Desired State, and if drift is found, the Controller Manager and the Kube Scheduler take corrective action.
Steps to check Current vs Desired State
View the Desired State
kubectl get deployment <deployment name> -o yaml
View the Current State of the Pods
kubectl get pods -l key=value
Inspect the Deployment's events and conditions to see why K8s is taking action to reconcile the state.
kubectl describe deployment <deployment-name>
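To make this concrete with the my-app Deployment defined above (the names here match that example; your Deployment name and label selector will differ):
kubectl get deployment my-app -o yaml   # the Desired State as stored in the cluster
kubectl get pods -l app=my-app          # the Current State of the matching Pods
kubectl diff -f deployment.yaml         # what would change if the manifest were re-applied
kubectl describe deployment my-app      # events and conditions explaining reconciliation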
Some of the widely known controllers for K8s-native resources live under the Kube Controller Manager:
Node Controller
Replication Controller
Deployment Controller
Job and CronJob Controller
Service Account and Token Controller
Endpoint Controller
How does K8s maintain Reconciliation?
Reconciliation sounds daunting, but it is simply the continuous, automatic adjustment of resources in a cluster, much like the automatic climate control in a car.
K8s follows a control loop mechanism to bring the Current State in line with the Desired State.
Controllers follow the steps below:
Watch API Server – Controllers keep watching the Current State of resources.
Detect Drift – Kubernetes triggers reconciliation if the Current State deviates from the Desired State.
Take Action – Controllers create, delete, or update resources to restore the Desired State.
Consider a scenario where a Pod crashes for some reason. The watch on the Kube API Server notices this, which means the Current State has drifted from the Desired State; the ReplicaSet (Replication) Controller then immediately creates a new Pod to restore the desired count.
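You can watch this loop in action with the my-app Deployment from earlier (the pod name below is a placeholder; use one returned by the first command):
kubectl get pods -l app=my-app          # 3 Running pods, matching the desired replicas
kubectl delete pod <my-app-pod-name>    # simulate a crash by deleting one pod
kubectl get pods -l app=my-app -w       # watch the ReplicaSet immediately create a replacement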
Can we also manage something beyond K8s Native Vanilla Resources?
YES, indeed we can, and that is exactly where Operators shine.
Operators
Operators are custom controllers built using the same control loop principle. They watch for custom resources you define and take action to manage them.
Consider the example of managing sensitive information in Kubernetes. There are Kubernetes Secrets, but they are only base64-encoded (which is not encryption, since it is trivially decodable), so we need to store secrets in a more secure way. We can do that with the help of the external-secrets-operator.
It watches for a custom resource defining your secrets and securely retrieves them at runtime (or just in time) from external sources like Vault or any cloud provider's Secrets Manager (AWS/Google/Azure). It then injects these secrets into your pods using the control loop, ensuring your applications have access to sensitive information without it being stored directly in the container image.
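As a rough sketch, the custom resource the operator watches looks something like the following. The apiVersion and the SecretStore named aws-store are assumptions for illustration; check the external-secrets docs for your provider and version.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-db-credentials
spec:
  refreshInterval: 1h                 # how often to re-sync from the external store
  secretStoreRef:
    name: aws-store                   # assumed SecretStore pointing at AWS Secrets Manager
    kind: SecretStore
  target:
    name: my-app-db-credentials       # the Kubernetes Secret the operator creates and keeps in sync
  data:
    - secretKey: password             # key inside the generated Secret
      remoteRef:
        key: prod/my-app/db-password  # path of the secret in the external store
The operator's control loop keeps the generated Secret in sync with the external store, and your pods consume it like any ordinary Kubernetes Secret.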
WTH is the Cloud Controller Manager [Kube CCM]?
It is the native K8s component that allows the cluster to interact with cloud-provider-specific services.
The Cloud Controller Manager provides a loosely coupled integration between K8s and cloud providers.
The goal is to ease the development of plugins for widely used cloud providers like AWS, Azure, and GCP, i.e. reducing the management overhead inside core K8s while providing scalability and modularity.
Cloud Controller Manager Workflow
Steps that make the workflow easier to remember:
A Kubernetes object (Node, Service, or PV) is instantiated/created.
The Kubernetes API Server then persists this request in etcd.
The Cloud Controller Manager (CCM) then recognizes the request and communicates with the cloud provider's API.
The cloud provider returns with the required information (e.g., a new Load Balancer, a new storage volume).
CCM notifies the Kubernetes API Server of the cloud provider's response.
The result: Kubernetes resources mesh smoothly with the underlying cloud infrastructure.
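For example, creating a Service of type LoadBalancer is what typically prompts the CCM's service controller to provision a cloud load balancer. A minimal sketch, reusing the my-app labels from earlier (the Service name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer        # the CCM asks the cloud provider (e.g. an AWS ELB) for a load balancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80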
Cloud Controller Manager Scenario
Let's say you have a 3-node Kubernetes cluster running on AWS EC2 instances.
One of the worker nodes (Node-2) crashes due to an EC2 hardware failure.
The Node Controller in the Cloud Controller Manager detects that Node-2 has stopped responding.
Kubernetes marks Node-2 as NotReady and starts evicting pods. The AWS Auto Scaling Group launches a new EC2 instance to replace Node-2.
The Node Controller detects the new instance, updates metadata, and joins it to the cluster.
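During such a failure you would typically see something like this (node names are illustrative):
kubectl get nodes                     # Node-2 reports STATUS NotReady after the failure
kubectl describe node <node-2-name>   # conditions and events show the node being marked unreachable and pods evicted
kubectl get nodes -w                  # watch the replacement instance register and go Ready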
Kube CM vs Kube CCM
| Features | Controller Manager | Cloud Controller Manager |
|---|---|---|
| Purpose | Manages the Desired State of K8s-native objects | Manages cloud-specific resources like Nodes and Storage |
| Example Controllers | Node, ReplicaSet, Deployment, Namespace, HPA | Node, Route, Service, Persistent Volume |
Conclusion
The Controller Manager ensures Kubernetes maintains the desired state across nodes, storage, and networking, while the Cloud Controller Manager (CCM) integrates Kubernetes with cloud providers for automated node management, networking, and storage provisioning. Together, they enable a self-healing, scalable, and cloud-native Kubernetes environment, ensuring high availability and seamless operations.
Survived this deep dive? Stay ahead—subscribe (at the bottom) for more DevOps wisdom.
EzyInfra.dev is a DevOps and Infrastructure consulting company helping clients in Setting up the Cloud Infrastructure (AWS, GCP), Cloud cost optimization, and managing Kubernetes-based infrastructure. If you have any requirements or want a free consultation for your Infrastructure or architecture, feel free to schedule a call here.