Inside Kubernetes [Part 1]: Authentication and Authorization in Kube API Server

Understanding the K8s components

As more DevOps Engineers move towards container orchestration, Kubernetes (K8s) becomes an essential tool. It may seem complex, but don't worry—we're here to help you understand the basics of its architecture.

Kubernetes works with clusters, which have two main components: the Control Plane (or Master Node) and the Data Plane (the Worker Nodes). The Data Plane is where your actual applications run, inside what are called Pods. Let's break it down further to make it even easier!

Control Plane Components

The Control Plane components make the global decisions about the cluster and work continuously to keep the application's current state matching the desired state.

Note: For better reliability, it's recommended to have the Master Node in High Availability Mode, with an odd number of nodes (such as 3, 5, or 7), depending on the complexity of the application.
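
The odd-number recommendation comes from etcd's quorum arithmetic: a cluster of n members needs floor(n/2) + 1 members alive to commit writes, so an even-sized cluster tolerates no more failures than the odd size just below it. A quick sketch (plain Python, no Kubernetes required):

```python
def quorum(n: int) -> int:
    """Minimum members that must agree for etcd to commit a write."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many members can fail while quorum is still reachable."""
    return n - quorum(n)

for n in (3, 4, 5, 7):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
# 3 and 4 members both tolerate exactly 1 failure,
# so a 4th node adds cost without adding fault tolerance.
```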

API Server (kube-apiserver):

The API Server is your gateway to the Kubernetes cluster. Whenever you interact with K8s, it's through the API Server: tools like kubectl translate your high-level commands into HTTP REST API calls, which the API Server processes against the cluster's exposed endpoints.

It secures communication with TLS encryption, ensuring safe access and blocking unauthorized interference.

Internally, the API Server stores cluster state in etcd and talks to it over gRPC, keeping reads and writes fast and efficient; the other cluster components communicate with the API Server over HTTPS REST calls.

The API Server also handles authentication and authorization, controlling who can enter the cluster and what they can do once inside.


Authentication

User access to the API server should follow the Principle of Least Privilege: each authenticated identity should receive only the permissions it actually needs.

Types of Authentication in API Server

  1. Client Certificates Authentication

  2. Service Account Authentication

  3. OpenID Connect Authentication

1. Client Certificates Authentication

  • Client Certificate Authentication is typically used by admins to authenticate users or developers. Configuring it for each new user can be tedious, but it works well for smaller teams, where the manual certificate rotation it requires stays manageable.

  • The process is error-prone, involving manual setup and certificate creation for each user, which leads to scalability issues as the team grows.

  • This method relies on TLS certificates for authentication.

Here’s how the workflow for Client Certificate Authentication looks:
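
Since the workflow is easiest to follow as commands, here is a hedged sketch of the flow, assuming a hypothetical user jane in group dev and a cluster exposing the certificates.k8s.io CSR API:

```shell
# 1. The user generates a private key and a CSR
#    (CN becomes the username, O the group)
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev"

# 2. An admin submits the CSR to Kubernetes for signing
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: $(base64 < jane.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

# 3. The admin approves; the signed certificate lands in .status.certificate
kubectl certificate approve jane
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 -d > jane.crt

# 4. The user configures kubectl with the key pair
kubectl config set-credentials jane --client-key=jane.key --client-certificate=jane.crt
```

Every new user repeats this whole sequence, which is exactly the manual overhead described above.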

2. Service Account Authentication

  • They’re native to Kubernetes, so no need for external providers —Service Accounts handle it all.

  • Most commonly used for Pods and other workloads, Service Accounts automatically generate JWT tokens, making authentication seamless.

  • They work hand-in-hand with Roles and RoleBindings, managing user access and permissions within your cluster. Kubernetes takes care of the heavy lifting, keeping things simple and secure.

  • Service Accounts are perfect for granting temporary access to Kubernetes without creating a full user identity. They’re ideal for self-hosted clusters where external IAM solutions (like AWS IAM or OIDC) aren't in play.

  • However, there's a catch: if the token is leaked, anyone holding it can impersonate that Service Account.

  • Another challenge: legacy Secret-based Service Account tokens never expire, so you must rotate them manually to maintain security; newer Kubernetes releases mitigate this by issuing bound, time-limited tokens through the TokenRequest API.
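
A Service Account token is just a JWT: three base64url-encoded segments (header, payload, signature). The sketch below fabricates a toy token with illustrative claims (the values are not from a real cluster) and decodes its payload, which is how you can inspect what a leaked token would reveal:

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWTs do."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_segment(segment: str) -> dict:
    """Decode one JWT segment back into a dict (re-adding padding)."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Toy token mimicking the claims Kubernetes puts in a legacy SA token.
header = {"alg": "RS256", "kid": "example-key-id"}
payload = {
    "iss": "kubernetes/serviceaccount",
    "sub": "system:serviceaccount:default:user-service-account",
    "kubernetes.io/serviceaccount/namespace": "default",
}
token = ".".join([b64url_encode(header), b64url_encode(payload), "sig"])

claims = decode_segment(token.split(".")[1])
print(claims["sub"])  # system:serviceaccount:default:user-service-account
```

Note that a legacy token payload carries no exp claim, which is precisely why it never expires; bound tokens add exp and aud.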

3. OpenID Connect Authentication:

  • This method is popular in large enterprises using third-party identity providers like Google, Azure AD, and Okta, offering the scalability needed for centralized access control through existing identity systems.

    Scenario

    An organization adopting GKE wants developers to log in with their Google Accounts instead of maintaining locally configured K8s users.

The configuration steps are as follows:

1. Configure the Google Cloud IAM with K8s RBAC

2. Developers authenticate using Google OAuth2 Tokens

3. Assign IAM roles for cluster access

To complete the setup, the API Server needs to be configured to accept Google OAuth tokens and integrate with the OIDC system.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
    - name: kube-apiserver
      args:
        - --oidc-issuer-url=https://accounts.google.com
        - --oidc-client-id=my-client-id
        - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.crt
        - --oidc-username-claim=email
        - --authorization-mode=RBAC

The RBAC configuration for developer access will look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-user-binding
subjects:
  - kind: User
    name: "[email protected]"  # Google OAuth User
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view  # Read-only access
  apiGroup: rbac.authorization.k8s.io

Then developers log in via kubectl, authenticating with Google first and passing the resulting token to Kubernetes:

gcloud auth login
kubectl config set-credentials developer --token=$(gcloud auth print-access-token)
kubectl config set-context developer-context --cluster=gke-cluster --user=developer
kubectl config use-context developer-context

  • Watch out for two issues: RBAC bindings that over-privilege users, and a hard dependency on the external identity provider (if the provider is down, nobody can log in).


Authorization

Authorization determines if a particular user/service is allowed to do something on a particular resource.

Types of Authorization

  1. RBAC [Role-based Access Control]

  2. ABAC [Attribute-based Access Control]

1. RBAC [Role-Based Access Control]

RBAC is the most commonly used and recommended authorization method in Kubernetes. It uses roles and role bindings to define what actions a user or service can perform on specific resources.

# Role.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

RoleBinding: Bind the pod-reader role to a user or ServiceAccount:

#roleBinding.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: user-service-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

The above configuration binds the pod-reader role to the user-service-account ServiceAccount.
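
To confirm the binding behaves as intended, kubectl's built-in auth can-i subcommand can impersonate the ServiceAccount; a sketch assuming the manifests above are applied in the default namespace (impersonation itself requires sufficient rights):

```shell
# Expected "yes": pod-reader grants get/list on pods
kubectl auth can-i get pods --namespace default \
  --as system:serviceaccount:default:user-service-account

# Expected "no": pod-reader does not grant delete
kubectl auth can-i delete pods --namespace default \
  --as system:serviceaccount:default:user-service-account
```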

2. ABAC (Attribute-Based Access Control)

ABAC grants access based on attributes such as user identity, namespace, resource type, and more. In ABAC, access decisions are made based on a policy file that contains rules about which attributes are allowed to access which resources.

The Kubernetes admin creates the policy file to specify who can perform what actions on resources

Creating the policy file can itself pose a challenge, as each rule must account for the user's identity, namespace, resource name, and operation.

Example of a policy entry (in the actual file, each policy is a single-line JSON object; shown pretty-printed here for readability):

{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "user1",
    "namespace": "default",
    "resource": "pods",
    "readonly": true
  }
}

Here "readonly": true limits user1 to read-only verbs (get, list, watch) on pods in the default namespace. Note that ABAC changes require restarting the API server to take effect, another reason RBAC is generally preferred.


Conclusion

The Kubernetes API Server ensures secure cluster access via authentication and authorization. Authentication verifies identity using methods like client certificates, OIDC, or Service Accounts, while authorization checks whether that identity has the necessary permissions, typically through RBAC or ABAC. This combination lets Kubernetes control access and maintain cluster security efficiently.


EzyInfra.dev is a DevOps and infrastructure consulting company that helps clients set up cloud infrastructure (AWS, GCP), optimize cloud costs, and manage Kubernetes-based infrastructure. If you have any requirements or want a free consultation for your infrastructure or architecture, feel free to schedule a call here.

K8s Got You Stuck? We’ve got you covered!

We design, deploy, and optimize K8s so you don’t have to. Let’s talk!