Why Securing Kubernetes Is Challenging but Critical #


| Focus Area | Risk or Concern | Recommended Action |
| --- | --- | --- |
| Cluster Infrastructure | Core components like the API Server, etcd, and Nodes are central control points; if compromised, the whole cluster is at risk | Enforce strong authentication and authorization, encrypt communication and data at rest, and tightly control access to Nodes |
| Application Security | Business apps may have coding flaws or exposed APIs that attackers can exploit | Regularly audit apps for security gaps; use runtime protection tools like Falco to detect suspicious behavior |
| Container & Host Runtime | Exploiting container runtimes or the host OS can lead to container escape (a process inside a container gaining access to resources outside the container that should not be available to it), compromising the node or adjacent apps | Use minimal, hardened base images; keep the host OS patched; apply host-level protections like AppArmor or SELinux |
| Pod & Workload Isolation | Pods may have excessive privileges or misconfigured access | Use Pod Security Admission, assign roles with RBAC, and block privilege escalation; avoid risky features like hostPath |
| Network Access Control | By default, all Pods can talk to each other | Define Network Policies to explicitly allow only necessary connections; use mTLS for encrypted, verified service-to-service communication |
| Secrets & Configuration | Secrets (e.g., passwords, API keys) can be exposed via logs, misconfiguration, or unauthorized access | Store secrets securely using Kubernetes Secrets or tools like Vault; encrypt everything at rest and restrict access |
| Image Supply Chain | Vulnerabilities can be introduced via third-party or outdated container images | Scan images before deployment using tools like Trivy or Clair; sign and verify images before they run in production |
| Visibility & Threat Detection | Without monitoring, breaches or misconfigurations may go unnoticed | Enable audit logging; monitor workloads with runtime tools; use centralized dashboards and alerting for early response |
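As one concrete instance of "encrypt data at rest", the API server can encrypt Secrets stored in etcd via an EncryptionConfiguration file passed with the `--encryption-provider-config` flag. A minimal sketch; the file path and key material are placeholders:

```yaml
# Sketch: encrypt Secret objects at rest in etcd.
# Supplied to the API server via --encryption-provider-config.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets              # Encrypt Secret objects in etcd
    providers:
      - aescbc:              # AES-CBC encryption provider
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>  # placeholder
      - identity: {}         # Fallback: still read legacy unencrypted data
```

Provider order matters: new writes use the first provider, while later providers let the API server read data written before encryption was enabled.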

What is the need for Kubernetes Service Accounts? #


  • Provide Identity for Pods (Authentication): Service accounts assign a secure identity to applications running inside Pods so they can interact with the Kubernetes API and external resources
  • Enable Secure API Access from Pods: Kubernetes automatically injects a token into each Pod, allowing it to make authenticated API calls using its service account
  • Limit Access to a Specific Namespace: Each service account belongs to one namespace, helping isolate permissions and control access locally
  • Control Permissions with RBAC: Use Role-Based Access Control to define what actions a service account can perform — such as reading secrets or listing pods
  • Avoid Using Default Service Account: For security, define custom service accounts with only the permissions your application truly needs
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: dev
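A Pod opts into this identity via `serviceAccountName`. A minimal sketch, with a placeholder image name; `automountServiceAccountToken: false` is worth setting when the app never calls the Kubernetes API:

```yaml
# Sketch: a Pod in the 'dev' namespace running under the
# custom service account defined above.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: dev
spec:
  serviceAccountName: my-service-account
  # Skip mounting the API token if the app never calls the API
  automountServiceAccountToken: false
  containers:
    - name: app
      image: my-app:latest   # placeholder image
```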

Practical Example: RBAC with Service Account #


  • ServiceAccount Provides Pod Identity: A ServiceAccount represents an identity for Pods to authenticate with the Kubernetes API
  • Role Defines Allowed Actions: A Role lists what actions (like get, list, create) are allowed on which Kubernetes resources within a specific namespace
  • RoleBinding Connects Identity to Permissions: A RoleBinding links a Role to a ServiceAccount, granting that identity the defined permissions
  • Used for Least-Privilege Access: This trio enables giving Pods only the exact access they need—no more, no less
  • Supports Multiple Bindings: One ServiceAccount can be bound to multiple Roles, and a Role can be bound to multiple ServiceAccounts
# WHAT: ServiceAccount used by the Deployment
# WHY:  To provide API identity for the Pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: my-namespace

---
# WHAT: Role that allows listing Pods
# WHY:  To grant limited API read access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

---
# WHAT: Bind Role to ServiceAccount
# WHY:  Allow pod-reader-sa to list Pods
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-pod-reader
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: pod-reader-sa
  namespace: my-namespace
roleRef:
  # roleRef has no namespace field; the Role is resolved in the
  # RoleBinding's own namespace
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

---
# WHAT: Deployment with 1 replica
# WHY:  Periodically lists Pods in namespace
# HOW:  Uses curl and mounted token
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-reader-deploy
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-reader
  template:
    metadata:
      labels:
        app: pod-reader
    spec:
      # Tells Kubernetes to run this Pod
      # using the specified ServiceAccount
      serviceAccountName: pod-reader-sa
      containers:
      - name: curl
        image: curlimages/curl:latest
        command:
          - /bin/sh
          - -c
          - |
            while true; do
              # Token is automatically mounted at this path by Kubernetes
              TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
              curl -s --header "Authorization: Bearer $TOKEN" \
              --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
              https://kubernetes.default.svc/api/v1/namespaces/my-namespace/pods;
              echo "-----";
              sleep 10;
            done
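To illustrate the "multiple bindings" point above: a single RoleBinding may list several subjects, so the same Role can be reused for more than one identity. A sketch; `ci-sa` is a hypothetical second service account:

```yaml
# Sketch: grant the existing 'pod-reader' Role to two
# ServiceAccounts at once via one RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-pod-reader-multi
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: pod-reader-sa
    namespace: my-namespace
  - kind: ServiceAccount
    name: ci-sa              # hypothetical second identity
    namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```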

What is the need for ClusterRole and ClusterRoleBinding? #


  • Access Across All Namespaces: Use ClusterRole when a Pod or user needs to list, get, or modify resources (like Pods or Nodes) across multiple namespaces or the whole cluster
  • Grant Permissions to Non-Namespaced Resources: Cluster-wide resources like nodes, persistentvolumes, clusterroles, and namespaces can only be managed using a ClusterRole
  • Avoid Over-Provisioning Roles in Each Namespace: Instead of duplicating the same Role in every namespace, a single ClusterRole simplifies management and ensures consistency
  • Secure But Flexible Access Control: ClusterRole with ClusterRoleBinding can grant access to specific users, groups, or service accounts across the cluster without modifying every namespace
  • Enable Admin or Read-Only Cluster-Wide Views: Required for dashboards and monitoring tools (e.g., Prometheus, Lens) that need visibility across namespaces
# WHAT: ServiceAccount for pod to use
# WHY:  Provides identity to access the API
# WHERE: Namespace 'my-namespace'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-pod-reader-sa
  namespace: my-namespace

---
# WHAT: Cluster-wide role to read Pods
# WHY:  Allows listing all Pods across namespaces
# WHEN: Used by tools needing full visibility
# WHERE: Not namespace-specific (cluster-level)
apiVersion: rbac.authorization.k8s.io/v1

# ClusterRole is required for:
# - Accessing non-namespaced resources (e.g., nodes)
# - Access across all namespaces
#
# Use Role if you only need access
# within a single namespace
kind: ClusterRole
metadata:
  name: cluster-pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

---
# WHAT: Binds ClusterRole to ServiceAccount
# WHY:  Grants service account access to use the role
# WHERE: Cluster-wide; the subject lives in 'my-namespace'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-cluster-pod-reader
subjects:
  - kind: ServiceAccount
    name: cluster-pod-reader-sa
    namespace: my-namespace
roleRef:
  kind: ClusterRole
  name: cluster-pod-reader
  apiGroup: rbac.authorization.k8s.io

---
# WHAT: Deployment to list Pods using API
# WHY:  Demo to show access via the ServiceAccount
# HOW:  Uses curl to call Kubernetes API
# WHERE: Runs in 'my-namespace' namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: read-pods-deploy
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-reader
  template:
    metadata:
      labels:
        app: pod-reader
    spec:
      serviceAccountName: cluster-pod-reader-sa
      containers:
        - name: reader
          image: curlimages/curl:latest
          command:
            - /bin/sh
            - -c
            - |
              while true; do
                TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
                curl -s --header "Authorization: Bearer $TOKEN" \
                  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
                  https://kubernetes.default.svc/api/v1/pods
                echo "---"
                sleep 10
              done
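For the "read-only cluster-wide views" case, you often do not need a custom ClusterRole at all: Kubernetes ships a built-in `view` ClusterRole that can be bound directly. A sketch; the `monitoring-sa` account and `monitoring` namespace are hypothetical:

```yaml
# Sketch: reuse the built-in 'view' ClusterRole for a
# monitoring tool instead of defining a custom one.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-view
subjects:
  - kind: ServiceAccount
    name: monitoring-sa      # hypothetical monitoring identity
    namespace: monitoring    # hypothetical namespace
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```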

What is the need for a Network Policy? #


  • Enforce Zero Trust Networking: By default, Kubernetes allows all Pod-to-Pod communication. Network policies help enforce least privilege—only explicitly allowed traffic is permitted
  • Restrict Unwanted Communication Between Services: In a microservices setup, not all services should talk to each other. For example, a frontend service may access the backend, but it shouldn't talk directly to the database
  • Protect Sensitive Data in Transit: Limit traffic to only expected sources and destinations—e.g., only allow traffic to the database from backend services, not from all Pods
  • Control Egress and Ingress Traffic: Ensure that only specific Pods can access external services (egress) or be reached by them (ingress)—important for compliance and audit
  • Limit Blast Radius of Compromised Pods: If one Pod is compromised, network policies can prevent it from reaching other sensitive services like payment or admin microservices
  • Create Logical Security Boundaries: Apply isolation by namespace, label, or app role (e.g., only role: backend can access role: db)

Give a Practical Example of Network Policy #


# WHAT: Blocks all incoming traffic unless explicitly allowed
# WHEN: Use as a baseline before applying any allows
# WHY:  Ensures "default deny" security posture
# HOW:  Apply once for namespace-wide deny

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-microservices
spec:
  # Matches all Pods in the namespace
  podSelector: {}
  policyTypes:
    # This policy is about incoming traffic.
    - Ingress
    # No ingress section defined =>
    # no Pods are allowed to receive any traffic


---
# WHAT: Allow only 'frontend' to talk to 'backend'
# WHY:  Enforce service-level isolation
# WHEN: Frontend calls APIs on backend
# WHERE: All in 'my-microservices' namespace
# HOW:  Match Pod labels using 'role'

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-from-frontend
  namespace: my-microservices
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend

---
# WHAT: Allow only 'backend' to talk to 'database'
# WHY:  Protect DB from direct external access
# WHEN: Backend persists data to database
# WHERE: All in 'my-microservices' namespace
# HOW:  Match Pod labels using 'role'

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-database-from-backend
  namespace: my-microservices
spec:
  podSelector:
    matchLabels:
      role: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: backend
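The policies above control ingress only. Egress can be restricted the same way, as mentioned in the previous section. A sketch that limits where `backend` Pods may connect; the DNS ports assume a standard cluster DNS setup:

```yaml
# Sketch: 'backend' Pods may only reach the database and DNS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
  namespace: my-microservices
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Egress
  egress:
    # Allow connections to database Pods only
    - to:
        - podSelector:
            matchLabels:
              role: database
    # Allow DNS lookups so service names still resolve
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Without the DNS rule, a default-deny egress posture silently breaks service discovery, which is a common first stumbling block when adopting egress policies.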

When do you use Pod Security Admission? #


  • Pod Security Admission: Enforces Kubernetes' built-in security standards (restricted, baseline, privileged) at the namespace level to control which Pods can be admitted based on their security context
    • Privileged: No restrictions (for legacy workloads)
    • Baseline: Moderate restrictions; avoids known privilege escalations
    • Restricted: Strongest controls; disallows host access, privilege escalation, most volume types
  • Fail Early with Clear Errors: Admission controller rejects non-compliant pods at creation time, giving immediate feedback instead of allowing unsafe pods to run
  • Enforce Secure Defaults for All Teams: Helps platform teams ensure every developer follows security best practices (e.g., disallow hostPath usage)
  • Apply Policies Based on Namespace: Apply different levels of security (restricted, baseline, privileged) to different environments (e.g., prod vs dev)
  • Prevent Dangerous Pod Configurations: Blocks pods that attempt to run as root or use privileged containers, reducing the risk of accidental or malicious escalations
  • Avoid Misuse of Host Resources: Prevents containers from mounting sensitive host directories or using host networking, which could expose the host node

How does Pod Security Admission work? What is the role of Pod Security Standards (PSS)? #


  • Pod Security Admission: Pod Security Admission (PSA) is a built-in admission controller that checks Pod specifications before they are created in the cluster
  • Enforce Security Rules at Namespace Level: PSA works by evaluating Pods against predefined rules configured per namespace
  • Leverage Pod Security Standards (PSS): Kubernetes defines three built-in security profiles — privileged, baseline, and restricted — to describe increasing levels of security
    • Privileged: No restrictions (for legacy workloads)
    • Baseline: Moderate restrictions; avoids known privilege escalations
    • Restricted: Strongest controls; disallows host access, privilege escalation, most volume types
  • Use Labels to Set Enforcement Mode: Admins apply labels like pod-security.kubernetes.io/enforce: restricted to a namespace to apply a PSS level
  • Offer Flexible Modes of Operation: PSA supports three modes:
    • enforce: Blocks Pods that do not meet the configured Pod Security Standard—used to strictly enforce security policies in production
    • audit: Logs violations of the security standard without blocking Pod creation—ideal for monitoring policy compliance
    • warn: Displays warnings to users at creation time if a Pod would violate policy—useful for developer awareness during testing or onboarding
  • Apply Different Modes for Dev and Prod: You can use warn in dev namespaces to educate developers, and enforce in prod to block insecure Pods
  • Avoid Complex Custom Policies: PSS gives you a standardized, Kubernetes-native way to enforce common security best practices without writing your own policies
# WHAT: Creates a new namespace for secure workloads
# WHY:  To isolate and enforce security settings on all Pods
#       created in this namespace
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps
  labels:
    # WHAT: Enforce Pod Security Standard (PSS) level
    # WHY:  Blocks Pods that violate this level
    # OPTIONS:
    # - privileged: least secure (allows everything)
    # - baseline:    reasonable defaults (e.g. no privileged)
    # - restricted:  most secure (e.g. must drop capabilities,
    #                run as non-root, no hostPath, etc.)
    pod-security.kubernetes.io/enforce: restricted

    # WHAT: Log violations without blocking the Pod
    # WHY:  Helps security teams monitor risk before enforcing
    # OPTIONS: same as 'enforce'
    pod-security.kubernetes.io/audit: restricted

    # WHAT: Warn users during Pod creation if it would violate policy
    # WHY:  Educates users early without failing the request
    # OPTIONS: same as 'enforce'
    pod-security.kubernetes.io/warn: restricted

    # By adding all three rules:
    # You block insecure Pods, log the attempt,
    # and show the user a warning—all at once.

    # ALTERNATIVE:
    # In dev namespaces, you can use:
    # pod-security.kubernetes.io/enforce: baseline
    # pod-security.kubernetes.io/audit: restricted
    # pod-security.kubernetes.io/warn: restricted
    # This allows slightly less secure Pods (baseline),
    # but still monitors and warns about restricted-level violations.    
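For reference, a Pod that should be admitted into the `secure-apps` namespace above needs to satisfy the `restricted` profile. A minimal sketch, with a placeholder image:

```yaml
# Sketch: a Pod meeting the 'restricted' Pod Security Standard.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok
  namespace: secure-apps
spec:
  securityContext:
    runAsNonRoot: true               # required by 'restricted'
    seccompProfile:
      type: RuntimeDefault           # required by 'restricted'
  containers:
    - name: app
      image: my-app:latest           # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]              # required by 'restricted'
```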

What is the need for SecurityContext? #


  • SecurityContext: Defines privilege and access settings for a Pod or container, helping enforce safe defaults
  • Avoid Running as Root: By default, containers may run as the root user, which can be dangerous if compromised
  • Enforce Non-Root Execution: Use runAsNonRoot and runAsUser to force applications to run with limited privileges
  • Restrict Privilege Escalation: Set allowPrivilegeEscalation: false to prevent processes from gaining more permissions than they start with
  • Drop Unnecessary Capabilities: Use capabilities.drop to remove Linux kernel capabilities like NET_ADMIN (Network Administration) or SYS_TIME (Change System Time) for tighter security
  • Apply Consistent Security Defaults: Define securityContext at the Pod level to apply secure settings to all containers inside a Pod
  • Improve Isolation and Defense-in-Depth: Helps reduce impact if a container is compromised, protecting host and other workloads
# WHAT: A secure Pod definition
# WHY:  Enforce strong security settings at both
#       Pod and container levels
# WHEN: Use when running sensitive workloads
# WHERE: Created in the 'default' namespace unless one is set
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  # Pod-level securityContext applies to all containers
  securityContext:
    runAsUser: 1000
    # WHAT: Run processes as user ID 1000
    # WHY:  Prevent root-level access in containers
    # WHEN: App runs safely as non-root

    fsGroup: 2000
    # WHAT: Shared file system group ID
    # WHY:  Enables shared volume access across
    #       all containers in the Pod
    # WHEN: Needed when volumes must be writable
    #       by multiple containers

  containers:
    - name: app
      image: myapp:latest
      # Container-specific security settings
      securityContext:
        allowPrivilegeEscalation: false
        # WHAT: Prevents processes inside the container
        #       from gaining more privileges than their parent
        #       (e.g., using setuid/setgid to become root)
        # WHY:  Blocks privilege escalation attacks

        readOnlyRootFilesystem: true
        # WHAT: Makes container's root filesystem read-only
        # WHY:  Prevents tampering or writing temp files
        # WHEN: App doesn't need to write to root FS
        # NOTE: Use tmp or volume mounts if write is needed

        capabilities:
          drop:
            - ALL
          # WHAT: Drop all Linux capabilities
          # WHY:  Follows least-privilege principle
          # WHEN: App doesn't require special system calls

List a few tools to enhance security posture of workloads running in Kubernetes #


| Tool | Category | Explanation |
| --- | --- | --- |
| Trivy | Vulnerability Scanning | Scans container images, code, and Kubernetes configs to detect known vulnerabilities and misconfigurations before deployment; helps ensure only secure workloads are promoted |
| Snyk | Vulnerability Scanning | Identifies security issues in application dependencies, Dockerfiles, and infrastructure-as-code; offers fix suggestions, making it ideal for development teams and CI/CD pipelines |
| Clair | Vulnerability Scanning | Automatically scans container images in registries for CVEs (Common Vulnerabilities and Exposures); often integrated into image repositories to prevent insecure images from being deployed |
| Falco | Runtime Threat Detection | Monitors running workloads for suspicious activity like shell access, file changes, or network connections; helps detect security breaches in real time |
| Seccomp | Runtime Hardening | Restricts the system calls a container can make, reducing the potential damage from compromised workloads; enforces minimal permissions at the Linux kernel level |
| AppArmor | Runtime Hardening | Defines security profiles that limit container behavior (e.g., file and network access) |
| SELinux | Host & Container Isolation | Applies strict access control rules to prevent containers from accessing host or other container resources; often required in high-security or government environments |
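Seccomp and AppArmor from the table above can be attached directly to a Pod spec. A sketch: the annotation form for AppArmor works on older clusters, while newer Kubernetes releases also expose an `appArmorProfile` field in `securityContext`; the Pod and image names are placeholders:

```yaml
# Sketch: apply runtime-hardening profiles to a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
  annotations:
    # Use the container runtime's default AppArmor profile
    # for the container named 'app' (older annotation form)
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault    # runtime's default seccomp filter
  containers:
    - name: app
      image: myapp:latest     # placeholder image
```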