Kubernetes Security


Kubernetes Security Definition

Kubernetes is an extensible, portable, open-source container orchestration platform that dominates the enterprise market. A huge number of organizations manage some portion of their container workloads and services using Kubernetes, making it a massive, rapidly expanding ecosystem. This means Kubernetes security tools, support, and services are widely available, but there are also serious security risks in Kubernetes container environments, including runtime security incidents, misconfigurations, and other Kubernetes security vulnerabilities.

Kubernetes security risks generally correspond to phases in the container lifecycle. Therefore, best practices and practical recommendations for Kubernetes security—sometimes referred to as Kubernetes container security—are linked to responding correctly to threats at runtime, avoiding misconfigurations during the deploy and build phases, and remediating known vulnerabilities during the build phase. These Kubernetes security best practices are essential to securing cloud-native infrastructure and applications.

Diagram: the 4C's of cloud-native security: cloud, clusters, containers, and code.


Kubernetes Security FAQs

What is Kubernetes Security?

Kubernetes security vulnerabilities and challenges are varied and numerous:

Containers are widespread. Containers enable greater portability, speed, and the ability to leverage microservices architectures, but they can also increase your attack surface and create security blind spots. Their distributed nature makes it more difficult to quickly identify risks, misconfigurations, and vulnerabilities. It also becomes harder to maintain adequate visibility into your cloud-native infrastructure as more containers are deployed.

Misused images and image registries can pose security risks. Businesses need strong governance policies for building and storing images in trusted image registries. Build container images using approved, secure base images that are scanned regularly. Launch containers in a Kubernetes environment with only images from allowlisted image registries.

Containers must communicate. Containers and pods talk to each other and to internal and external endpoints to function properly. This makes for a sprawling deployment environment in which it is often prohibitively difficult to implement network segmentation. Should a malicious actor breach a container, how broadly that container communicates with other containers and pods determines how far the attacker can move within the environment.

Default Kubernetes configuration options are typically the least secure. Kubernetes is designed to simplify management and operations and to speed application deployment in keeping with DevOps principles, so its defaults tend to favor convenience over security. However, a rich set of controls exists for securing clusters and their applications effectively.

For example, Kubernetes network policies behave much like firewall rules, controlling how pods communicate. When a pod has an associated network policy, it is allowed to communicate only with the assets that the network policy defines. However, Kubernetes does not apply a network policy to a pod by default, meaning every pod can talk to every other pod in a Kubernetes environment, leaving them open to risk.

Management, access, and storage of secrets and sensitive data such as keys and credentials present another configuration risk in containers. Secrets should be mounted into read-only volumes rather than passed as environment variables.
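
As an illustration, a pod spec can mount a secret into a read-only volume instead of exposing its values through environment variables. The following is a minimal sketch using the official Python kubernetes client; the secret name, mount path, and image are hypothetical:

    from kubernetes import client

    # Mount the (hypothetical) secret "db-credentials" as a read-only volume
    # instead of passing its values as environment variables.
    secret_volume = client.V1Volume(
        name="db-credentials",
        secret=client.V1SecretVolumeSource(secret_name="db-credentials"),
    )

    container = client.V1Container(
        name="app",
        image="registry.example.com/app:1.2.3",  # hypothetical image
        volume_mounts=[
            client.V1VolumeMount(
                name="db-credentials",
                mount_path="/etc/secrets/db",
                read_only=True,  # the application reads credentials, never writes them
            )
        ],
    )

    pod_spec = client.V1PodSpec(containers=[container], volumes=[secret_volume])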

Compliance challenges for Kubernetes and containers are unique. Internal organizational policies, industry standards and benchmarks, and security best practices were often created not for cloud-native environments, but for traditional application architectures. Businesses must automate audits and monitoring to successfully operate at scale, given the dynamic and distributed nature of containerized applications.

Kubernetes runtime security challenges for containers are both familiar and new. Kubernetes can be treated as immutable infrastructure: when it is time to update, components are destroyed and recreated from a common template rather than changed or patched in place. This is a security advantage of containers, but the speed with which they are launched and removed, and their general ephemerality, are also a challenge. Detecting a threat in a running container means stopping and relaunching it, but not before identifying the root problem and reconfiguring whatever component caused it. Runtime security risks also include malicious processes running in compromised containers, such as cryptomining and network port scanning for open paths to valuable resources.

To cope with these Kubernetes security concerns and others, it is essential to integrate security into each phase of the container lifecycle: building, deployment, and runtime. Following best practices for building and configuring your Kubernetes cluster and deployments and securing Kubernetes infrastructure reduces overall risk.

Kubernetes Security Best Practices

Build Phase

To secure Kubernetes clusters and containers, start in the build phase by building secure container images and scanning images for known vulnerabilities. Best practices include:

Use minimal base images. Avoid base images that include shells or OS package managers. If you must include OS packages, which could contain vulnerabilities, remove the package manager in a later build step.

Remove extra components. Remove debugging tools from containers in production, and do not include or retain in images common tools that could be useful to attackers.

Update images and third-party tools. All images and tools you include should be up to date, with the latest versions of their components.

Identify known vulnerabilities with a Kubernetes security scanner. Scan images by layer for vulnerabilities in third-party runtime libraries and OS packages your containerized applications use. Identify vulnerabilities within your images and determine whether they are fixable. Label non-fixable vulnerabilities and add them to a filter or allowlist so the team does not get hung up on non-actionable alerts.
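
As a minimal sketch of the allowlisting idea, the snippet below filters scan findings down to actionable ones. It assumes scan results have already been exported as a list of dictionaries; the field names cve, severity, and fix_version are hypothetical and will differ by scanner:

    # Hypothetical scanner output: one dict per finding.
    findings = [
        {"cve": "CVE-2023-0001", "severity": "HIGH", "fix_version": "1.2.4"},
        {"cve": "CVE-2023-0002", "severity": "CRITICAL", "fix_version": None},
    ]

    # Non-fixable or explicitly accepted CVEs go on an allowlist so the team
    # is only alerted on actionable findings.
    allowlist = {"CVE-2023-0002"}

    actionable = [
        f for f in findings
        if f["cve"] not in allowlist and f["fix_version"] is not None
    ]

    for f in actionable:
        print(f"{f['cve']} ({f['severity']}): fix available in {f['fix_version']}")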

Integrate security. Use image scanning and other Kubernetes security testing tools to integrate security into your CI/CD pipeline. This automates security and generates alerts when fixable, severe vulnerabilities are detected.

Implement defense-in-depth and remediation policies. Discovering a security risk in a container image that a running deployment already uses demands immediate action, so keep a remediation workflow and policy checks in place. This allows the team to detect such images and update them immediately.

Deploy Phase

Before deployment, Kubernetes infrastructure must be configured securely. This demands visibility into what types of Kubernetes infrastructure will be deployed, and how. To properly engage in Kubernetes security testing and to identify and respond to security policy violations, you need to know the following (a short inventory sketch follows the list):

  • what will be deployed, including the pods that will be deployed, and image information, such as vulnerabilities or components
  • where deployment will happen, including which namespaces, clusters, and nodes
  • the shape of the deployment, for example its communication permissions and pod security context
  • what it can access, including volumes, secrets, and other components such as the orchestrator API or host
  • compliance—does it meet security requirements and policies?
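
The sketch below gathers part of this inventory with the official Python kubernetes client, listing every deployment, its namespace, and the images it will pull. It assumes a reachable cluster and a local kubeconfig:

    from kubernetes import client, config

    # Assumes a local kubeconfig; inside a cluster, use config.load_incluster_config().
    config.load_kube_config()
    apps = client.AppsV1Api()

    # List every deployment, where it runs, and which images it will pull.
    for dep in apps.list_deployment_for_all_namespaces().items:
        images = [c.image for c in dep.spec.template.spec.containers]
        print(dep.metadata.namespace, dep.metadata.name, images)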


Based on these factors, follow these best practices for Kubernetes security during deployment:

Isolate sensitive workloads and other Kubernetes resources with namespaces. A key isolation boundary, namespaces provide a reference for access control restrictions, network policies, and other critical security controls. Limit the impact of destructive actions or errors by authorized users and help contain attacks by separating workloads into namespaces.
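
As a minimal sketch, a dedicated namespace for a sensitive workload can be created with the Python kubernetes client; the namespace name and labels below are hypothetical, and the labels can later be used to scope network policies and access-control rules to this boundary:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # A dedicated namespace for a sensitive workload.
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name="payments",                       # hypothetical namespace
            labels={"environment": "production"},  # hypothetical label
        )
    )
    core.create_namespace(body=namespace)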

Deploy a service mesh. A service mesh tightly integrates with the infrastructure layer of the application, offering a consistent way to secure, connect, and observe microservices. A service mesh controls how services share data in their east-west communication in a distributed system, usually via sidecar proxies.

A service mesh delivers dynamic traffic management and service discovery, including traffic splitting for incremental rollouts, canary releases, and A/B testing, as well as traffic duplication or shadowing. Situated along the critical path for all system requests, a service mesh can also offer insight and added transparency into latency, error frequency, and request tracing. A service mesh also supports cross-cutting requirements such as reliability (circuit breaking and rate limiting) and security (TLS and service identity).

Control traffic between clusters and pods with Kubernetes network policies. Default Kubernetes configurations that allow every pod to communicate are risky. Network segmentation policies can prevent cross-container lateral movement by an attacker.
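
For example, a default-deny ingress policy makes pod-to-pod traffic opt-in rather than open by default. Below is a minimal sketch with the Python kubernetes client; the namespace name is hypothetical, and the equivalent YAML applied with kubectl is more common in practice:

    from kubernetes import client, config

    config.load_kube_config()
    networking = client.NetworkingV1Api()

    # Deny all ingress traffic to every pod in the namespace; specific allow
    # policies can then be layered on top for the communication that is needed.
    default_deny = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],
        ),
    )
    networking.create_namespaced_network_policy(namespace="payments", body=default_deny)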

Limit access to secrets. To prevent unnecessary exposure, ensure deployments access only secrets they require.

Assess container privileges. Kubernetes security assessment should consider container capabilities, privileges, and role bindings, which all come with security risk. The least privilege that allows intended function and capabilities is the goal.

Control pod security attributes, including privilege levels of containers, with pod security policies. These allow the operator to specify, for example (see the sketch after this list):

  • Do not allow privilege escalation.
  • Do not run application processes as root.
  • Use a read-only root filesystem.
  • Drop unnecessary and unused Linux capabilities.
  • Do not use the host network or process space.
  • Give each application its own Kubernetes Service Account.
  • Use SELinux options for more fine-tuned control.
  • If a container does not need to access the Kubernetes API, do not mount its service account credentials.
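
Several of the attributes above can also be expressed directly in a pod's security context. The following is a minimal sketch with the Python kubernetes client; the container name, image, and service account are hypothetical:

    from kubernetes import client

    container = client.V1Container(
        name="app",
        image="registry.example.com/app:1.2.3",  # hypothetical image
        security_context=client.V1SecurityContext(
            run_as_non_root=True,              # do not run application processes as root
            allow_privilege_escalation=False,  # do not allow privilege escalation
            read_only_root_filesystem=True,    # use a read-only root filesystem
            capabilities=client.V1Capabilities(drop=["ALL"]),  # drop Linux capabilities the app does not need
        ),
    )

    pod_spec = client.V1PodSpec(
        containers=[container],
        service_account_name="app-sa",           # dedicated service account (hypothetical)
        automount_service_account_token=False,   # omit API credentials if the pod does not need the Kubernetes API
        host_network=False,                      # do not use the host network
        host_pid=False,                          # do not share the host process space
    )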


Assess image provenance. To maintain Kubernetes security, don’t deploy code from unknown sources, and use images from allowlisted or known registries only.
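
A minimal sketch of the registry-allowlist check follows; the allowed registry prefixes are hypothetical:

    # Hypothetical allowlisted registry prefixes.
    ALLOWED_REGISTRIES = ("registry.example.com/", "gcr.io/my-project/")

    def image_allowed(image: str) -> bool:
        """Accept only images pulled from an allowlisted registry prefix."""
        return image.startswith(ALLOWED_REGISTRIES)

    print(image_allowed("registry.example.com/app:1.2.3"))      # True
    print(image_allowed("docker.io/somebody/unknown:latest"))   # False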

Extend image scanning into the deploy phase. New vulnerabilities can be disclosed between scanning and deployment, so enforce policies at the deploy phase as well, for example by rejecting images built more than 90 days ago or by using an automated tool.
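
A minimal sketch of the 90-day age check, assuming the image's creation timestamp has already been read from its registry manifest (the timestamp below is hypothetical):

    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(days=90)

    def image_too_old(created_iso: str) -> bool:
        """Reject images whose build date is more than 90 days in the past."""
        created = datetime.fromisoformat(created_iso)
        return datetime.now(timezone.utc) - created > MAX_AGE

    # Hypothetical creation timestamp taken from an image config/manifest.
    print(image_too_old("2023-01-15T10:30:00+00:00"))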

Use annotations and labels appropriately so teams can identify Kubernetes security issues and respond to them easily.

Enable Kubernetes role-based access control (RBAC). Kubernetes RBAC controls access authorization to a cluster’s Kubernetes API servers, both for service accounts and users in the cluster.
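
As a minimal sketch, the snippet below creates a narrowly scoped, read-only Role with the Python kubernetes client; the namespace and role name are hypothetical, and a RoleBinding (omitted here) would then grant the role to a specific user or service account:

    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Read-only access to pods in a single namespace: least privilege by default.
    pod_reader = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace="payments"),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],            # "" means the core API group
                resources=["pods"],
                verbs=["get", "list", "watch"],
            )
        ],
    )
    rbac.create_namespaced_role(namespace="payments", body=pod_reader)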

Runtime Phase

During the runtime phase, containerized applications face a range of new security challenges. The following practices help address them:

Leverage contextual information in Kubernetes. Contextual data from the build and deploy phases can help your team compare actual runtime activity against expected activity and identify suspicious behavior.

Extend vulnerability scanning and monitoring to container images in running deployments. This should include newly discovered vulnerabilities.

Tighten security with built-in Kubernetes controls. Limit the capabilities of pods to eliminate classes of attacks that require privileged access. For example, read-only root file systems can prevent any attacks that depend on writing to the file system or installing software.

Monitor active network traffic. Limit insecure and unnecessary communication by comparing existing traffic to allowable traffic based on Kubernetes network security policies.

Leverage process allowlisting to identify unexpected running processes. Creating this kind of allowlist from scratch can be challenging; look to security vendors with expertise in Kubernetes and containers.
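
A minimal sketch of the allowlisting comparison, assuming the set of observed process names has already been collected from a running container by some runtime sensor (all process names below are hypothetical):

    # Processes expected for this deployment (hypothetical baseline).
    expected = {"nginx", "nginx-worker"}

    # Processes actually observed in the running container (hypothetical input).
    observed = {"nginx", "nginx-worker", "xmrig"}

    unexpected = observed - expected
    if unexpected:
        print(f"Unexpected processes detected: {sorted(unexpected)}")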

Analyze and compare runtime activity of the same deployments in different pods. Containerized applications may be replicated for fault tolerance, high availability, or scale. Replicas should behave almost identically; if they do not, investigate further.

Scale suspicious pods to zero or stop and restart them in case of a Kubernetes security breach. Contain a successful breach using Kubernetes-native controls: stop and then restart instances of breached applications, or automatically scale suspicious pods to zero.
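
As a minimal sketch, a suspicious deployment can be scaled to zero with the Python kubernetes client; the deployment and namespace names are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Contain a suspected breach by scaling the affected deployment to zero replicas;
    # the controller terminates its pods while the team investigates.
    apps.patch_namespaced_deployment_scale(
        name="suspicious-app",   # hypothetical deployment
        namespace="payments",    # hypothetical namespace
        body={"spec": {"replicas": 0}},
    )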

Follow the CIS benchmarks for Kubernetes security best practices as well.

Does VMware NSX Advanced Load Balancer provide a Kubernetes Security Solution?

VMware NSX Advanced Load Balancer is based on a scale-out, software-defined architecture that provides observability, traffic management, Kubernetes security, and a rich set of tools to ease rollouts and application maintenance. VMware NSX Advanced Load Balancer offers an elastic, centrally orchestrated proxy services fabric with analytics, cloud-native web application security, and load balancing and ingress services for container-based applications running in Kubernetes environments.

VMware NSX Advanced Load Balancer provides the cloud-native approach to application networking services and traffic management that enterprises adopting Kubernetes need. It delivers scalable, enterprise-class container ingress to deploy and manage cloud-native applications in Kubernetes environments.

Learn how to secure your applications and data by making intent-based decisions to authorize, block, or quarantine access when a Common Vulnerability Scoring System (CVSS) score exceeds a predefined threshold here.

For more on the actual implementation of load balancing, security applications, and web application firewalls, check out our Application Delivery How-To Videos.