NGINX Ingress Controller Definition
A Kubernetes ingress controller is a specialized load balancer for containerized environments that bridges external and Kubernetes services, abstracting away the complexity of routing application traffic in Kubernetes. The NGINX ingress controller for Kubernetes runs in a Kubernetes environment with NGINX Plus or NGINX Open Source instances.
NGINX ingress controller for Kubernetes:
Monitors Kubernetes ingress resources and NGINX ingress resources to load balance traffic to containers on the Kubernetes platform
Manages networking, traffic, communication, and security on Layers 4 through 7
Deploys resources based on its configuration and automatically updates rules
Three NGINX ingress controllers for Kubernetes exist: the community-maintained controller from the Kubernetes project (kubernetes/ingress-nginx), and the controllers from NGINX based on NGINX Open Source and NGINX Plus.
NGINX Ingress Controller FAQs
What is NGINX Ingress Controller?
A Kubernetes ingress controller is a specialized load balancer for containerized environments. For enterprises managing containerized applications, Kubernetes has become the de facto standard. However, moving production workloads into Kubernetes generates new application traffic management complexity and resulting challenges.
A Kubernetes ingress controller bridges external and Kubernetes services, abstracting away Kubernetes application traffic routing complexity. Kubernetes ingress controllers:
- Load balance traffic from outside the cluster to containers running inside the Kubernetes platform
- Manage egress traffic inside a cluster for services that need to communicate with services outside the cluster
- Deploy and create ingress resources based on their Kubernetes API configuration
- Monitor running pods in Kubernetes and automatically update load-balancing rules as pods are added to or removed from a service
The NGINX ingress controller for Kubernetes is a production-grade ingress controller daemon that runs in a Kubernetes environment with NGINX Plus or NGINX Open Source instances. It monitors Kubernetes ingress resources and NGINX ingress resources to discover requests for services that require ingress load balancing. NGINX ingress controller for Kubernetes manages networking, controls traffic, and enhances security on Layers 4 through 7.
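The routing behavior described above is driven by ingress resources that the controller watches. Below is a minimal sketch of such a resource; the hostname, service name, and port are hypothetical placeholders, and it assumes the NGINX ingress controller is installed under the ingress class `nginx`:

```yaml
# Minimal Ingress resource the NGINX ingress controller would watch.
# Hostname, service name, and port are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # routes this resource to the NGINX controller
  rules:
  - host: app.example.com          # external hostname to match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service  # cluster Service that receives the traffic
            port:
              number: 80
```

When this resource is created, the controller translates the host and path rules into its NGINX configuration and begins load balancing matching requests to the pods behind the named service.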
Various ingress controllers use NGINX, and there are three iterations of the NGINX ingress controller for Kubernetes: the community-maintained controller from the Kubernetes project (kubernetes/ingress-nginx), the NGINX Open Source-based controller from NGINX, and the NGINX Plus-based controller from NGINX.
Some features for production-grade app delivery are unique to the NGINX Plus version.
How Does NGINX Ingress Controller Work?
An ingress controller is a Kubernetes cluster component that configures an HTTP load balancer based on the cluster user’s ingress class and other resources. To understand how the NGINX ingress controller works, it is essential to consider the NGINX ingress controller configuration.
The goal of the NGINX ingress controller is to assemble the NGINX configuration file (nginx.conf). After any change to the configuration (except changes that affect only an upstream configuration), NGINX must reload. For upstream-only changes, the controller uses lua-nginx-module to update endpoints in place, avoiding a reload.
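To make the assembly step concrete, the generated nginx.conf contains upstream and server blocks derived from the ingress rules. The fragment below is purely illustrative; the block names and addresses are hypothetical, not output from a real controller:

```nginx
# Illustrative fragment of a generated nginx.conf (names and IPs are hypothetical).
upstream default-example-service-80 {
    # Pod endpoints for the backing Service; with lua-nginx-module these
    # can be updated in memory without reloading NGINX
    server 10.244.1.12:8080;
    server 10.244.2.7:8080;
}

server {
    listen 80;
    server_name app.example.com;   # from the Ingress host rule

    location / {
        proxy_pass http://default-example-service-80;
    }
}
```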
The most important piece of NGINX ingress controller architecture is the NGINX model. Successful NGINX ingress controller deployment hinges upon understanding when and how to replace the NGINX model.
Typically, an ingress controller checks for updates or needed changes using the synchronization loop pattern. To achieve this, the user builds a model that reflects the state of the cluster at a point in time, using various objects from the cluster, including ConfigMaps, Endpoints, Ingresses, Secrets, and Services.
FilteredSharedInformer, a Kubernetes informer, allows the user to react to changes such as adding or removing objects. However, because there is no way to predict whether any one change will affect the final configuration file, the user must create a new model on every change based on the cluster state for comparison.
If the new model is the same as the running model, there is no need for a reload or a new NGINX configuration. If the changes are limited to endpoints, the system sends the new endpoints to a Lua handler, again avoiding a reload and a new NGINX configuration. However, if there are differences between the new and running models beyond mere endpoints, this triggers the creation of a new NGINX configuration and a reload.
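The reload decision above can be sketched in a few lines of Python. This is a simplified model of the comparison logic, not the controller's actual Go implementation; the cluster model is reduced to just a configuration part and an endpoints part:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    """Point-in-time snapshot built from cluster objects (simplified)."""
    config: tuple      # everything that shapes nginx.conf (hosts, paths, TLS, ...)
    endpoints: tuple   # upstream pod endpoints only

def reconcile(running: Model, new: Model) -> str:
    """Decide what to do when the cluster state changes."""
    if new == running:
        return "no-op"                  # identical model: nothing to do
    if new.config == running.config:
        return "update-endpoints"       # endpoints-only change: hand to the Lua handler
    return "reload"                     # anything else: write nginx.conf and reload

running = Model(config=("host: app.example.com",), endpoints=("10.0.0.1:80",))

# A pod was added: only endpoints differ, so no reload is needed.
scaled = Model(config=running.config, endpoints=("10.0.0.1:80", "10.0.0.2:80"))
print(reconcile(running, scaled))   # update-endpoints

# An ingress rule changed: the configuration differs, so NGINX reloads.
changed = Model(config=("host: api.example.com",), endpoints=running.endpoints)
print(reconcile(running, changed))  # reload
```

Because a new model is built on every change, the comparison itself is what decides between doing nothing, updating endpoints in memory, and triggering a full reload.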
For information about Helm installation and a Helm chart, see the NGINX documentation.
NGINX Ingress Controller Monitoring
NGINX ingress controller metrics are exposed in the Prometheus format. To expose metrics, edit the NGINX ingress controller service with the relevant annotations and port configurations. Then, edit the daemonset.yaml configuration file of the ingress controller to detect the exposed port. Finally, either create AdditionalScrapeConfigs or configure an additional serviceMonitor to enable the Prometheus instance to expose the metrics by scraping the ingress controller endpoints.
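The steps above can be sketched as annotations on the controller's Service plus a scrape configuration. The port number, names, and labels below are assumptions for illustration; the actual values depend on the deployment:

```yaml
# Hypothetical excerpt: annotate the ingress controller Service so
# Prometheus discovers the metrics endpoint (port 9113 is an assumption).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  selector:
    app: nginx-ingress
  ports:
  - name: prometheus
    port: 9113
    targetPort: 9113
---
# Alternative for the Prometheus Operator: a ServiceMonitor that
# scrapes the same named port by label selection.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  endpoints:
  - port: prometheus
```

With either approach in place, the Prometheus instance scrapes the controller endpoints and the NGINX metrics become available for dashboards and alerting.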
NGINX Controller vs Other Kubernetes Ingress Controllers
To make the right architectural choice to deploy a Kubernetes cluster for a specific application, assess the requirements from the business, the developers, and the application itself. Here are a few common comparisons:
Traefik vs NGINX ingress controller
The NGINX ingress controller service uses the NGINX web server as a proxy. The Traefik Kubernetes Ingress provider is an ingress controller for the Traefik proxy.
Originally, Traefik was created to route requests within the dynamic environments of microservices. This led to its canary releases, continuous configuration updates with no restarts, metrics export, REST API, support for multiple load balancing algorithms, support for various protocols, web UI, and many other useful features. Traefik also supports Let’s Encrypt certificates out of the box. However, to run the controller in high availability mode, users must install and configure a separate key-value store.
Application Load Balancer (ALB) vs NGINX ingress controller
ALB delivers Layer 7 load balancing of HTTP and HTTPS traffic for Amazon Web Services (AWS) users disappointed by the limited features of the Classic Load Balancer. However, ALB still lacks the full range of capabilities of the NGINX ingress controller, such as the advanced load balancing of NGINX Plus and the dedicated reverse proxy features of NGINX.
HAProxy Ingress vs NGINX ingress controller
HAProxy is a load balancer and proxy server. It offers DNS-based service discovery, a “soft” update to configuration without loss of traffic, and dynamic configuration through an API as part of the Kubernetes cluster. HAProxy supports the developer emphasis on optimization, high speed, and efficiency of resource consumption. HAProxy also supports a wide range of load-balancing algorithms.
Does Avi Offer Advanced Kubernetes Ingress Solutions?
Yes. Avi Networks offers an advanced Kubernetes ingress controller with multi-cloud application services and enterprise-grade features. Avi’s machine learning based automation and observability bring container-based applications into enterprise production environments.
Avi Vantage is based on a software-defined, scale-out architecture that provides container services for Kubernetes beyond typical Kubernetes ingress controllers, such as observability, security, traffic management, and a rich set of tools to simplify application rollouts and maintenance.
Avi Networks provides a centrally orchestrated, elastic proxy services fabric with dynamic load balancing, micro-segmentation, security, service discovery, and analytics for containerized applications running in K8s environments. This container services fabric consists of a centralized control plane and distributed proxies:
- Avi Controller: A central control, management, and analytics plane that communicates with the Kubernetes controller, deploys and manages the lifecycle of data plane proxies, configures services, and aggregates telemetry analytics from the Avi Service Engines.
- Avi Service Engine: A service proxy providing ingress services such as load balancing, WAF, GSLB, IPAM/DNS in the dataplane and reporting real-time telemetry analytics to the Avi Controller.
Avi Networks has a cloud connector model that is agnostic to the underlying Kubernetes cluster implementations. The Avi Controller integrates via REST APIs with Kubernetes ecosystems including Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Red Hat OpenShift, VMware Pivotal Container Services (PKS), VMware Tanzu Kubernetes Grid (TKG), and more.