A Kubernetes Ingress is a set of rules that exposes cluster services externally. For an Ingress to handle traffic, the cluster must run an Ingress controller, the component that implements the Ingress rules. Unlike other controllers, Ingress controllers are not started automatically by Kubernetes; instead, administrators choose and deploy one or more Ingress controllers within a cluster.
While the Kubernetes project supports and maintains the NGINX (Kubernetes-managed), AWS Load Balancer Controller, and GCE (Google Cloud Load Balancing) Ingress controllers, choosing a more advanced controller for a specific use case is equally supported.
Through an Ingress, cluster administrators set up traffic routing rules without exposing each service on a node port or creating a separate load balancer per service.
Ingress in Kubernetes comprises two primary components: the Ingress controller and the Ingress API object.
The Ingress controller reads and processes information from the Ingress object and implements the configurations within the cluster.
Just like any other Kubernetes resource, an Ingress object includes fields for apiVersion, kind, and metadata. The object’s name must be a valid DNS subdomain name, and annotations are used to configure advanced options specific to the Ingress controller. The spec section of the YAML file contains the information needed to configure a proxy server or load balancer.
The configuration specifications for a minimal Kubernetes Ingress resource would look similar to this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Ingress enables cluster administrators to direct HTTP(S) traffic by matching incoming requests against specific rules. Each Ingress rule contains the following specifications, which are combined in the example after this list:
host: the optional host to which the rules apply. If no host is specified, the Ingress resource applies the rules to all incoming HTTP traffic;
paths: a list of URL paths, each associated with a backend Service;
backend: the Service name and port (or, for custom backends, a resource defined in custom resource definitions) to which matching requests are routed.
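As a sketch of how these fields fit together, the hypothetical Ingress below applies a host rule and routes two path prefixes to different backend Services (the host and Service names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-host-ingress
spec:
  rules:
  - host: app.example.com          # the rule applies only to this host
    http:
      paths:
      - path: /api                 # requests under /api ...
        pathType: Prefix
        backend:
          service:
            name: api-service      # ... go to the API backend
            port:
              number: 8080
      - path: /                    # all other paths ...
        pathType: Prefix
        backend:
          service:
            name: web-service      # ... go to the web frontend
            port:
              number: 80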
Most cloud providers offer managed Kubernetes to simplify cluster setup and administration. These platforms include pre-installed Ingress controllers to manage external access to the Kubernetes services. The controllers feature seamless native integration with cloud services, reducing the manual effort required to set up Ingress.
Some popular cloud-specific Ingress controllers include the AWS Load Balancer Controller, GKE Ingress (built on Google Cloud Load Balancing), and the Azure Application Gateway Ingress Controller.
Compared to cloud-specific Ingress controllers, open-source Ingress controllers are platform agnostic, meaning they are not tied to any cloud vendor or managed Kubernetes platform. These controllers are mostly maintained by an active community of volunteers, although some are also managed by dedicated teams. Open-source controllers offer high feature velocity and no vendor lock-in, and are suitable for both high-volume and low-volume production systems.
Some popular open-source controllers include NGINX, Istio, Emissary and Traefik.
Though the standard Kubernetes Ingress resource defines basic load balancing and traffic routing capabilities, it is often insufficient for production workloads. To fill this gap, advanced Ingress controllers add scalable traffic management capabilities, while helping to implement resilient load balancing and seamless release cycles.
While there is a plethora of advanced Ingress controllers that offer useful features and support different use cases, the list below considers the following factors: protocol support, API gateway features, enterprise support, and advanced traffic management.
On account of its proven reputation for technical innovation and ease of use, the NGINX Ingress controller remains the most popular traffic management solution for Kubernetes and containerized applications. NGINX provides load balancing, caching, a web application firewall (WAF), and an API gateway for Kubernetes clusters, and is mostly used as a simple reverse proxy for dynamic workloads.
It is also important to note that there are two separate NGINX Ingress controller projects: community managed and NGINX managed. While both are equally popular, the latter is owned and managed by NGINX and comes with both free and premium options.
The NGINX Ingress controller includes several features for production-grade Kubernetes environments, many of them configured through annotations on the Ingress object.
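For illustration, the sketch below assumes the community-managed ingress-nginx controller and uses a few of its production-oriented annotations; the host, Service, and TLS secret names are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"        # redirect HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"       # raise the request body size limit
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"    # backend read timeout in seconds
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-tls                             # assumes a TLS secret with this name exists
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80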
The Istio Ingress Gateway is built on Envoy, which proxies data-plane traffic for simpler flow control. Operating at the edge of the Istio service mesh, the gateway receives incoming HTTP/TCP requests and forwards them to cluster services according to custom routing rules. These rules enable easy control of API calls and HTTP traffic between cluster services, while simplifying fundamental traffic management configurations, such as timeouts, circuit breakers, retries, and advanced deployment rollouts.
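These routing rules are typically expressed with Istio's own Gateway and VirtualService resources rather than the standard Ingress object. A minimal sketch, assuming Istio's default ingress gateway is installed (the host, Service name, and timeout/retry values are illustrative):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: darwin-gateway
spec:
  selector:
    istio: ingressgateway            # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: darwin-routes
spec:
  hosts:
  - "app.example.com"
  gateways:
  - darwin-gateway
  http:
  - match:
    - uri:
        prefix: /api
    timeout: 5s                      # per-route timeout
    retries:
      attempts: 3                    # retry failed requests up to three times
    route:
    - destination:
        host: api-service            # Kubernetes Service inside the mesh
        port:
          number: 8080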
Formerly known as Ambassador, Emissary-ingress is an open-source Kubernetes API gateway that relies on a declarative self-service deployment model. The gateway is built on the Envoy proxy to provide advanced traffic routing functions, such as automatic retries, rate limiting, circuit breakers, and load balancing. Emissary integrates with popular service mesh, distributed tracing, and observability solutions, so administrators can stay on top of Kubernetes application performance.
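Routing in Emissary-ingress is declared through its Mapping custom resource; a minimal sketch, with an illustrative hostname, path prefix, and Service name:
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: darwin-api-mapping
spec:
  hostname: "app.example.com"        # incoming host this mapping applies to
  prefix: /api/                      # match requests under /api/
  service: api-service:8080          # Kubernetes Service (and port) to route to
  retry_policy:
    retry_on: "5xx"                  # automatically retry on server errors
    num_retries: 3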
Traefik is an HTTP reverse proxy and load balancing platform that configures itself dynamically for Kubernetes service networking. To do this, the Ingress controller listens to the Kubernetes API, automatically generates routes to connect external requests to services, and updates configurations without requiring restarts. Along with leveraging Let’s Encrypt to offer TLS security for incoming requests, Traefik provides performance metrics through major observability platforms, such as StatsD, Prometheus, InfluxDB, and Datadog, for easier monitoring.
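Traefik can route standard Ingress objects, but its dynamic configuration is often expressed through its own IngressRoute custom resource. A minimal sketch (the API group version depends on the Traefik release, and the entry point, host, Service, and certificate resolver names are assumptions):
apiVersion: traefik.io/v1alpha1      # traefik.containo.us/v1alpha1 on older releases
kind: IngressRoute
metadata:
  name: darwin-ingressroute
spec:
  entryPoints:
  - websecure                        # Traefik's HTTPS entry point
  routes:
  - match: Host(`app.example.com`) && PathPrefix(`/api`)
    kind: Rule
    services:
    - name: api-service              # Kubernetes Service to route to
      port: 8080
  tls:
    certResolver: letsencrypt        # assumes a certificate resolver named "letsencrypt" is configured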

Quick note: In most cases, the Ingress controllers covered above can be used for a wide range of environments and workloads. The use cases discussed in this post represent the most suitable applications, but the controllers are not limited to them.
The Kubernetes Ingress object allows cluster administrators to define routing rules that govern access to cluster services. These rules outline the different specifications used to expose containerized applications outside the cluster. While there are multiple ways of directing incoming HTTP(S) traffic to applications in the cluster, Ingress is the most efficient, since it eliminates the need to create a separate load balancer for each service.
This article explored some of the advanced Ingress solutions for Kubernetes that support various use cases. A production-grade Kubernetes ecosystem relies on several external and internal services to work in tandem. Though the Kubernetes-managed NGINX Ingress controller is a popular option, it is strongly recommended to diligently assess your requirements and choose the controller that best supports your use case.