Network Load Balancer


Network Load Balancer Definition

A network load balancer distributes network traffic across multiple WAN links, virtual machines, or servers to avoid overloading any single host, without relying on complex routing protocols. A load balancer sits in front of the servers and acts as a “traffic cop,” routing client requests across all servers capable of fulfilling them in a way that maximizes capacity utilization and speed. The load balancer ensures that no server is overworked and redirects traffic to the remaining healthy servers when a server goes down. Likewise, when a new server is added to the server group, the load balancer automatically begins sending requests to it.

A cloud network load balancer efficiently distributes network load and client requests across multiple servers, ensuring reliability and high availability, and delivering the flexibility to add servers based on demand and remove them later. Load balancing aims to eliminate single points of failure and optimize application reliability.

A global server load balancer (GSLB) is a common type of network load balancer that distributes incoming user requests across groups of servers spread over multiple geographic regions. Because each user is served from nearby servers, whether measured geographically or in network hops, organizations enjoy highly available websites and users experience fast responses to requests in all but the most extreme cases of server and network failure.
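The GSLB decision described above can be sketched as a simple selection over per-region health and latency data. This is a minimal, hypothetical illustration in Python; the region names and round-trip-time figures are made up for the example.

```python
# Hypothetical GSLB decision: send the client to the healthy region
# with the lowest measured round-trip time (RTT).

def pick_region(rtt_ms, healthy):
    """Return the healthy region with the lowest RTT, or None if none are healthy."""
    candidates = {region: rtt for region, rtt in rtt_ms.items() if region in healthy}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

# Illustrative RTTs from one client's vantage point.
rtt = {"us-east": 12, "eu-west": 85, "ap-south": 190}

print(pick_region(rtt, {"us-east", "eu-west"}))   # us-east
print(pick_region(rtt, {"eu-west", "ap-south"}))  # eu-west (us-east is down)
```

A real GSLB would combine health checks, geolocation, and policy rather than raw RTT alone, but the core idea is the same: route each user to the nearest healthy group of servers.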

This image depicts requests from clients moving through a network load balancer and being distributed amongst servers.

Network Load Balancer FAQs

What is a Network Load Balancer?

Network Load Balancers use variables such as destination ports and IP addresses to distribute traffic. They function on OSI Layer 4, so they are not intended to be context-aware or to consider cues at the application layer such as cookie data, content type, user location, custom headers, or application behavior. Network Load Balancers consider only the network-layer information contained inside the packets they direct.
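Because a Layer 4 balancer sees only network-layer information, target selection can use nothing more than the connection's addresses and ports. A common approach is to hash the connection 5-tuple so every packet of one TCP connection lands on the same backend. This is a minimal sketch, not any vendor's actual algorithm; the backend addresses are placeholders.

```python
import hashlib

def pick_target(src_ip, src_port, dst_ip, dst_port, proto, targets):
    """Choose a backend from the connection 5-tuple only (Layer 4 information)."""
    five_tuple = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    index = int.from_bytes(digest[:4], "big") % len(targets)
    return targets[index]

backends = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

# The same connection always hashes to the same backend.
t1 = pick_target("203.0.113.7", 50312, "10.0.0.1", 443, "tcp", backends)
t2 = pick_target("203.0.113.7", 50312, "10.0.0.1", 443, "tcp", backends)
assert t1 == t2
```

Note what is absent: no cookies, headers, or URLs appear anywhere in the decision, which is exactly the Layer 4 constraint described above.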

Network Load Balancers offer the following benefits:

  • Ability to scale to millions of requests per second to handle volatile workloads
  • Support for static IP addresses
  • Ability to assign one elastic IP address per enabled subnet
  • Support for registering targets by IP address, including targets outside the VPC
  • Support for routing requests to multiple applications on a single EC2 instance, by registering each instance or IP address with the same target group using multiple ports
  • Support for independent monitoring of service health, with health checks defined at the target group level and many metrics reported there

 

Network Load Balancer Architecture

A Network Load Balancer serves as the single point of contact for clients, distributing incoming traffic across many registered targets and increasing application availability. Functioning on Layer 4 of the Open Systems Interconnection (OSI) model, a Network Load Balancer can handle millions of requests per second.

Users can enable multiple Availability Zones for a load balancer to increase the fault tolerance of their applications, and should ensure each enabled Availability Zone has at least one registered target for each target group. For each enabled Availability Zone, ELB creates a network interface through which every load balancer node in that Availability Zone gets a static IP address.

What is Load Balancing in Networking?

How does network load balancing work? Load balancing distributes network traffic smoothly and evenly across multiple functional, healthy targets to ensure no one server becomes overloaded. Load balancing spreads workload evenly to increase application responsiveness and availability. Modern software load balancers also enhance application security.
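The even spreading described above can be shown in a few lines. This is an illustrative sketch only; the server names are placeholders.

```python
from itertools import cycle

# Minimal illustration: spread incoming requests evenly over a fixed
# set of servers so no single one becomes overloaded.
servers = ["server-a", "server-b", "server-c"]
next_server = cycle(servers)

# Six requests arrive; each is assigned to the next server in turn.
assignments = [next(next_server) for _ in range(6)]
print(assignments)
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```

Each server receives exactly one third of the load, which is the essence of even distribution before health checks and weighting are layered on top.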

Load Balancing Techniques in Networking

Here are some of the more common methods and technologies used in network load balancing.

Network Load Balancer SSL

Secure Sockets Layer (SSL) is the standard security technology for establishing an encrypted link between a browser and a web server. A load balancer frequently decrypts SSL traffic before passing requests on; this is called SSL termination. It improves application performance by sparing the web servers the computational cost of decryption.

Unfortunately, SSL termination can expose the application to possible attack as it transmits unencrypted traffic between the load balancers and the web servers. This risk is reduced when the load balancer and the web servers are within the same data center.

SSL pass-through is another approach, in which the load balancer passes requests, still encrypted, to the web server for decryption. This delivers extra security, although it uses more CPU power on the servers.

Network Load Balancing Failover

Network load balancing failover is an automatic process that, along with failback, moves backend VMs to and from the active pool for the load balancer. These network load balancing techniques allow the system to remove unhealthy VMs and ensure the system is healthy.
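The failover/failback process above amounts to a reconciliation loop driven by health checks. The sketch below is a simplified illustration; `probe` stands in for a real health check (for example, a TCP connect), and the VM names are placeholders.

```python
def reconcile(active, standby, probe):
    """Move unhealthy backends out of the active pool (failover)
    and recovered backends back in (failback)."""
    for vm in list(active):
        if not probe(vm):
            active.remove(vm)
            standby.add(vm)      # failover: stop routing traffic here
    for vm in list(standby):
        if probe(vm):
            standby.remove(vm)
            active.add(vm)       # failback: resume routing traffic here
    return active, standby

active, standby = {"vm1", "vm2", "vm3"}, set()
healthy = {"vm1", "vm3"}         # vm2's health check is failing

reconcile(active, standby, lambda vm: vm in healthy)
print(sorted(active))   # ['vm1', 'vm3']
print(sorted(standby))  # ['vm2']
```

A production balancer runs this loop continuously, so an unhealthy VM stops receiving traffic within one or two check intervals and rejoins automatically once it recovers.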

Load Balancing and Security

Load balancing is critical to cyber security, especially as more organizations move to the cloud. The load balancer’s innate offloading function defends against distributed denial-of-service (DDoS) attacks by shifting malicious traffic toward a public cloud provider and away from the target server. Hardware defense against DDoS attacks, such as a perimeter firewall, can be prohibitively expensive and demand significant upkeep. Software types of network load balancing with cloud offload provide cost-effective and efficient security.

Load Balancing Algorithms

A variety of network load balancing methods exist, and which algorithm is best suited for a given use case depends on the workload and environment.

  • Least Connection Method. This load balancing algorithm selects the server with the fewest active connections and directs traffic to it. This is ideal when traffic produces many persistent connections that are unevenly distributed across servers.
  • Least Response Time Method. This algorithm directs traffic to the server with the lowest average response time and the fewest active connections.
  • Round Robin Network Load Balancing Method. This technique directs traffic to the first available server and then sorts that server to the bottom of the queue. This method is ideal when there are not many persistent connections and servers are of equal specification.
  • IP Hash. In this case, which server receives the request is determined by a hash of the client’s IP address, so a given client is consistently directed to the same server.
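Two of the selection policies above can be sketched directly from per-server statistics tracked by the balancer. This is an illustrative Python sketch; the server names and figures are made up for the example.

```python
def least_connections(stats):
    """Pick the server with the fewest active connections."""
    return min(stats, key=lambda s: stats[s]["conns"])

def least_response_time(stats):
    """Pick by lowest average response time, tie-broken by active connections."""
    return min(stats, key=lambda s: (stats[s]["avg_ms"], stats[s]["conns"]))

# Illustrative live statistics kept by the balancer.
stats = {
    "server-a": {"conns": 12, "avg_ms": 40},
    "server-b": {"conns": 3,  "avg_ms": 55},
    "server-c": {"conns": 7,  "avg_ms": 25},
}

print(least_connections(stats))    # server-b (fewest connections)
print(least_response_time(stats))  # server-c (fastest responses)
```

The two policies can disagree, as here: the least-loaded server is not necessarily the fastest one, which is why the choice of algorithm depends on the workload.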

What is a Network Load Balancing Cluster?

There is a difference between a failover cluster and network load balancing and between load balancing and server clustering generally.

A failover cluster provides redundancy and high availability but doesn’t distribute workload. Load balancing improves performance by distributing a workload across multiple servers. Server clustering combines servers to operate as a single entity.

Both network load balancing and server clustering coordinate multiple servers to manage a greater workload, but load balancers can more easily be integrated into existing architecture and used to distribute workload, while server clusters typically demand identical hardware.

Network load balancing clusters incorporate load balancing software and prioritize balancing jobs among all cluster servers. High performance clusters perform specific tasks very rapidly using multiple servers and support data intensive projects such as real-time data processing and live-streaming.

The most basic type of Kubernetes network load balancing is load distribution. Kubernetes operates two methods of load distribution through the kube-proxy feature.
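In Kubernetes, traffic is typically exposed behind a Service, and kube-proxy distributes connections across the matching Pods. A minimal, illustrative manifest for a Service of type LoadBalancer is shown below; the names, labels, and ports are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  type: LoadBalancer        # provisions an external load balancer where supported
  selector:
    app: my-app             # Pods with this label receive the traffic
  ports:
    - protocol: TCP
      port: 80              # port exposed by the Service
      targetPort: 8080      # port the Pods listen on
```

When running on a supported cloud, a Service of this type typically provisions the provider's Layer 4 load balancer in front of the cluster.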

Advantages and Disadvantages of Network Load Balancing

There are a number of advantages of Network Load Balancing to consider:

Connection-based Load Balancing on OSI Layer 4. Load balance both UDP and TCP traffic, routing connections to targets such as microservices and containers.

TLS Offloading. Network Load Balancer supports TLS session termination. This preserves the source IP address for back-end applications and enables users to delegate TLS termination tasks to the load balancer.

Sticky Sessions. Sticky sessions, defined by affinity with the source IP address at the target group level, route requests from one client to the same target for the duration of a session.
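Source-IP stickiness can be sketched as a small affinity table: the first request from a client IP is assigned a target, and later requests from that IP reuse it. This is a hypothetical illustration; the class and names are not from any real product.

```python
class StickyBalancer:
    """Toy balancer with source-IP affinity (illustrative only)."""

    def __init__(self, targets):
        self.targets = targets
        self.affinity = {}   # client IP -> pinned target
        self._next = 0       # round-robin cursor for first-time clients

    def route(self, client_ip):
        if client_ip not in self.affinity:
            # First request from this client: assign the next target in turn.
            self.affinity[client_ip] = self.targets[self._next % len(self.targets)]
            self._next += 1
        return self.affinity[client_ip]

lb = StickyBalancer(["t1", "t2"])
print(lb.route("198.51.100.4"))  # t1
print(lb.route("198.51.100.9"))  # t2
print(lb.route("198.51.100.4"))  # t1  (same client, same target)
```

The second request from `198.51.100.4` returns to the same target, which is the behavior sticky sessions guarantee for stateful applications.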

Low Latency. Network Load Balancer delivers low latency for sensitive applications.

Preserve Source/Remote IP Address. Network Load Balancer retains the client side source IP address and source ports for the incoming connections unmodified, allowing the back-end to see the client IP address and applications to use it in further processing.

Static IP support. Network Load Balancer automatically provides a single static IP address per enabled Availability Zone or subnet that applications can use as the load balancer’s front-end IP. This makes allowlisting an application at a firewall easier than it was with Classic Load Balancer.

Elastic IP support. Network Load Balancer provides the possibility of assigning one Elastic IP for each Availability Zone or subnet, essentially offering a fixed IP option.

Long-lived TCP Connections. Ideal for WebSocket kinds of applications, Network Load Balancer supports long-lived TCP connections that can be open for months or even years, which is perfect for adtech, gaming, IoT, and more.

Central API Management. With the same API as Application Load Balancer, Network Load Balancer enables users to conduct health checks, work with target groups, and support containerized applications by load balancing across multiple ports of the same instance.

Zonal Isolation. The Network Load Balancer is designed for application architectures in a single zone. It automatically fails over to healthy Availability Zones if the existing Availability Zone fails.

Reduced Bandwidth Usage. Most applications experience a cost reduction for load balancing with NLB compared to Classic Load Balancers or Application Load Balancers.

Network Load Balancer Limits

The main disadvantage of Network Load Balancer is the lack of SSL offloading. Because it operates at OSI Layer 4, Network Load Balancer does not support SSL offloading; Application Load Balancer, Classic Load Balancer, and other OSI Layer 7 load balancers and software load balancing platforms do.

When to Use Network Load Balancer

The best use cases for Network Load Balancer include:

  • A demand for seamless support of high-volume or spiky inbound TCP requests
  • A need to support an elastic or static IP address
  • To support more than one port on an EC2 instance while using container services

 

Network Load Balancing Services, Software, and Tools

Network load balancing services make implementing network load balancing easy. Similarly, network load balancing software saves time that would otherwise be spent learning how to configure network load balancing. And while most network load balancer configuration is not especially complex, why should your team waste time on manual configuration when network load balancing tools and software-defined networking load balancing eliminate the need?

Does Avi Offer Network Load Balancing?

In addition to full-featured load balancing, Avi offers advanced security, application analytics, application monitoring, multi-cloud traffic management, on-demand autoscaling, and more. The Avi Networks load balancer also deploys in virtualized, bare metal, or container environments, delivering enterprise-grade services that far exceed those of virtualized legacy appliances.

Other load balancers and load balancing platforms offer basic load balancing, but lack the advanced policy support, full-featured load balancing, and enterprise-class features Avi delivers:

  • Comprehensive persistence
  • Advanced HTTP content switching capabilities
  • DNS services and GSLB across multiple clouds
  • Customizable health monitoring

Learn more about how Avi delivers a superior network load balancing alternative here.