Azure Load Balancer


Azure Load Balancer Definition

An Azure load balancer is an ultra-low-latency Open Systems Interconnection (OSI) model Layer 4 load balancing service for inbound and outbound traffic over all TCP and UDP protocols. Built to handle millions of requests per second, an Azure load balancer distributes incoming traffic among healthy virtual machines (VMs) to deliver high availability, and a zone-redundant deployment maintains that availability across availability zones.

Users can configure the load balancer front end to include one or more public IP addresses. This front-end IP configuration makes the load balancer, and the applications behind it, accessible over the internet.

Virtual machines connect to a load balancer through their virtual network interface cards (NICs). A back-end address pool attached to the load balancer contains the IP addresses of those virtual NICs and is used to distribute traffic to the VMs. The load balancer monitors specific ports on each VM with a health probe to ensure only operational VMs receive traffic.

This image depicts an Azure Load Balancer taking in client requests, identifying which machines can manage them, and forwarding requests accordingly.

Azure Load Balancer FAQs

What is Azure Load Balancer?

Microsoft Azure, formerly called Windows Azure, is the public cloud computing platform from Microsoft. Azure offers a range of cloud services for compute, networking, analytics, and storage.

Users build and run applications with these services in the public cloud, and load balancing provides higher availability and scalability by extending incoming requests across many virtual machines (VMs). Users can create a load balancer in the Azure portal.

A load balancer fields client requests, identifies which machines can manage them, and forwards them accordingly. An Azure Load Balancer is a cloud-based system that allows users to control a set of machines as a single machine.

Azure Load Balancer serves as the single point of contact for clients at layer 4 of the Open Systems Interconnection (OSI) model. It distributes inbound traffic that arrives at the front end to pool instances on the backend. The backend pool instances can be virtual machine scale set instances or Azure Virtual Machines. Load-balancing rules and health probes determine how the traffic is distributed.

Public load balancers provide outbound connections and balance internet traffic for VMs inside the virtual network. They achieve this by translating the VMs' private IP addresses to public IP addresses.

Private (or internal) load balancers balance traffic inside a virtual network and are used where only private IP addresses are needed at the frontend. In a hybrid scenario, users can also access a load balancer frontend from an on-premises network.

Azure Load Balancer Overview

In terms of types of load balancers in Azure, Microsoft Azure offers a portfolio of load balancing and network traffic management services. These can be used alone or in combination; depending on your requirements, the optimal solution may use all of them.

The Azure load-balancing portfolio includes Traffic Manager, Application Gateway, and Azure Load Balancer.

To deliver global DNS load balancing, Traffic Manager assesses incoming DNS requests against the user's routing policy and responds with a healthy endpoint. Traffic Manager routing methods include the following (a brief DNS-lookup sketch follows the list):

  • Geography-based routing distributes application traffic to endpoints based on user geographic location.
  • MultiValue routing enables users to send IP addresses of multiple application endpoints in a single DNS response.
  • Performance routing reduces latency by sending the requestor to the closest endpoint.
  • Priority routing directs traffic to the primary endpoint and holds backup endpoints in reserve for failover.
  • Subnet-based routing distributes application traffic to endpoints based on user subnet or IP address range.
  • Weighted round-robin routing distributes traffic to each endpoint based on assigned weighting.
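
Because Traffic Manager operates at the DNS level rather than in the data path, its routing decision is visible through an ordinary DNS lookup. A minimal sketch in Python, assuming a hypothetical profile name (my-profile.trafficmanager.net):

```python
import socket

# Hypothetical Traffic Manager profile; replace with a real profile's DNS name.
PROFILE = "my-profile.trafficmanager.net"

# Each resolution returns the endpoint Traffic Manager selects for this client
# under the profile's routing method (weighted, priority, performance, etc.).
for attempt in range(3):
    ip = socket.gethostbyname(PROFILE)
    print(f"lookup {attempt + 1}: Traffic Manager answered with endpoint {ip}")
```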


The client then connects directly to the endpoint that Traffic Manager returns. When an endpoint is unhealthy, Traffic Manager detects it and redirects clients to healthy instances.

Functioning as an application delivery controller (ADC) as a service with various Layer 7 load-balancing capabilities, Application Gateway is essentially the Azure Layer 7 load balancer. It enables customers to offload CPU-intensive TLS termination to the gateway to optimize web farm productivity.

Other Layer 7 routing capabilities include the ability to host multiple websites with just one application gateway, cookie-based session affinity, round-robin distribution of incoming traffic, and URL path-based routing. Application Gateway can be configured as an internal-only gateway, an Internet-facing gateway, or a hybrid. A fully Azure managed platform, Application Gateway is highly available and scalable, with robust logging capabilities and diagnostics intended to enhance manageability.

Finally, rounding out the Azure SDN stack is Azure Load Balancer, offering low-latency, high-performance Layer 4 load-balancing services for TCP and UDP protocols. Azure Load Balancer manages inbound and outbound connections; users can manage service availability with TCP and HTTP health-probing options, define rules that map inbound connections to back-end pool destinations, and configure public and internal load-balanced endpoints.

The Azure load balancer architecture consists of five objects, tied together in the sketch that follows the list:

  • IP Address. A public IP address if it is intended to be public-facing or a private IP address if it is intended to be internal.
  • Backend Pool. A backend pool of virtual machines and rules about which should receive traffic.
  • Health Probes. These are probes or rules designed to assess resource health in the backend pool.
  • Load Balancing Rule. Combines the frontend IP address, backend pool, and health probe with rules for how to load balance traffic to the backend resources and which backend port in the pool should receive it.
  • Optional NAT Rule. This enables port network address translation (NAT) to one of the backend servers on a specific port in the pool.
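
A minimal sketch of how these objects fit together, using the azure-mgmt-network Python SDK; the subscription ID, resource names, and pre-existing public IP are hypothetical placeholders, and the optional NAT rule is omitted for brevity:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB, RG, LB = "<subscription-id>", "demo-rg", "demo-lb"  # hypothetical names
BASE = (f"/subscriptions/{SUB}/resourceGroups/{RG}/providers"
        f"/Microsoft.Network/loadBalancers/{LB}")

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

client.load_balancers.begin_create_or_update(RG, LB, {
    "location": "eastus",
    "sku": {"name": "Standard"},
    # 1. IP address: a public frontend (use a subnet instead for an internal LB).
    "frontend_ip_configurations": [{
        "name": "frontend",
        "public_ip_address": {"id": f"/subscriptions/{SUB}/resourceGroups/{RG}"
                                    "/providers/Microsoft.Network"
                                    "/publicIPAddresses/demo-pip"},
    }],
    # 2. Backend pool: VM NICs join this pool to receive traffic.
    "backend_address_pools": [{"name": "backend-pool"}],
    # 3. Health probe: only VMs answering on this port receive traffic.
    "probes": [{
        "name": "http-probe", "protocol": "Http", "port": 80,
        "request_path": "/", "interval_in_seconds": 15, "number_of_probes": 2,
    }],
    # 4. Load-balancing rule: ties frontend, pool, and probe together.
    "load_balancing_rules": [{
        "name": "http-rule", "protocol": "Tcp",
        "frontend_port": 80, "backend_port": 80,
        "frontend_ip_configuration": {"id": f"{BASE}/frontendIPConfigurations/frontend"},
        "backend_address_pool": {"id": f"{BASE}/backendAddressPools/backend-pool"},
        "probe": {"id": f"{BASE}/probes/http-probe"},
    }],
}).result()
```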


Using Traffic Manager, Application Gateway, and Azure Load Balancer together can enable many sites to achieve several design goals:

  • Internal Load Balancing. The Load Balancer stands in front of the high-availability cluster, ensuring only healthy and active database endpoints are exposed to the application and that only healthy databases receive connection requests. Users can optimize performance by distributing active and passive replicas across the cluster independently of the front-end application.
  • Independent Scalability. The application owner can scale request workloads independently because the web application workload is separated by content type. Application Gateway routes traffic based on application health and the specified rules.
  • Reduced Latency. Traffic Manager reduces latency by automatically directing the user to the closest region.
  • Multi-Geo Redundancy. If one region fails, Traffic Manager seamlessly and automatically routes traffic to the nearest region.


For Azure load balancer troubleshooting tips, information on how to configure a load balancer in Azure, and more, refer to the Azure load balancing page and the Azure load balancer documentation.

Azure Load Balancer Features and Advantages

Why use Azure Load Balancer? Use Azure Load Balancer to create highly available services and scale applications. Load Balancer provides high throughput and low latency, supports both inbound and outbound scenarios, and scales up for all TCP and UDP applications.

Use cases for Azure Standard Load Balancer include:

  • Load balance both internal and external Azure virtual machine traffic.
  • Distribute resources within and across zones to increase availability.
  • Configure Azure VM outbound connectivity.
  • Monitor load-balanced resources with health probes.
  • Access VMs in a virtual network by port and public IP address with port forwarding.
  • Support IPv6 load balancing.
  • View multi-dimensional metrics that can be grouped, filtered, and broken out through Azure Monitor.
  • Load balance services on multiple IP addresses, ports, or both.
  • Move load balancer resources, both internal and external, across Azure regions.


The Azure load balancer is designed on the zero-trust network security model and is a secure part of a virtual network by default. That virtual network is private and isolated, and the load balancer does not store user data. Use network security groups (NSGs) to permit only allowed traffic.
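
As a sketch of that last point, the following adds an allow rule for inbound HTTP to an existing NSG with the azure-mgmt-network Python SDK; the resource names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB = "<subscription-id>"  # hypothetical
client = NetworkManagementClient(DefaultAzureCredential(), SUB)

# Permit inbound TCP/80 from the internet; anything not explicitly
# allowed by an NSG rule is denied by the default rules.
client.security_rules.begin_create_or_update(
    "demo-rg", "demo-nsg", "allow-http",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "Internet",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "80",
    },
).result()
```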

Azure Load Balancer supports availability zones scenarios and can be zone redundant, zonal, or non-zonal. Increase availability by aligning distribution across zones and resources within them: select the appropriate frontend type and configure its zone-related properties.
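
On an internal Standard load balancer, for example, the frontend's zones property expresses this choice. A minimal sketch of just the frontend configuration, with a placeholder subnet ID:

```python
# Frontend IP configurations for an internal Standard load balancer.
# The zones property selects the availability-zone behavior.
SUBNET_ID = ("/subscriptions/<sub>/resourceGroups/demo-rg/providers"
             "/Microsoft.Network/virtualNetworks/demo-vnet/subnets/demo-subnet")

zone_redundant_frontend = {
    "name": "frontend",
    "subnet": {"id": SUBNET_ID},
    "zones": ["1", "2", "3"],  # served from every zone in the region
}

zonal_frontend = {
    "name": "frontend",
    "subnet": {"id": SUBNET_ID},
    "zones": ["1"],  # pinned to a single zone
}

# Omitting "zones" entirely yields a non-zonal frontend.
```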

Azure Load Balancer Disadvantages

There are a few drawbacks to using Azure Load Balancer, particularly compared with more advanced load balancing solutions.

  • The Azure Load Balancer does not support PaaS services.
  • Azure Load Balancer operates at Layer 4 of the OSI model for TCP and UDP, so it lacks intelligent, content-based traffic routing mechanisms for URL or HTTP traffic.
  • In contrast to many other load balancing solutions, Azure Load Balancer does not act as a reverse proxy or interact with the payload of a TCP or UDP flow.
  • For VMs that lack public IPs, Azure translates the private source IP address to a public source IP address using SNAT with port masquerading. This outbound public IP address cannot be assigned, reserved, or saved, so it is not compatible with a whitelist system.


Is There An Azure Kubernetes Load Balancer?

Azure Kubernetes Service (AKS) supports both inbound and outbound scenarios at Layer 4 of the Open Systems Interconnection (OSI) model. Inbound flows arrive at the front end of the load balancer and are then distributed to backend pool instances.

There are two purposes for a public Azure Load Balancer configuration integrated with AKS:

  • To deliver outbound connections to cluster nodes inside the AKS virtual network by translating the nodes' private IP addresses to a public IP address in the outbound pool.
  • To create highly available services and scale applications easily by providing access to applications via Kubernetes services.


Where private IPs are required on the frontend, a private or internal load balancer is needed. Internal load balancers function inside a virtual network; in a hybrid scenario, users can also access the frontend from an on-premises network.
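
A minimal sketch of exposing an AKS workload through Azure Load Balancer using the official Kubernetes Python client; the app name and namespace are hypothetical, and the commented-out annotation is the documented switch for an internal load balancer:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already wired to the AKS cluster

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="demo-app",
        # Uncomment for a private/internal Azure Load Balancer instead:
        # annotations={"service.beta.kubernetes.io/azure-load-balancer-internal": "true"},
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",           # AKS provisions an Azure Load Balancer frontend
        selector={"app": "demo-app"},  # pods backing the service
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```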

What Is Azure Load Balancer Auto-Scaling?

Autoscaling allows organizations to scale cloud services such as virtual machine instances or server capacities up or down automatically, based on defined conditions such as utilization levels, traffic, or other criteria. The Azure Load Balancer auto-scaling feature saves money by scaling instances in and out automatically.
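
For VM scale set backends, this is typically expressed as an Azure Monitor autoscale setting. A hedged sketch with the azure-mgmt-monitor Python SDK, where the resource IDs, thresholds, and capacities are illustrative assumptions:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUB = "<subscription-id>"  # hypothetical
VMSS_ID = (f"/subscriptions/{SUB}/resourceGroups/demo-rg/providers"
           "/Microsoft.Compute/virtualMachineScaleSets/demo-vmss")

client = MonitorManagementClient(DefaultAzureCredential(), SUB)

client.autoscale_settings.create_or_update(
    "demo-rg", "demo-autoscale",
    {
        "location": "eastus",
        "target_resource_uri": VMSS_ID,
        "profiles": [{
            "name": "cpu-profile",
            "capacity": {"minimum": "2", "maximum": "10", "default": "2"},
            "rules": [{
                # Scale out by one instance when average CPU stays above 70%.
                "metric_trigger": {
                    "metric_name": "Percentage CPU",
                    "metric_resource_uri": VMSS_ID,
                    "time_grain": timedelta(minutes=1),
                    "statistic": "Average",
                    "time_window": timedelta(minutes=5),
                    "time_aggregation": "Average",
                    "operator": "GreaterThan",
                    "threshold": 70,
                },
                "scale_action": {
                    "direction": "Increase",
                    "type": "ChangeCount",
                    "value": "1",
                    "cooldown": timedelta(minutes=5),
                },
            }],
        }],
    },
)
```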

How Does Azure Load Balancer Failover Work?

Automatic failover is a core feature of any application load balancer. When an origin within your load balancing configuration fails, failover redirects requests to help maintain system availability. However, because this failover protection occurs at the network and application layers of the Open Systems Interconnection (OSI) model (Layers 3 and 7), Azure Load Balancer, which operates at Layer 4, cannot provide this functionality alone.

Azure Load Balancer Monitoring

Azure Load Balancer collects metrics, logs, and other monitoring data in tandem with the Azure platform as a whole. Load Balancer also provides additional monitoring data through the following (a metrics query sketch follows the list):

  • Health Probes
  • REST API
  • Resource health status
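
A minimal sketch of querying one of these metrics (VipAvailability, the data-path availability metric) through Azure Monitor with the azure-mgmt-monitor Python SDK; the resource ID and timespan are illustrative assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUB = "<subscription-id>"  # hypothetical
LB_ID = (f"/subscriptions/{SUB}/resourceGroups/demo-rg/providers"
         "/Microsoft.Network/loadBalancers/demo-lb")

client = MonitorManagementClient(DefaultAzureCredential(), SUB)

# VipAvailability = data-path availability; DipAvailability = health-probe status.
metrics = client.metrics.list(
    LB_ID,
    timespan="2024-01-01T00:00:00Z/2024-01-01T01:00:00Z",
    interval="PT5M",
    metricnames="VipAvailability",
    aggregation="Average",
)

for metric in metrics.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.average)
```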


What is Session Persistence in Azure Load Balancer?

Session persistence, also called session affinity, client IP affinity, or source IP affinity, is a distribution mode. In this mode, connections from the same client will go to the same backend instance in the pool.

Client IP uses a two-tuple hash (source IP and destination IP) to route to backend instances, so successive requests from the same client IP address are handled by the same backend instance. Client IP and protocol uses a three-tuple hash (source IP, destination IP, and protocol type), so successive requests from the same client IP address and protocol combination are handled by the same backend instance.
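
In the load balancer's API, these modes correspond to the load_distribution property of a load-balancing rule: "Default" is the five-tuple hash, "SourceIP" the two-tuple, and "SourceIPProtocol" the three-tuple. A sketch of the rule fragment, with placeholder resource IDs:

```python
# Fragment of a load-balancing rule showing the distribution mode.
# "Default" = 5-tuple hash, "SourceIP" = 2-tuple (client IP affinity),
# "SourceIPProtocol" = 3-tuple (client IP and protocol affinity).
sticky_rule = {
    "name": "http-rule",
    "protocol": "Tcp",
    "frontend_port": 80,
    "backend_port": 80,
    "load_distribution": "SourceIP",  # pin each client IP to one backend
    "frontend_ip_configuration": {"id": "<frontend-ip-configuration-id>"},
    "backend_address_pool": {"id": "<backend-pool-id>"},
    "probe": {"id": "<probe-id>"},
}
```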

Does VMware NSX Advanced Load Balancer Offer a Microsoft Azure Load Balancer Alternative?

Yes. VMware NSX Advanced Load Balancer is an Azure load balancing solution. Performance across multi-cloud environments and enterprise-grade application networking services are essential to migrating applications to Azure. A cloud-native, elastic web application security and load balancing solution for Microsoft Azure with built-in application analytics, VMware NSX Advanced Load Balancer delivers an enterprise-grade, software-defined solution that includes an Intelligent Web Application Firewall, a Software Load Balancer, and a Container Ingress Controller for container-based applications.

The VMware NSX Advanced Load Balancer Azure platform delivers central management of L4–L7 application services across any cloud environment. The architecture offers enterprise-grade Azure load balancing services, including application security, caching, content switching, elastic load balancing, GSLB, predictive autoscaling, real-time insights into application performance, SSL offload, and end-to-end automation.

Find out how much easier Azure migration can be, all without third-party logging tools.