Multi Availability Zones for Azure Cloud

Microsoft Azure supports availability zone-based high availability, and Avi Vantage supports multiple availability zones for the Microsoft Azure cloud. The Service Engines and application load balancers provisioned in Azure also use availability zone-based HA. This document discusses the Microsoft Azure and Avi Vantage setup details for multiple availability zones in the Azure cloud.

For complete information on the availability zones supported for Azure, refer to the Microsoft Azure documentation on availability zones.

Configuration Setup

Microsoft Azure

On Microsoft Azure, ensure that the selected region supports availability zones.

Currently, Avi Vantage supports Service Engines across availability zones only within a single region.
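The region check above can be sketched as follows. This is an illustrative sample, assuming a hypothetical `REGION_ZONES` mapping; it is not an authoritative Azure catalog or part of Avi Vantage.

```python
# Illustrative sketch: verify that a chosen Azure region exposes availability
# zones before enabling zone-based HA. The region data here is a hypothetical
# sample, not an authoritative list from Azure.
REGION_ZONES = {
    "eastus2": {"1", "2", "3"},
    "westeurope": {"1", "2", "3"},
    "westcentralus": set(),  # example of a region without zones
}

def region_supports_zones(region: str) -> bool:
    """Return True if the region exposes at least one availability zone."""
    return bool(REGION_ZONES.get(region))

print(region_supports_zones("eastus2"))        # True
print(region_supports_zones("westcentralus"))  # False
```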

Avi Vantage

On Avi Vantage, enable the use_enhanced_ha flag in the Azure cloud configuration. This flag cannot be enabled if the cloud already has a Service Engine or a virtual service created.

The configuration workflow for creating virtual services and pools remains unchanged. Azure allows subnets, ALBs, and public IPs to span zones, so the virtual service needs only one VIP, which is scaled out to Service Engines in different availability zones.

Support for Three Availability Zones for Microsoft Azure

Starting with Avi Vantage release 21.1.1, up to three availability zones (AZs) are supported per Service Engine group, and the availability zones are now configurable. Prior to release 21.1.1, the number of availability zones was fixed at two and was not editable: two AZs were available in the Azure configuration at the cloud level, and a Service Engine could be created in either of them.

With this enhancement, a Service Engine can now be created in any of the three availability zones, as required. If a configured zone is not supported in the region, an error is raised during Service Engine creation.
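The zone validation described above can be sketched as follows. The region data and function name are hypothetical, used only to illustrate the failure mode; this is not Avi's internal logic.

```python
# Sketch of the behavior described above: requesting a zone that the region
# does not support causes Service Engine creation to fail with an error.
# REGION_ZONES is hypothetical sample data, not an authoritative list.
REGION_ZONES = {"eastus2": {"1", "2", "3"}, "exampleregion": {"1", "2"}}

def validate_se_zones(region: str, requested: list) -> None:
    """Raise ValueError if any requested zone is unavailable in the region."""
    available = REGION_ZONES.get(region, set())
    unsupported = [z for z in requested if z not in available]
    if unsupported:
        raise ValueError(
            f"Zones {unsupported} are not supported in region {region!r}; "
            "Service Engine creation would fail."
        )

validate_se_zones("eastus2", ["1", "2", "3"])  # passes silently
```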

Configuring Three Availability Zones

Use the availability_zones option, available under the azure_configuration mode with use_enhanced_ha enabled, to configure up to three availability zones for a Service Engine group. The following commands configure the AZs:

[admin:10-10-1-1]: > configure cloud Az1
[admin:10-10-1-1]: cloud> azure_configuration
[admin:10-10-1-1]: cloud:azure_configuration> use_enhanced_ha
Overwriting the previously entered value for use_enhanced_ha
[admin:10-10-1-1]: cloud:azure_configuration> availability_zones 1
[admin:10-10-1-1]: cloud:azure_configuration> availability_zones 2
[admin:10-10-1-1]: cloud:azure_configuration> availability_zones 3
[admin:10-10-1-1]: cloud:azure_configuration> save
[admin:10-10-1-1]: cloud> save

The following is the output after configuring three availability zones.

|   use_enhanced_ha            | True                                                                             |
|   use_managed_disks          | True                                                                             |
|   availability_zones[1]      | 1                                                                                |
|   availability_zones[2]      | 2                                                                                |
|   availability_zones[3]      | 3                                                                                |
|   use_standard_alb           | False                                                                            |

Note: Ensure that the availability_zones configuration matches the output shown above. Incorrect values for availability_zones will result in Service Engine creation failure.
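The constraint in the note above can be sketched as a simple check: each entry must be one of the zone identifiers 1, 2, or 3, with no duplicates. This is an illustrative validation, not Avi's actual code.

```python
# Hedged sketch of the availability_zones constraint: values must be drawn
# from {"1", "2", "3"} and must not repeat; anything else would lead to
# Service Engine creation failure as warned in the note above.
VALID_ZONES = {"1", "2", "3"}

def check_availability_zones(zones: list) -> bool:
    """Return True only if all zone values are valid and unique."""
    return len(zones) == len(set(zones)) and set(zones) <= VALID_ZONES

print(check_availability_zones(["1", "2", "3"]))  # True
print(check_availability_zones(["1", "1", "4"]))  # False
```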

Advanced Load Balancer Setup

Similar to the basic ALB, the cloud connector creates one standard-SKU internal ALB and one standard-SKU external ALB per SE group. Each virtual service has a single VIP and, optionally, a public IP. The Service Engine VMs are created in individual availability zones, and each virtual service is placed across at least two AZs. Because the cloud connector creates standard-SKU ALBs, availability is provided across the Azure AZs. The standard ALB requires Network Security Groups (NSGs) configured to explicitly allow inbound traffic to the public IPs.
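The zone placement described above can be sketched as a round-robin assignment of Service Engines to the configured zones, so that a scaled-out virtual service spans at least two AZs once it has two or more SEs. The function and names are hypothetical, not the cloud connector's API.

```python
# Illustrative round-robin placement of Service Engines across configured
# availability zones. With two or more SEs, the virtual service placed on
# them ends up spanning at least two AZs, as described above.
from itertools import cycle

def place_service_engines(se_names, zones):
    """Assign each Service Engine to a zone in round-robin order."""
    zone_cycle = cycle(zones)
    return {se: next(zone_cycle) for se in se_names}

placement = place_service_engines(["se-1", "se-2"], ["1", "2", "3"])
print(placement)  # {'se-1': '1', 'se-2': '2'} -> the VIP spans two AZs
```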

The scheme adopted for Network Security Group (NSG) creation for Azure is explained below.


Network Security Groups Setup

The cloud connector will create one NSG per cloud and associate it with each SE NIC created in that cloud. By default, the NSG has the following rules configured:

|   AllowVnetInBound              | Inbound  | 65000 | Allow |
|   AllowAzureLoadBalancerInBound | Inbound  | 65001 | Allow |
|   DenyAllInBound                | Inbound  | 65500 | Deny  |
|   AllowVnetOutBound             | Outbound | 65000 | Allow |
|   AllowInternetOutBound         | Outbound | 65001 | Allow |
|   DenyAllOutBound               | Outbound | 65500 | Deny  |

These rules ensure that all inbound and outbound traffic to the virtual network is allowed. The Azure load balancer can then probe the VM and the VM is able to access the internet.

In addition to these, when a public IP is configured, the cloud connector adds one rule per public IP to this NSG. This allows inbound traffic from any source to the public IP and port configured on the virtual service. The priorities of these rules start from 100. After configuration, the NSG inbound rules look as follows:


Azure supports up to 500 rules per NSG, which accommodates the number of public IPs.
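The rule bookkeeping described above can be sketched as follows: one rule per public IP, priorities allocated from 100 upward, bounded by the per-NSG rule limit. The data structures and function name are illustrative, not the cloud connector's implementation.

```python
# Sketch of NSG inbound-rule bookkeeping: one Allow rule per public IP,
# with priorities starting from 100 and a cap at the per-NSG rule limit.
MAX_RULES = 500  # per-NSG rule limit mentioned above

def add_public_ip_rule(rules: dict, public_ip: str, ports: list) -> dict:
    """Add an inbound Allow rule for a virtual service public IP."""
    if len(rules) >= MAX_RULES:
        raise RuntimeError("NSG rule limit reached")
    priority = 100 + len(rules)  # priorities start from 100
    rules[public_ip] = {"priority": priority, "ports": ports, "access": "Allow"}
    return rules

rules = {}
add_public_ip_rule(rules, "52.1.2.3", [80, 443])
add_public_ip_rule(rules, "52.1.2.4", [443])
print(rules["52.1.2.4"]["priority"])  # 101
```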

The following is the order of the setup:

  1. Creating NSG – One NSG will be created per cloud when the first SE is created for the cloud.
  2. Associating NSG to NIC – When the Service Engine is created, the NSG will be associated with the SE NIC at the time of creating the NIC. This ensures that the NIC comes up with the NSG.
  3. Creating NSG rule – When a virtual service with public IP is created, and an attach_ip call is issued to the cloud_connector, a rule for that public IP is configured in the NSG. The rule will have all the ports (or range of ports) configured as a part of the virtual service.
  4. Deleting NSG rule – When the public IP is removed, the corresponding NSG rule will be removed from the NSG as part of the periodic garbage collection cycle.
  5. Deleting NSG – NSG will be deleted when the cloud is deleted.
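The five lifecycle steps above can be condensed into an illustrative sketch: one NSG per cloud, attached to SE NICs, gaining a rule per public IP, and shedding rules (and eventually itself) on cleanup. The class and method names are hypothetical, not Avi internals.

```python
# Illustrative model of the NSG lifecycle steps listed above.
class CloudNSG:
    def __init__(self, cloud: str):
        self.cloud = cloud          # step 1: one NSG created per cloud
        self.nics = set()
        self.rules = {}

    def associate_nic(self, nic: str):
        self.nics.add(nic)          # step 2: NSG attached to each SE NIC

    def add_rule(self, public_ip: str, ports: list):
        self.rules[public_ip] = ports   # step 3: rule per public IP

    def remove_rule(self, public_ip: str):
        self.rules.pop(public_ip, None)  # step 4: removed on garbage collection

nsg = CloudNSG("Az1")               # NSG deleted with the cloud (step 5)
nsg.associate_nic("se1-nic0")
nsg.add_rule("52.1.2.3", [443])
nsg.remove_rule("52.1.2.3")
print(len(nsg.rules))  # 0
```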