Multiple Azure Load Balancer Support for OpenShift

Overview of Multiple Azure Load Balancers within an Avi Service Engine Group

By default, the Avi Controller creates one Azure Load Balancer (ALB) per SE group, which limits the number of virtual service IPs (VIPs) or ports that can be supported on a given set of Avi Service Engines (SEs). Starting with release 18.1.3, Avi Vantage supports multiple Azure Load Balancers (ALBs) within a single Avi Service Engine group. The Avi Controller also manages the creation of the additional Azure Load Balancers and the distribution of SEs across the Availability Sets.
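
The knobs that govern this behavior live on the ServiceEngineGroup object of the Avi REST API. The snippet below is a minimal sketch for inspecting them on the Default-Group, assuming basic authentication is allowed on the controller; the controller address, credentials, and the field names enable_multi_lb, max_rules_per_lb, and max_public_ips_per_lb are assumptions taken from the ServiceEngineGroup schema rather than values given in this article.

    # Minimal sketch: read the multi-LB knobs of a Service Engine group over the
    # Avi REST API. Controller address, credentials, and API version are placeholders.
    import requests

    CONTROLLER = "https://10.10.10.10"      # placeholder controller address
    AUTH = ("admin", "password")            # placeholder credentials
    HEADERS = {"X-Avi-Version": "18.1.3"}   # release that introduced multi-LB support

    resp = requests.get(
        f"{CONTROLLER}/api/serviceenginegroup",
        params={"name": "Default-Group"},
        auth=AUTH,
        headers=HEADERS,
        verify=False,  # lab controllers often use self-signed certificates
    )
    resp.raise_for_status()
    se_group = resp.json()["results"][0]

    # Azure-specific multi-LB fields (assumed names) on the ServiceEngineGroup object
    print(se_group.get("enable_multi_lb"))        # whether multiple ALBs are allowed
    print(se_group.get("max_rules_per_lb"))       # rule budget per ALB
    print(se_group.get("max_public_ips_per_lb"))  # public VIP budget per ALB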

Benefits of Using Multiple ALBs

The following are the benefits of using multiple Azure Load Balancers:

  • New ALB creation and scale-out are executed seamlessly, without any user intervention. The system automatically creates additional ALBs as and when needed.
  • This feature works with Basic ALBs, so no extra cost is involved.
  • The feature is limited to the configured SE group only, so no configuration changes are required for the remaining virtual services, or for the HA and scale-out parameters of other SE groups within the same cloud.

How Multiple Azure Load Balancers in OpenShift Work with Avi Vantage

The following points explain how the Avi Controller scales out virtual services using the multiple ALB feature.

  1. Avi Controller automatically detects all the Availability Sets (AS) where SEs are created.
  2. When a virtual service is created, the Avi Controller looks for an ALB with free space. If no existing ALB has free space, the Avi Controller creates a new ALB and points it at an unused AS. The virtual service is placed on the SEs in that AS.
  3. The number of virtual services per SE group is therefore proportional to the number of Availability Sets used in the OpenShift cluster, and is no longer limited by a single Azure LB.
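
The placement behavior in steps 2 and 3 can be pictured with the short sketch below. This is not the Avi Controller's implementation, only an illustration of the selection logic; the class, the function, and the per-ALB rule limit are hypothetical.

    # Illustrative sketch of the ALB selection logic described above; not the
    # Avi Controller's actual implementation. Names and limits are hypothetical.
    from dataclasses import dataclass

    MAX_RULES_PER_ALB = 150  # hypothetical per-ALB rule budget


    @dataclass
    class AzureLB:
        availability_set: str
        rules: int = 0


    def place_virtual_service(albs: list[AzureLB], availability_sets: list[str]) -> AzureLB:
        """Pick the ALB on which the next virtual service VIP is placed."""
        # Prefer an existing ALB that still has free rule capacity.
        for alb in albs:
            if alb.rules < MAX_RULES_PER_ALB:
                alb.rules += 1
                return alb
        # Otherwise create a new ALB pointed at an Availability Set that has no ALB
        # yet; the virtual service lands on the SEs in that set.
        used = {alb.availability_set for alb in albs}
        unused = [a for a in availability_sets if a not in used]
        if not unused:
            raise RuntimeError("No unused Availability Set left for a new ALB")
        new_alb = AzureLB(availability_set=unused[0], rules=1)
        albs.append(new_alb)
        return new_alb

In this simplified model, total capacity grows with the number of Availability Sets backing the OpenShift nodes, which is what point 3 describes.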

[Diagram: multiple Azure Load Balancer workflow across Availability Sets]

The diagram above illustrates the following workflow:

  • The OpenShift nodes are distributed across the following two Availability Sets:
    • Availability Set 1
    • Availability Set 2
  • The Avi Controller hosts virtual services for the applications (app1.analytics.avi.com and app200.analytics.avi.com). These applications communicate with the OpenShift nodes in Availability Set 1.
  • When the ALB rule limit is reached, the Avi Controller creates a new ALB (using Availability Set 2) within the same SE group.
  • The new application (app300.analytics.avi.com) communicates with the new ALB associated with Availability Set 2.

Prerequisites for Configuring Multiple Azure Load Balancers in OpenShift

  • Create OpenShift nodes across multiple Azure Availability Sets (AS).

Configuring Avi Vantage for Multiple Azure Load Balancers in OpenShift

The following are the configuration steps required to enable multiple Azure Load Balancers in OpenShift.

  1. Create OpenShift nodes across multiple Azure Availability Sets (AS). For more information on creating OpenShift nodes, refer to Nodes in OpenShift Cloud.
  2. Create a no access cloud.
  3. Set the enable_multi_lb option on the Default-Group of the no access cloud created in the previous step.
  4. Set the following knobs to restrict the number of rules and public VIPs used on each ALB (see the REST API sketch after this list, which covers steps 3 and 4).

    • Maximum rules per ALB
    • Maximum public VIPs per ALB
  5. Convert the no access cloud created in Step 2 to an OpenShift cloud. For information on integrating OpenShift with Avi Vantage, refer to Installing Avi Vantage in OpenShift/Kubernetes.

    Refer to the following articles for configuring the Azure and OpenShift cloud:
    • For configuring the OpenShift cloud, refer to Installing Avi Vantage in OpenShift.
    • For integrating Azure in OpenShift, refer to Azure IPAM for OpenShift.
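
Steps 3 and 4 can also be applied over the Avi REST API by updating the ServiceEngineGroup object of the no access cloud, as referenced in step 4 above. The snippet below is a sketch only: the controller address, credentials, cloud name, and limit values are placeholders, and the field names enable_multi_lb, max_rules_per_lb, and max_public_ips_per_lb are assumptions about the ServiceEngineGroup schema rather than values confirmed by this article.

    # Sketch: enable multiple ALBs and cap per-ALB rules/VIPs on the Default-Group
    # of the no access cloud. Addresses, credentials, names, and values are placeholders.
    import requests

    CONTROLLER = "https://10.10.10.10"
    AUTH = ("admin", "password")
    HEADERS = {"X-Avi-Version": "18.1.3"}

    # Fetch the Default-Group that belongs to the no access cloud
    # ("azure-no-access-cloud" is a placeholder cloud name).
    resp = requests.get(
        f"{CONTROLLER}/api/serviceenginegroup",
        params={"name": "Default-Group", "cloud_ref.name": "azure-no-access-cloud"},
        auth=AUTH, headers=HEADERS, verify=False,
    )
    resp.raise_for_status()
    se_group = resp.json()["results"][0]

    # Step 3: enable multiple ALBs within this SE group.
    se_group["enable_multi_lb"] = True
    # Step 4: restrict the number of rules and public VIPs used on each ALB.
    se_group["max_rules_per_lb"] = 150        # example value
    se_group["max_public_ips_per_lb"] = 30    # example value

    # Write the modified object back (read-modify-write via PUT).
    resp = requests.put(
        f"{CONTROLLER}/api/serviceenginegroup/{se_group['uuid']}",
        json=se_group, auth=AUTH, headers=HEADERS, verify=False,
    )
    resp.raise_for_status()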

Limitations while Using Multiple Azure Load Balancers

The following are the limitations of using multiple Azure Load Balancers in OpenShift:

  • For scaling out, Avi Vantage might need a higher number of SEs than with the standard single-ALB option.
  • It is difficult to upgrade existing SE groups to this scheme; hence, a new cloud needs to be created to enable the feature.