LBaaS v2 Driver

Overview

Load Balancer as a Service (LBaaS) helps meet the agility and application-traffic demands of organizations implementing private cloud infrastructure. Using an ‘as-a-service’ model, LBaaS gives application teams a simple way to spin up load balancers.

About Avi Vantage

The Avi Vantage Platform provides enterprise-grade distributed ADC solutions for on-premises as well as public-cloud infrastructure. Avi Vantage also provides built-in analytics to diagnose and improve the end-user application experience, while simplifying operations for network administrators.

Avi Vantage is a complete software solution which runs on commodity x86 servers or as a virtual machine and is entirely accessible via REST API calls.

The Avi Vantage OpenStack LBaaS Solution

Avi Vantage integrates natively with OpenStack components to deliver a fully orchestrated, self-service driven OpenStack LBaaS architecture. Avi Vantage provides on-demand autoscaling in response to changing performance and load requirements, automated provisioning of L4-L7 services, and web application security resulting in intelligent LBaaS for OpenStack implementations.

OpenStack LBaaS v2

OpenStack LBaaS v1 is simple and lacks critical features such as the ability to listen on multiple ports for a single IP and SSL support. LBaaS v2 allows configuration of multiple listeners on a single load balancer IP address. A listener is mapped to, and listens for, requests made to a given network port. Multiple listeners on a load balancer allow requests to arrive via multiple network ports, as illustrated in the example below.

Figure: LBaaS v1 and v2
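
For example, a single load balancer with listeners on two ports can be created as follows (a sketch using the neutron LBaaS v2 CLI; lb1, private-subnet, and the listener names are illustrative):

     $> # Create the load balancer with its VIP on an existing subnet.
     $> neutron lbaas-loadbalancer-create --name lb1 private-subnet
     $> # Attach two listeners to the same load balancer IP, one per port.
     $> neutron lbaas-listener-create --name http-listener --loadbalancer lb1 --protocol HTTP --protocol-port 80
     $> neutron lbaas-listener-create --name https-listener --loadbalancer lb1 --protocol HTTPS --protocol-port 443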

This article explains the Avi driver for LBaaS v2.0 in detail.

Mapping LBaaS Objects to Avi Objects

Highlights of Avi Object Model

  • The virtual service is the central object describing a load balancer instance in Avi Vantage.
  • Each virtual service has an IP address and one or more service ports on which it listens.
  • Each virtual service has a default pool.
  • A pool consists of one or more servers, and traffic is load balanced across those servers.
  • A virtual service can have one or more TLS certificates.
  • Pools cannot be shared across virtual services.
  • Multiple virtual services can listen on the same IP address, but cannot have overlapping service ports.

Owing to this mismatch between the Avi object model and the LBaaS v2 model, an LBaaS load balancer cannot be mapped directly to a single virtual service object.

LBaaS Load Balancer

When the LBaaS load balancer is created, the Avi Driver does not instantiate any new objects on Avi.

Converting LBaaS v2 Objects to Avi Vantage

  1. For each listener in LBaaS v2, a virtual service is created on Avi Vantage. This means a load balancer in LBaaS v2 may end up with multiple virtual services on Avi Vantage.
  2. The OpenStack pool is replicated for each listener that uses it.
  3. For each Server Name Indication (SNI) container on a listener, a child virtual service is created in Avi Vantage. A new Avi pool is also created for each LBaaS v2 pool, for each SNI associated with that listener.
  4. In health monitors, HTTP response codes cannot be mapped directly from LBaaS v2 to Avi. Avi only supports ranges such as 2XX, 3XX, and 4XX, whereas the LBaaS APIs allow specific codes to be checked against.
  5. In the case of certificates managed in Barbican, the Avi driver obtains the certificate from Barbican and pushes it to the Avi Controller.
  6. After the Avi driver creates virtual services and pools on Avi Vantage based on LBaaS objects, users can still make changes to those objects directly via Avi APIs. This can be desirable where the LBaaS APIs lack features that are available via direct Avi APIs. For example, users may wish to monitor a different port than the one configured in a pool object to determine whether the backend members of that pool are alive. Such a configuration can be done via Avi APIs after the relevant objects are created.
  7. For virtual service, pool, and health monitor fields that are relevant from the LBaaS API’s perspective, on any LBaaS object update the Avi driver performs a GET API operation to obtain the current object, modifies it based on the configuration of the corresponding LBaaS object, and then performs a PUT operation. Thus, any changes made via Avi APIs to fields that are exposed through the LBaaS APIs will be lost. A sketch of this pattern follows the list.
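
A minimal sketch of this GET-modify-PUT pattern against the Avi REST API, using curl with basic authentication (the controller address, credentials, and object names are illustrative; depending on the Controller version, an X-Avi-Version header may also be required):

     $> # GET the current virtual service; a lookup by name returns a collection, so extract the single result.
     $> curl -k -u admin:PASSWORD 'https://AVI-CONTROLLER-IP/api/virtualservice?name=VS-NAME' > vs.json
     $> # ...edit vs.json to reflect the corresponding LBaaS object's configuration...
     $> # PUT the modified object back; fields managed through the LBaaS APIs are overwritten on every such update.
     $> curl -k -u admin:PASSWORD -X PUT -H 'Content-Type: application/json' -d @vs.json 'https://AVI-CONTROLLER-IP/api/virtualservice/VS-UUID'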

Installing Avi Driver for LBaaS v2

The Avi driver is available in the following distribution formats.

  • pip Wheel
  • DEB
  • Red Hat Package Manager (RPM)

Install the driver on the machines running neutron-server (the Neutron API server, not the Neutron network node) using one of the following commands.

  • $> pip install <WHEEL>
  • $> dpkg -i <DEB>
  • $> rpm -i <RPM>

Configuration (neutron.conf)

  1. Enable the use of the Avi driver for LBaaS API calls by adding the following to your neutron.conf. Under the [service_providers] section, add Avi Vantage as shown below.

    
     [service_providers]
     service_provider = LOADBALANCERV2:avi_adc:avi_lbaasv2.avi_driver.AviDriver:default
     

    This makes Avi the default driver. If another driver should be the default instead, omit “:default”.

    avi_adc is the name of the service provider. It can be changed by modifying the service_provider entry; the [avi_adc] configuration section below must then use the same name.

  2. Add a section for the Avi Controller configuration. Provide the IP address of your Avi Controller and the admin credentials used to access it.

    
     [avi_adc]
     address=AVI-CONTROLLER-IP
     user=admin
     password=PASSWORD
     
  3. The field use_placement_network_for_pool is set to false by default. If an OpenStack deployment has tenants with the same subnet ranges (Classless Inter-Domain Routing (CIDR) blocks), for example 10.10.20.0/24, the first matching subnet is chosen by default. Set use_placement_network_for_pool=True to avoid this conflict and ensure the correct IP is chosen, as shown below.
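
    For example, extending the [avi_adc] section shown above:

    
     [avi_adc]
     address=AVI-CONTROLLER-IP
     user=admin
     password=PASSWORD
     use_placement_network_for_pool=True
     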

  4. Restart neutron-server after updating the configuration files. For example, $> service neutron-server restart.
    Note: This command will vary depending on your installation.

SE Group and VRF Context

A user creates a load balancer and other objects, such as a listener, pool, pool member, and certificates, using the Load Balancer as a Service (LBaaS) v2 API. The Avi LBaaS driver makes an API call to create each required object.

Avi Vantage provides the capability of creating Service Engine groups (SE groups) in a cloud. Multiple SE groups with varying properties can exist in a cloud. For example, SEs in one group (say, SE Group 1) can be configured with more CPU and RAM for production use, while SEs in another group (SE Group 2) are used for testing.

Similarly, virtual routing and forwarding (VRF) contexts can be created in Avi Vantage. VRF is a method of isolating traffic within a system.

LBaaS v2 load balancer creation is enhanced with flavors, as discussed below.

Implementing Flavor Framework

Three steps have to be executed to use flavors during load balancer creation.

Note: As a prerequisite, the SE groups or VRF contexts must be created on the Controller by the admin prior to creating the flavors.

Step 1: Creating flavors.
Flavors are created by the admin, as shown below.
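
For example (a sketch using the neutron flavor-framework CLI of that era; the exact argument order may vary by release):

     $> # Create the two flavors of service type LOADBALANCERV2.
     $> neutron flavor-create SEG1 LOADBALANCERV2
     $> neutron flavor-create SEG2 LOADBALANCERV2
     $> # List the flavors; the output below is truncated.
     $> neutron flavor-list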


+--------------------------------------+------+----------------+---------+
| id                                   | name | service_type   | enabled |
+--------------------------------------+------+----------------+---------+
| 351dc844-c6d5-43d1-a1e8-937d31772f45 | SEG2 | LOADBALANCERV2 | True    |
| c84d0347-9e3d-4a4c-9d2c-e53425a46227 | SEG1 | LOADBALANCERV2 | True    |
+--------------------------------------+------+----------------+---------+
-----------------------Output Truncated-----------------------------------

SEG2 and SEG1 are the flavors of service type LOADBALANCERV2.

Step 2: Creating flavor profiles.
A flavor profile is created where the metainfo is specified. Metainfo is used to provide additional information to the driver implementing a load balancer object, for example, to specify the SE group to be used, as shown below.
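
For example (a sketch; --driver and --metainfo are the standard neutron flavor-profile options, with values taken from the listing below):

     $> neutron flavor-profile-create --driver avi_lbaasv2.avi_driver.AviDriver --metainfo "{'se_group_ref': '/api/serviceenginegroup?name=se-group-1'}"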


root@admin1-ocata:~# neutron flavor-profile-list
+--------------------------------------+-------------+---------+-------------------------------------------------------------+
| id                                   | description | enabled | metainfo                                                    |
+--------------------------------------+-------------+---------+-------------------------------------------------------------+
| 4902407e-951a-47f9-be9e-92bf1cfdb7cd |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
| f755528c-0d08-45fb-a88b-54748586990b |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-2'} |
+--------------------------------------+-------------+---------+-------------------------------------------------------------+
-----------------------Output Truncated-----------------------------------

In the metainfo, the first profile (associated with SEG1 in Step 3) references se-group-1, which was created in Avi Vantage.

There are two profiles:

  • ‘se_group_ref’: ‘/api/serviceenginegroup?name=se-group-1’
  • ‘se_group_ref’: ‘/api/serviceenginegroup?name=se-group-2’

where se-group-1 and se-group-2 already exist in Avi Vantage.

Step 3: Associating the flavor profile with the flavor.
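
For example (a sketch; neutron flavor-associate takes the flavor and the flavor profile, here using the IDs from the listings above):

     $> neutron flavor-associate SEG1 4902407e-951a-47f9-be9e-92bf1cfdb7cd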

As shown in the output of neutron flavor-show SEG1 below:

  • the associated service_profile is 4902407e-951a-47f9-be9e-92bf1cfdb7cd, which maps to se-group-1.
  • the service_type is LOADBALANCERV2.

root@admin1-ocata:~# neutron flavor-show SEG1
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| enabled          | True                                 |
| id               | c84d0347-9e3d-4a4c-9d2c-e53425a46227 |
| name             | SEG1                                 |
| service_profiles | 4902407e-951a-47f9-be9e-92bf1cfdb7cd |
| service_type     | LOADBALANCERV2                       |
+------------------+--------------------------------------+
root@admin1-ocata:~# neutron flavor-profile-show 4902407e-951a-47f9-be9e-92bf1cfdb7cd
+-------------+-------------------------------------------------------------+
| Field       | Value                                                       |
+-------------+-------------------------------------------------------------+
| description |                                                             |
| driver      | avi_lbaasv2.avi_driver.AviDriver                            |
| enabled     | True                                                        |
| id          | 4902407e-951a-47f9-be9e-92bf1cfdb7cd                        |
| metainfo    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
+-------------+-------------------------------------------------------------+
root@admin1-ocata:~#

The SE Group is specified in the metainfo.

Note: In the metainfo, ‘se_group_ref’ can be replaced by ‘vrf_ref’ followed by the relevant path to use VRF context instead of SE Group.
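
For example, a flavor profile that selects a VRF context instead of an SE group could carry metainfo such as the following (a sketch; /api/vrfcontext is assumed to be the corresponding Avi API path, and my-vrf is an illustrative VRF context name):

     {'vrf_ref': '/api/vrfcontext?name=my-vrf'}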

When a neutron LBaaS load balancer is created with the flavor SEG1,

  1. Neutron looks up the flavor SEG1 and identifies the service profile associated with it.
  2. Neutron calls the Avi driver and passes the service profile’s information to it.
  3. The Avi driver then creates the virtual service for the load balancer in the SE group referenced by the profile, se-group-1.

Flavors can be created by admins only. However, any user can use a flavor while creating a load balancer, as shown below.
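
For example (a sketch; the --flavor option on lbaas-loadbalancer-create is an assumption here and its availability may vary by release; lb-prod and private-subnet are illustrative names):

     $> neutron lbaas-loadbalancer-create --name lb-prod --flavor SEG1 private-subnet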

Cleaning up OpenStack Resources While Using the Avi LBaaS v2 Driver

If you delete LBaaS v2 resources using the Avi LBaaS v2 driver, the Avi Controller leaves some of the network ports in the OpenStack network for some time rather than deleting them immediately. Hence, you cannot delete the OpenStack network immediately after deleting the LBaaS v2 resources.

Root Cause

The Avi Controller uses a garbage-collection mechanism to delete the network ports from the OpenStack environment. Garbage collection is an independent background process; Avi object-delete APIs do not wait for garbage collection to kick in or finish. When virtual services and pools are deleted from Avi Vantage, garbage collection usually kicks in immediately, checks for any network ports to be deleted, and issues port-delete requests.

The port-delete request has the following two parts:

  1. Detach the port from Service Engine VM.
  2. Delete the port once it is successfully detached.

Once the ports are deleted from OpenStack, you can delete the network resource.

However, this entire process of garbage collection can get delayed due to one of the following reasons:

  • Garbage collection can be delayed if other operations are happening on the SE VM, for instance, if a virtual service from another tenant is being placed on the same SE VM and a network-port attach is already pending on it.

  • Garbage collection can also be delayed if there is load on the system.

  • The deletion of a port also depends on when Nova detaches the port from the SE VM. If there is heavy load on the OpenStack system, the messages to Nova and Neutron might be delayed, in turn delaying the detachment of the port from the VM.

  • The virtual service goes through a state change when the VS and its default pool are deleted at once. This state change adds delay to the garbage-collection process. This issue is resolved starting with Avi Vantage version 18.2.6.

Solution

After deleting the OpenStack LBaaS v2 resources, retry the deletion of the OpenStack network if it fails with ‘One or more ports have an IP allocation from this subnet’, until garbage collection has removed the ports, as sketched below.
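
A minimal retry sketch (assuming the neutron CLI and an illustrative network name lb-net):

     $> # Retry the deletion until garbage collection has released the remaining ports.
     $> until neutron net-delete lb-net; do sleep 10; done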