LBaaS V2 Driver

Overview

Load Balancer as a Service (LBaaS) helps meet the agility and application traffic demands of organizations implementing private cloud infrastructure. Using an ‘as-a-service’ model, LBaaS gives application teams a simple way to spin up load balancers.

About Avi Vantage

The Avi Vantage platform provides enterprise-grade distributed ADC solutions for on-premises as well as public-cloud infrastructure. Avi Vantage also provides built-in analytics to diagnose and improve the end-user application experience, while simplifying operations for network administrators.

Avi Vantage is a complete software solution which runs on commodity x86 servers or as a virtual machine and is entirely accessible via REST API calls.

Avi Vantage OpenStack LBaaS Solution

Avi Vantage integrates natively with OpenStack components to deliver a fully orchestrated, self-service driven OpenStack LBaaS architecture. Avi Vantage provides on-demand autoscaling in response to changing performance and load requirements, automated provisioning of L4-L7 services, and web application security resulting in intelligent LBaaS for OpenStack implementations.

Note: LBaaS v1 has been deprecated and is no longer supported. Upgrading to LBaaS v2 is recommended.

OpenStack LBaaS v2

OpenStack LBaaS v1 is simple and lacks critical features such as the ability to listen on multiple ports for a single IP, SSL support, and more.

The Avi LBaaS v2 driver resides in its own repository, which also hosts the LBaaS v2 packages for download.

LBaaS v2 allows configuration of multiple listener ports on a single load balancer IP address. A listener is mapped to, and listens for requests made to, a given network port. Multiple listeners on a load balancer allow requests to come in via multiple network ports.

Figure: LBaaS v1 and v2

This article explains the Avi driver for LBaaS v2.0 in detail.

Mapping LBaaS Objects to Avi Objects

Highlights of Avi Object Model

  • Virtual service is the central object for describing a load balancer instance in Avi Vantage.
  • Each virtual service has an IP address, and one or more service ports for listening.
  • Each virtual service has a default pool.
  • A pool consists of one or more servers and the traffic is load balanced across those servers.
  • A virtual service can have one or more TLS certificates.
  • Pools cannot be shared across virtual services.
  • Multiple virtual services can listen on the same IP address, but cannot have overlapping service ports.

Owing to the mismatch between the Avi model and the LBaaS v2 model, the LBaaS load balancer cannot be mapped directly to a single virtual service object.

LBaaS Load Balancer

When the LBaaS load balancer is created, the Avi driver does not create any objects on Avi Vantage.

Converting LBaaS V2 Objects to Avi Vantage

  1. For each LBaaS v2 listener, a virtual service is created on Avi Vantage. This means a single load balancer in LBaaS v2 may end up with multiple virtual services on Avi Vantage (see the illustration after this list).
  2. The OpenStack pool is replicated on Avi Vantage for each listener that uses it.
  3. For each Server Name Indication (SNI) container on a listener, a child virtual service is created in Avi Vantage. In addition, a new Avi pool is created for each LBaaS v2 pool, for each SNI associated with that listener.
  4. In health monitors, HTTP codes cannot be mapped directly from LBaaS v2 to Avi Vantage. Avi Vantage supports only code ranges such as 2XX, 3XX, and 4XX, whereas the LBaaS APIs allow specific codes to be checked against.
  5. In the case of certificates managed in Barbican, the Avi driver obtains the certificate from Barbican and pushes it to the Avi Controller.
  6. After the Avi driver creates virtual services and pools on Avi Vantage, based on LBaaS objects, users can still make changes to those objects directly via Avi APIs. This could be desirable in cases where LBaaS APIs lack certain features that are available via direct Avi APIs. For example, users may wish to monitor a different port than the port configured in a pool object to determine if the backend members of that pool are alive. Such a configuration can be done via Avi APIs after the relevant objects are created.
  7. For virtual service/pool/health monitor fields that are relevant from the LBaaS API’s perspective, on any LBaaS object update, the Avi driver performs a GET API operation to obtain the current object, modifies it based on the configuration of the corresponding LBaaS object, and then performs a PUT operation. Thus, any changes made to the Avi object using Avi APIs for fields that are exposed via LBaaS APIs will be lost on the next LBaaS update.
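
As an illustration of points 1 through 3, consider an LBaaS v2 load balancer web-lb with two listeners that share one pool. The names and VIP are illustrative; the naming convention of <loadbalancer>:<listener> for virtual services and <pool>-<listener-uuid> for pools matches the CLI output shown later in this article:

  loadbalancer web-lb (VIP x.x.x.x)   ->  no Avi object created at this point
  listener web-80 (port 80)           ->  virtual service web-lb:web-80 on x.x.x.x:80
  listener web-443 (port 443)         ->  virtual service web-lb:web-443 on x.x.x.x:443
  pool web-pool (shared)              ->  pool web-pool-<web-80-uuid>
                                          pool web-pool-<web-443-uuid>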

Installing Avi Driver for LBaaS v2

The Avi driver is available in the following distribution formats:

  • pip wheel: $> pip install 'WHEEL'
  • DEB: $> dpkg -i 'DEB'
  • Red Hat Package Manager (RPM): $> rpm -i 'RPM'


Install the driver on machines running neutron-server (Neutron API server, not neutron network node) using one of the following:

  • To install the Avi LBaaSv2 driver pip package, use $ pip install avi_lbaasv2-18.2b20190501-py2-none-any.whl.

  • The RPM package is used for RHEL or openSUSE platforms. To install, use rpm --install avi-lbaasv2-18.2b20190501-1.noarch.rpm.

  • The DEB package is used for Ubuntu or Debian-based platforms. To install, use dpkg -i python-avi-lbaasv2_18.2b20190501-1_all.deb.

Configuring (neutron.conf)

  1. Enable the use of the Avi driver for LBaaS API calls by adding the following to your neutron.conf. Under the [service_providers] section, add Avi Vantage as shown below.

    
     [service_providers]
     service_provider = LOADBALANCERV2:avi_adc:avi_lbaasv2.avi_driver.AviDriver:default
     

    This makes Avi the default LBaaS v2 driver. If another driver is to be the default, omit “:default”.

    Here, avi_adc is the name of the service provider; it can be changed by modifying the service_provider entry.
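
    For example, to register the Avi provider without making it the default (so that a different driver remains the default), the entry simply omits the suffix. This variant is shown for illustration only:

     [service_providers]
     service_provider = LOADBALANCERV2:avi_adc:avi_lbaasv2.avi_driver.AviDriver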

  2. Add a section for the Avi Controller configuration. To access the Avi Controller, provide the IP of your Avi Controller and admin credentials.

    
     [avi_adc]
     address=AVI-CONTROLLER-IP
     user=admin
     password=PASSWORD
     
  3. The field use_placement_network_for_pool is set to false by default. If an OpenStack deployment has tenants with the same subnet ranges (Classless Inter-Domain Routing (CIDR)), for example 10.10.20.0/24, the first matching subnet is chosen by default. To avoid conflicts and to choose the correct IP, set use_placement_network_for_pool = True, as shown below.
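
    For example, the flag can be added to the [avi_adc] section along with the Controller access settings (values are placeholders):

     [avi_adc]
     address=AVI-CONTROLLER-IP
     user=admin
     password=PASSWORD
     use_placement_network_for_pool=True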

  4. Restart neutron-server after updating the configuration files. For example, $> service neutron-server restart.
    Note: This command will vary depending on your installation.

SE Group and VRF Context

A user creates a load balancer and other objects such as listeners, pools, pool members, and certificates using the LBaaS v2 API. The Avi LBaaS driver then makes an API call to the Avi Controller to create the required object.

Avi Vantage provides the capability of creating Service Engine groups (SE groups) in a cloud. Multiple SE groups with varying properties can exist in a cloud. For example, SEs in one group (say, SE Group 1) can be configured with more CPU and RAM and used for production, while SEs in another group (SE Group 2) are used for testing.

Similarly, in Avi Vantage, Virtual Routing Forwarding (VRF) contexts can be created. VRF is a method of isolating traffic within a system.

A flavor is a named resource that is used to select a provider driver, along with metadata, at resource creation time. The flavor framework is implemented as discussed below.

Implementing Flavor Framework

As a prerequisite, the administrator must create the SE groups or VRF contexts on the Controller before creating the flavors.

Let us use the following sample output to understand the implementation of the flavor framework:


root@admin1-ocata:~# neutron flavor-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------+---------+
| id                                   | name | service_type   | enabled |
+--------------------------------------+------+----------------+---------+
| 351dc844-c6d5-43d1-a1e8-937d31772f45 | SEG2 | LOADBALANCERV2 | True    |
| c84d0347-9e3d-4a4c-9d2c-e53425a46227 | SEG1 | LOADBALANCERV2 | True    |
+--------------------------------------+------+----------------+---------+
root@admin1-ocata:~#
root@admin1-ocata:~#
root@admin1-ocata:~# vi /etc/neutron/neutron.conf
root@admin1-ocata:~# neutron flavor-profile-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+-------------+---------+--------------------------------------------------------------+
| id                                   | description | enabled | metainfo                                                     |
+--------------------------------------+-------------+---------+--------------------------------------------------------------+
| 4902407e-951a-47f9-be9e-92bf1cfdb7cd |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
| f755528c-0d08-45fb-a88b-54748586990b |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-2'} |
+--------------------------------------+-------------+---------+--------------------------------------------------------------+
root@admin1-ocata:~# neutron flavor-show SEG1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| enabled          | True                                 |
| id               | c84d0347-9e3d-4a4c-9d2c-e53425a46227 |
| name             | SEG1                                 |
| service_profiles | 4902407e-951a-47f9-be9e-92bf1cfdb7cd |
| service_type     | LOADBALANCERV2                       |
+------------------+--------------------------------------+
root@admin1-ocata:~# neutron flavor-profile-show 4902407e-951a-47f9-be9e-92bf1cfdb7cd
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+-------------+-------------------------------------------------------------+
| Field       | Value                                                       |
+-------------+-------------------------------------------------------------+
| description |                                                             |
| driver      | avi_lbaasv2.avi_driver.AviDriver                            |
| enabled     | True                                                        |
| id          | 4902407e-951a-47f9-be9e-92bf1cfdb7cd                        |
| metainfo    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
+-------------+-------------------------------------------------------------+
root@admin1-ocata:~#


There are three important steps to be executed to use flavors during load balancer creation.

Step 1: Creating flavors
Flavors are created by the administrator. From the sample output, note that SEG1 and SEG2 are flavors of service type LOADBALANCERV2.


+--------------------------------------+------+----------------+---------+
| id                                   | name | service_type   | enabled |
+--------------------------------------+------+----------------+---------+
| 351dc844-c6d5-43d1-a1e8-937d31772f45 | SEG2 | LOADBALANCERV2 | True    |
| c84d0347-9e3d-4a4c-9d2c-e53425a46227 | SEG1 | LOADBALANCERV2 | True    |
+--------------------------------------+------+----------------+---------+
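
For reference, a flavor of this form can be created by the administrator with a command such as the following (illustrative; it mirrors the flavor-create syntax shown later in this article):

root@admin1-ocata:~# neutron flavor-create SEG1 LOADBALANCERV2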

Step 2: Creating flavor profiles
A flavor profile is created in which the metainfo is specified. Metainfo provides additional information to the driver implementing a load balancer object, for example, the SE group to be used.


+--------------------------------------+-------------+---------+--------------------------------------------------------------+
| id                                   | description | enabled | metainfo                                                     |
+--------------------------------------+-------------+---------+--------------------------------------------------------------+
| 4902407e-951a-47f9-be9e-92bf1cfdb7cd |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
| f755528c-0d08-45fb-a88b-54748586990b |             | True    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-2'} |
+--------------------------------------+-------------+---------+--------------------------------------------------------------+

In the metainfo, se_group_ref points to SE Group 1 (se-group-1), which was created in Avi Vantage.

There are two profiles:

  • ‘se_group_ref’: ‘/api/serviceenginegroup?name=se-group-1’
  • ‘se_group_ref’: ‘/api/serviceenginegroup?name=se-group-2’

where se-group-1 and se-group-2 already exist in Avi Vantage.
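
A flavor profile of this form can be created with a command such as the following (shown for illustration; it uses the same flavor-profile-create syntax demonstrated later in this article):

root@admin1-ocata:~# neutron flavor-profile-create --driver 'avi_lbaasv2.avi_driver.AviDriver' --metainfo "{'se_group_ref': '/api/serviceenginegroup?name=se-group-1'}"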

Step 3: Associating the flavor profile with the flavor
As shown in the sample output of neutron flavor-show SEG1,

  • the associated service_profile is 4902407e-951a-47f9-be9e-92bf1cfdb7cd, which references se-group-1, and
  • the service_type is LOADBALANCERV2.

+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| enabled          | True                                 |
| id               | c84d0347-9e3d-4a4c-9d2c-e53425a46227 |
| name             | SEG1                                 |
| service_profiles | 4902407e-951a-47f9-be9e-92bf1cfdb7cd |
| service_type     | LOADBALANCERV2                       |
+------------------+--------------------------------------+
root@admin1-ocata:~# neutron flavor-profile-show 4902407e-951a-47f9-be9e-92bf1cfdb7cd
+-------------+-------------------------------------------------------------+
| Field       | Value                                                       |
+-------------+-------------------------------------------------------------+
| description |                                                             |
| driver      | avi_lbaasv2.avi_driver.AviDriver                            |
| enabled     | True                                                        |
| id          | 4902407e-951a-47f9-be9e-92bf1cfdb7cd                        |
| metainfo    | {'se_group_ref': '/api/serviceenginegroup?name=se-group-1'} |
+-------------+-------------------------------------------------------------+
root@admin1-ocata:~#

The SE Group is specified in the metainfo.
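
The flavor and the flavor profile are associated with a command of the following form (using the flavor name and profile ID from the sample output above):

neutron flavor-associate SEG1 4902407e-951a-47f9-be9e-92bf1cfdb7cd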

Note: In the metainfo, ‘se_group_ref’ can be replaced by ‘vrf_ref’ followed by the relevant path to use VRF context instead of SE Group.

When a neutron LBaaS load balancer is created with the flavor SEG1,

  1. Neutron looks up the flavor SEG1 and identifies the service profile associated with it.
  2. The information from the service profile is passed to the Avi driver when the driver is called.
  3. The Avi driver then creates the virtual service for the load balancer in the referenced SE group, se-group-1.

Flavors can be created by administrators only. However, any user can use the flavor while creating a load balancer.
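
For example, a load balancer can be created with the flavor as follows (the load balancer and subnet names are placeholders):

neutron lbaas-loadbalancer-create --flavor SEG1 --name prod-lb SUBNET-NAME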

Multiple Networks with the Same CIDR

Let us consider a deployment scenario in which two networks within a tenant have overlapping address ranges. If a VM is created in each of these networks, there is a chance that the two VMs will have the same IP address.

When creating a virtual service, a virtual IP (VIP) is created. Avi Vantage creates a Service Engine (SE) and attaches a port to the SE to reach the backend servers. In a single-arm mode, the same port is used to host the virtual IP and reach the backend servers. Avi Vantage creates a default VRF context per tenant and configures the VRF context on the SE hosting the virtual service for that tenant. Having a single VRF context for the tenant is not sufficient to handle multiple networks with overlapping addresses. This will result in a conflict.

To address this issue, the feature vrf_context_per_subnet has been introduced starting with Avi Vantage release 18.2.2. This feature is set to false by default. On the OpenStack Controller, set vrf_context_per_subnet = True and restart the neutron-server. The LBaaS v2 driver then creates a new VRF context per subnet in the tenant.

Notes:

  • vrf_context_per_subnet works only in a single-arm deployment.
  • This feature is applicable only for deployments that use VXLAN or GRE as ML2 type drivers.

With the per-tenant VRF context and the per-subnet VRF context, conflicts arising from identical CIDRs across tenants and across subnets are resolved.

Prerequisites

  • The feature vrf_context_per_subnet works only when the Controller and the tenants are configured in the provider mode. By default, this mode (shared SE deployment) is enabled. To ensure that the Controller is deployed with SEs managed within the provider context,
    1. Navigate to Administration > Settings > Tenant Settings.
    2. Click the Edit icon.
    3. Select the option Service Engines are managed within the provider context, shared across tenants to enable it.
    4. Click Save.

    You can also enable this feature via the CLI by setting the flag se_in_provider_context to True as shown below.

    
      [admin:-controller]: > show systemconfiguration
      [admin:10-140-6-31]: > configure systemconfiguration
      <--OUTPUT TRUNCATED-->
      [admin:10-140-6-31]: systemconfiguration> global_tenant_config
      [admin:10-140-6-31]: systemconfiguration:global_tenant_config> se_in_provider_context
      Overwriting the previously entered value for se_in_provider_context
      [admin:10-140-6-31]: systemconfiguration:global_tenant_config> save
      [admin:10-140-6-31]: systemconfiguration> save
      +----------------------------------+------------------------------------+
      | Field                            | Value                              |
      +----------------------------------+------------------------------------+
      | uuid                             | default                            |
      | dns_configuration                |                                    |
      |   search_domain                  |                                    |
      | ntp_configuration                |                                    |
      |   ntp_servers[1]                 |                                    |
      |     server                       | 0.us.pool.ntp.org                  |
      |   ntp_servers[2]                 |                                    |
      |     server                       | 1.us.pool.ntp.org                  |
      |   ntp_servers[3]                 |                                    |
      |     server                       | 2.us.pool.ntp.org                  |
      |   ntp_servers[4]                 |                                    |
      |     server                       | 3.us.pool.ntp.org                  |
      | portal_configuration             |                                    |
      |   enable_https                   | True                               |
      |   redirect_to_https              | True                               |
      |   enable_http                    | True                               |
      |   sslkeyandcertificate_refs[1]   | System-Default-Portal-Cert         |
      |   sslkeyandcertificate_refs[2]   | System-Default-Portal-Cert-EC256   |
      |   use_uuid_from_input            | False                              |
      |   sslprofile_ref                 | System-Standard-Portal             |
      |   enable_clickjacking_protection | True                               |
      |   allow_basic_authentication     | True                               |
      |   password_strength_check        | False                              |
      |   disable_remote_cli_shell       | False                              |
      | global_tenant_config             |                                    |
      |   tenant_vrf                     | False                              |
      |   se_in_provider_context         | True                               |
      |   tenant_access_to_provider_se   | True                               |
      | email_configuration              |                                    |
      |   smtp_type                      | SMTP_LOCAL_HOST                    |
      |   from_email                     | admin@avicontroller.net            |
      |   mail_server_name               | localhost                          |
      |   mail_server_port               | 25                                 |
      |   disable_tls                    | False                              |
      | docker_mode                      | False                              |
      | ssh_ciphers[1]                   | aes128-ctr                         |
      | ssh_ciphers[2]                   | aes256-ctr                         |
      | ssh_ciphers[3]                   | arcfour256                         |
      | ssh_ciphers[4]                   | arcfour128                         |
      | ssh_hmacs[1]                     | hmac-sha2-512-etm@openssh.com      |
      | ssh_hmacs[2]                     | hmac-sha2-256-etm@openssh.com      |
      | ssh_hmacs[3]                     | umac-128-etm@openssh.com           |
      | ssh_hmacs[4]                     | hmac-sha2-512                      |
      | default_license_tier             | ENTERPRISE_18                      |
      | secure_channel_configuration     |                                    |
      |   sslkeyandcertificate_refs[1]   | System-Default-Secure-Channel-Cert |
      +----------------------------------+------------------------------------+
      [admin:10-140-6-31]: >
      

    As you can see, in the Controller, this is a global setting.

    In the tenant, however, the configuration is tenant-specific. se_in_provider_context has to be set to True for VRF context per subnet to work.

    
      [admin:-controller]: > show tenant demo
      +--------------------------------+------------------------------------------+
      | Field                          | Value                                    |
      +--------------------------------+------------------------------------------+
      | uuid                           | tenant-2add06-oe44-4684-beb9-107aaoe2dff |
      | name                           | demo                                     |
      | local                          | False                                    |
      | config settings                |                                          |
      |   tenant_vrf                   | True                                     |
      |   se_in_provider_context       | True                                     |
      | tenant_access_to_provider_se   | True                                     |
      +--------------------------------+------------------------------------------+
      
  • vrf_context_per_subnet is configured through the LBaaS v2 driver. If you are using Avi Vantage as the provider (avi_adc), upgrade the LBaaS v2 driver in your setup and enable the new flag vrf_context_per_subnet in the neutron configuration for the avi_adc provider. Once this feature is configured, new load balancers will be deployed in the new VRF contexts; existing load balancers will remain as is.

Using vrf_context_per_subnet as a Global Setting

Creating an Exclusive VRF Context per Subnet


[avi_adc]
address=10.140.6.31
user=admin
password="avi123"
cloud=Default-Cloud
vrf_context_per_subnet=True

Displaying Existing VRF Contexts in Avi Vantage (Demo Tenant)


[demo:10-140-6-31]: > show vrfcontext
+--------------+-------------------------------------------------+
| Name         | UUID                                            |
+--------------+-------------------------------------------------+
| demo-default | vrfcontext-6907ad2f-7160-4a1c-8815-ec3dbc409556 |
+--------------+-------------------------------------------------+

Creating a Load Balancer in OpenStack (Demo Tenant)


root@admin1-ocata:~# neutron lbaas-loadbalancer-create --name  demo-lb data4snw
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | e29a4da7-f9cc-4c73-91ca-a02bca3d5a89 |
| listeners           |                                      |
| name                | demo-lb                              |
| operating_status    | ONLINE                               |
| pools               |                                      |
| provider            | avi_adc                              |
| provisioning_status | ACTIVE                               |
| tenant_id           | b2c6b0cb6ef542568051ae9c65170da6     |
| vip_address         | 10.0.3.3                             |
| vip_port_id         | 795e0bce-750e-4543-b41c-616fa3dcab14 |
| vip_subnet_id       | a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 |
+---------------------+--------------------------------------+

Displaying New VRF Context created in Avi Vantage (Demo Tenant)


[demo:10-140-6-31]: > show vrfcontext
+---------------------------------------------+-------------------------------------------------+
| Name                                        | UUID                                            |
+---------------------------------------------+-------------------------------------------------+
| demo-default                                | vrfcontext-6907ad2f-7160-4a1c-8815-ec3dbc409556 |
| subnet-a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 | vrfcontext-a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 |
+---------------------------------------------+-------------------------------------------------+

Creating a Listener, Pool, and Pool Members

  • A new listener
    
      neutron lbaas-listener-create --loadbalancer demo-lb --protocol HTTP --protocol-port 80 --name demolb-listener
      +---------------------------+------------------------------------------------+
      | Field                     | Value                                          |
      +---------------------------+------------------------------------------------+
      | admin_state_up            | True                                           |
      | connection_limit          | -1                                             |
      | default_pool_id           |                                                |
      | default_tls_container_ref |                                                |
      | description               |                                                |
      | id                        | 42aa799f-2cfe-48b2-9fa9-198936b1906f           |
      | loadbalancers             | {"id": "e29a4da7-f9cc-4c73-91ca-a02bca3d5a89"} |
      | name                      | demolb-listener                                |
      | protocol                  | HTTP                                           |
      | protocol_port             | 80                                             |
      | sni_container_refs        |                                                |
      | tenant_id                 | b2c6b0cb6ef542568051ae9c65170da6               |
      +---------------------------+------------------------------------------------+
      
  • A new pool
    
      root@admin1-ocata:~# neutron lbaas-pool-create --listener demolb-listener --protocol HTTP --name demo-pool --lb-algorithm ROUND_ROBIN
      +---------------------+------------------------------------------------+
      | Field               | Value                                          |
      +---------------------+------------------------------------------------+
      | admin_state_up      | True                                           |
      | description         |                                                |
      | healthmonitor_id    |                                                |
      | id                  | 27533aa8-0fde-4016-8129-09ac99e8215c           |
      | lb_algorithm        | ROUND_ROBIN                                    |
      | listeners           | {"id": "42aa799f-2cfe-48b2-9fa9-198936b1906f"} |
      | loadbalancers       | {"id": "e29a4da7-f9cc-4c73-91ca-a02bca3d5a89"} |
      | members             |                                                |
      | name                | demo-pool                                      |
      | protocol            | HTTP                                           |
      | session_persistence |                                                |
      | tenant_id           | b2c6b0cb6ef542568051ae9c65170da6               |
      +---------------------+------------------------------------------------+
      
  • A new member
    
      root@admin1-ocata:~# neutron lbaas-member-create --subnet data4snw --address 10.0.3.10 --protocol-port 8080 demo-pool
      +----------------+--------------------------------------+
      | Field          | Value                                |
      +----------------+--------------------------------------+
      | address        | 10.0.3.10                            |
      | admin_state_up | True                                 |
      | id             | 494b7351-054e-4059-ba7b-bd653ecb657c |
      | name           |                                      |
      | protocol_port  | 8080                                 |
      | subnet_id      | a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 |
      | tenant_id      | b2c6b0cb6ef542568051ae9c65170da6     |
      | weight         | 1                                    |
      +----------------+--------------------------------------+
      

The VRF context for the corresponding virtual service and pool in Avi Vantage is shown below.

       
[demo:10-140-6-31]: > show virtualservice demo-lb:demolb-listener | grep vrf
| vrf_context_ref   | subnet-a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 |
[demo:10-140-6-31]: >
[demo:10-140-6-31]: > show pool demo-pool-42aa799f-2cfe-48b2-9fa9-198936b1906f | grep vrf
| vrf_ref           | subnet-a051f4a6-eddd-41c0-bb0f-adb4e2b5f325 |
[demo:10-140-6-31]: >

For the virtual service created in one-arm mode, the ign_pool_net_reach flag is set as shown below.

 
[demo:10-140-6-31]: > show virtualservice demo-lb:demolb-listener | grep ign_poo
| ign_pool_net_reach | True                                       |
[demo:10-140-6-31]: >

Using vrf_context_per_subnet as a Tenant Specific Configuration

Setting vrf_context_per_subnet to True in the LBaaS driver configuration creates an exclusive VRF context per subnet. Once enabled, this is a global setting: whenever a load balancer is created in any tenant, a dedicated VRF context is created for the corresponding subnet in that tenant.

If you want dedicated VRF contexts only for specific tenants that have overlapping IP addresses, you can create a flavor instead.

In the metainfo, vrf_context_per_subnet is the key and True is the value as shown below.

Creating a New Flavor


root@admin-newton:~# neutron flavor-create DEDICATED_VRF_CONTEXT LOADBALANCERV2
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| enabled          | True                                 |
| id               | 6e80738f-56cb-4c30-a891-2d5db882a1f5 |
| name             | DEDICATED_VRF_CONTEXT                |
| service_profiles |                                      |
| service_type     | LOADBALANCERV2                       |
+------------------+--------------------------------------+

Creating a New Service Profile


root@admin-newton:~# neutron flavor-profile-create --driver 'avi_lbaasv2.avi_driver.AviDriver' --metainfo "{'vrf_context_per_subnet': 'True'}"
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description |                                      |
| driver      | avi_lbaasv2.avi_driver.AviDriver     |
| enabled     | True                                 |
| id          | fb457cf8-6422-46bf-9135-605e180723ae |
| metainfo    | {'vrf_context_per_subnet': 'True'}   |
+-------------+--------------------------------------+

Associating Flavor with a Service Profile


neutron flavor-associate DEDICATED_VRF_CONTEXT fb457cf8-6422-46bf-9135-605e180723ae

The flavor that is created can be used during load balancer creation as shown below.


neutron lbaas-loadbalancer-create --flavor DEDICATED_VRF_CONTEXT --name mylb vip4snw

Cleaning up OpenStack Resources while using Avi LBaaSv2 Driver

If you delete LBaaSv2 resources using the Avi LBaaSv2 driver, the Avi Controller leaves some of the network ports in the OpenStack network for some time and does not delete them immediately. Hence, you cannot delete the OpenStack network immediately after deleting the LBaaSv2 resources.

Root Cause

The Avi Controller uses a garbage collection mechanism to delete the network ports from the OpenStack environment. Garbage collection is an independent background process; the Avi object delete APIs do not wait for garbage collection to kick in or to finish. When virtual services and pools are deleted from Avi Vantage, garbage collection usually kicks in immediately, checks for any network ports to be deleted, and issues port delete requests.

The port delete request has the following two parts:

  1. Detach the port from Service Engine VM.
  2. Delete the port once it is successfully detached.

Once the ports are deleted from OpenStack, the network resource can be deleted.

However, this entire process of garbage collection can get delayed due to one of the following reasons:

  • Garbage collection can be delayed if there are other operations in progress on the SE VM, for instance, if a virtual service from another tenant is being placed on the SE VM and a network port attach operation is already pending on it.

  • The garbage collection can also get delayed if there is load on the system.

  • Port deletion also depends on when Nova detaches the port from the SE VM. If the OpenStack system is heavily loaded, the messages to Nova and Neutron might get delayed, which in turn delays detaching the port from the VM.

  • The virtual service goes through a state change in which the virtual service and its default pool are deleted at once, and this state change adds delay to the garbage collection process. This issue is resolved starting with Avi Vantage version 18.2.6.

Solution

After deleting the OpenStack LBaaSv2 resources, retry the deletion of the OpenStack network if it fails with the error ‘One or more ports have an IP allocation from this subnet’.
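
For example, a simple retry loop such as the following can be used until the garbage-collected ports are released (the network name and retry interval are illustrative):

# Retry deleting the network until the garbage collector has removed the leftover ports.
# 'lb-network' is an illustrative network name; adjust the interval to suit your environment.
until neutron net-delete lb-network; do
    echo "Ports still allocated; retrying in 30 seconds..."
    sleep 30
done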