Virtual Infrastructure Design of the Avi Vantage Platform

Overview

The virtual infrastructure design includes defining the configuration requirements of the underlying vCenter Server environment implemented by VMware Cloud Foundation.

When implementing the Avi Vantage Platform, a number of specific configuration requirements exist within the virtual infrastructure. These include the creation of a vCenter account and of a dedicated vSphere Standard Switch on each ESXi host.
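As an illustration, the dedicated vSphere Standard Switch could be created on every host from a short pyVmomi script similar to the sketch below. The vCenter address, credentials, switch name, and port count are assumptions for the example and should be replaced with values from your environment.

```python
# Sketch: create a dedicated vSphere Standard Switch on every ESXi host managed
# by vCenter (hypothetical names/credentials; adjust for your environment).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in host_view.view:
    spec = vim.host.VirtualSwitch.Specification(numPorts=128)
    # Creates the standard switch "vSwitch-Avi" on this host
    # (raises an error if a switch with that name already exists).
    host.configManager.networkSystem.AddVirtualSwitch(
        vswitchName="vSwitch-Avi", spec=spec)

host_view.Destroy()
Disconnect(si)
```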

The following table summarizes design decisions for the virtual infrastructure to support the Avi Vantage platform:

Decision ID: AVI-VI-VC-001
Design Decision: Create one Content Library on the Management Domain to store the Avi Controller OVA.
Design Justification: Deploying the OVA from the Content Library is operationally simple.
Design Implication: Without the Content Library, the Avi Controller OVA would need to be copied to vCenter from the administrator's workstation every time a new Avi Controller is created.
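For reference, a local Content Library can also be created programmatically. The sketch below uses the vSphere Automation REST API with Python's requests library; the endpoint paths, request body fields, library name, and datastore ID are assumptions and may differ by vSphere version, so verify them against your vCenter's API documentation.

```python
# Sketch: create a local Content Library for the Avi Controller OVA via the
# vSphere Automation REST API (paths/fields assumed; verify for your version).
import requests

VCENTER = "https://vcenter.example.local"
session = requests.Session()
session.verify = False                      # lab only

# Obtain an API session token.
token = session.post(f"{VCENTER}/api/session",
                     auth=("administrator@vsphere.local", "***")).json()
session.headers["vmware-api-session-id"] = token

library_spec = {
    "name": "avi-controller-library",       # hypothetical library name
    "type": "LOCAL",
    "storage_backings": [{
        "type": "DATASTORE",
        "datastore_id": "datastore-123",    # placeholder managed object ID
    }],
}
resp = session.post(f"{VCENTER}/api/content/local-library", json=library_spec)
resp.raise_for_status()
print("Created library:", resp.json())      # returns the new library ID
```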

NSX-T Data Center Design for the Avi Vantage Platform

NSX-T Data Center configurations are required in the following areas for applications to work as expected:

  • Avi Service Engine scale-out
  • Application connectivity to external clients
  • Configurations required if DFW is enabled
    • Avi control plane / data plane communication
    • Backend load-balanced pool communication

Avi Service Engine Scale-Out

When using L2 scale-out mode, the primary Avi Service Engine redirects traffic to the secondary Avi Service Engines. This results in asymmetric traffic, which the Distributed Firewall (DFW) can block because of its stateful nature. To ensure that traffic is not dropped when a virtual service scales out, add the Avi Service Engine interfaces connected to the VIP/data segment to the DFW exclusion list.

To do this, create an NSGroup in NSX-T and add the VIP/data segment as a member, then add this NSGroup to the DFW exclusion list. With this configuration, when a new Avi Service Engine is deployed, its VIP/data interface is dynamically added to the exclusion list.

If the Avi Service Engines are connected to the backend server segment, adding the segment to the exclusion list is not an option because that would also place all backend servers in the list. Instead, add the individual Avi Service Engine VMs as members of the NSGroup. This is the recommended approach (see the sketch below).
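A minimal sketch of this configuration through the NSX-T Policy API is shown below, assuming Python with the requests library. The group name, segment path, and manager address are placeholders, and the API paths may vary slightly between NSX-T versions.

```python
# Sketch: create an NSX-T policy group for the Avi SE data interfaces and add
# it to the DFW exclusion list (placeholder names/paths; verify against the
# Policy API reference for your NSX-T version).
import requests

NSX = "https://nsx-mgr.example.local"
auth = ("admin", "***")

# 1. Group whose membership is the Avi VIP/data segment, so any SE interface
#    attached to that segment is picked up dynamically. If the SEs sit on the
#    backend server segment, use an ExternalIDExpression listing the SE VM
#    external IDs instead of the segment path.
group = {
    "display_name": "avi-se-data-exclusion",
    "expression": [{
        "resource_type": "PathExpression",
        "paths": ["/infra/segments/avi-vip-data-segment"],   # placeholder segment
    }],
}
requests.patch(f"{NSX}/policy/api/v1/infra/domains/default/groups/avi-se-data-exclusion",
               json=group, auth=auth, verify=False).raise_for_status()

# 2. Add the group to the DFW exclusion list so SE data traffic bypasses the DFW.
#    Note: this overwrites the members list; merge with any existing entries first.
exclude_list = {
    "members": ["/infra/domains/default/groups/avi-se-data-exclusion"],
}
requests.patch(f"{NSX}/policy/api/v1/infra/settings/firewall/security/exclude-list",
               json=exclude_list, auth=auth, verify=False).raise_for_status()
```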

Decision ID: AVI-VI-SDN-001
Design Decision: Add the Avi Service Engine VMs to the DFW exclusion list.
Design Justification: The DFW drops packets when applications are horizontally scaled active/active across multiple Service Engines. Adding the Service Engines to the DFW exclusion list allows this traffic to traverse end-to-end.
Design Implication: Virtual services that are horizontally scaled would otherwise experience packet drops. An alternative workaround to the DFW exclusion list is to enable SE tunnel mode (under the Avi configuration) for the Avi Service Engines.

Application Connectivity to External Clients

If all applications require north-south connectivity or for simplicity of configuration, you can configure the following:

  • Tier 1 to advertise the entire VIP range to Tier 0.
  • Tier 0 to re-distribute all learned routes in that range to the external peer.

This way, whenever a new VIP is created, it is automatically advertised to the external peer.
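As an illustration, the Tier 1 advertisement of the whole VIP range could be applied through the NSX-T Policy API as in the following sketch. The gateway ID, VIP subnet, and advertisement types are assumptions for the example and depend on how the VIPs are realized on the Tier 1.

```python
# Sketch: advertise the entire Avi VIP range from a Tier-1 gateway via the
# NSX-T Policy API (placeholder IDs/subnets; advertisement types depend on how
# the VIPs are plumbed, e.g. static routes placed by the Avi Controller).
import requests

NSX = "https://nsx-mgr.example.local"
auth = ("admin", "***")

tier1_update = {
    "route_advertisement_types": [
        "TIER1_CONNECTED",
        "TIER1_STATIC_ROUTES",              # assumed: Avi VIPs realized as static routes
    ],
    "route_advertisement_rules": [{
        "name": "advertise-avi-vip-range",
        "subnets": ["10.20.30.0/24"],       # placeholder VIP range
        "prefix_operator": "GE",            # match the range and any longer prefixes within it
        "action": "PERMIT",
    }],
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/avi-tier1-gw",
               json=tier1_update, auth=auth, verify=False).raise_for_status()

# On the Tier-0 side, the matching routes must also be redistributed to the
# external peer (route re-distribution configuration on the Tier-0 locale services).
```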

If only a certain application requires north-south access so that external clients can reach it, you can configure the following (see the sketch after this list):

  • Tier 1 to advertise only the required application's VIP IP (/32) to Tier 0.
  • Tier 0 to re-distribute the VIP to the external peer.
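Under the same assumptions as the previous sketch, only the advertisement rule changes for the per-application case: a single /32 with an exact prefix match, for example:

```python
# Sketch: advertise only one application's VIP (/32) instead of the whole range.
rule = {
    "name": "advertise-app1-vip",
    "subnets": ["10.20.30.15/32"],   # placeholder VIP of the application
    "prefix_operator": "EQ",         # exact match on the /32
    "action": "PERMIT",
}
# Replace the route_advertisement_rules list in the previous sketch with [rule].
```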

Configurations Required if DFW Is Enabled

If the Distributed Firewall is enabled on NSX-T, the following DFW allow rules are necessary for the Avi Controller and the Avi Service Engines to function as expected:

Rule: Avi Controller UI access
Note: Required only if the Avi Controller is connected to an NSX-T managed segment.
Source: Any (can be changed to restrict UI/API access)
Destination: Avi Controller management IPs and the cluster IP (if configured)
Service: TCP (80, 443)
Action: Allow

Rule: Avi Controller cluster communication
Note: Required only if the Avi Controller is connected to an NSX-T managed segment.
Source: Avi Controller management IPs
Destination: Avi Controller management IPs
Service: TCP (22, 8443)
Action: Allow

Rule: Avi Service Engines to Avi Controller secure channel
Note: The Avi Service Engines initiate the TCP connection for the secure channel to the Avi Controllers.
Source: Avi Service Engine management IPs
Destination: Avi Controller management IPs
Service: TCP (22, 8443) and UDP (123)
Action: Allow

Rule: Avi Service Engines to backend servers
Note: Client-to-VIP traffic does not require a DFW rule because the VIP interface is in the exclusion list. Front-end security can be enforced for each VIP by using network security policies on the virtual service, configured on the Avi Controller.
Source: Avi Service Engine management IPs (recommended: create an NSGroup for the Avi Service Engines)
Destination: Backend server IPs (recommended: create an NSGroup for the backend servers)
Service: Any (can be restricted to the service ports used for load balancing)
Action: Allow
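If you manage these rules through the NSX-T Policy API, a security policy similar to the following sketch could carry them. The policy name, group paths, and inline service entries are placeholders, only the Service Engine-to-Controller rule is shown, and the API shape should be verified against your NSX-T version.

```python
# Sketch: one of the required DFW allow rules (Avi SE management IPs to Avi
# Controller management IPs on TCP 22/8443 and UDP 123) expressed as an NSX-T
# Policy API security policy. Group paths and names are placeholders.
import requests

NSX = "https://nsx-mgr.example.local"
auth = ("admin", "***")

policy = {
    "display_name": "avi-infrastructure-rules",
    "category": "Infrastructure",
    "rules": [{
        "display_name": "avi-se-to-controller-secure-channel",
        "source_groups": ["/infra/domains/default/groups/avi-se-mgmt"],
        "destination_groups": ["/infra/domains/default/groups/avi-controller-mgmt"],
        "service_entries": [
            {"resource_type": "L4PortSetServiceEntry", "display_name": "avi-secure-channel-tcp",
             "l4_protocol": "TCP", "destination_ports": ["22", "8443"]},
            {"resource_type": "L4PortSetServiceEntry", "display_name": "avi-ntp-udp",
             "l4_protocol": "UDP", "destination_ports": ["123"]},
        ],
        "scope": ["ANY"],
        "action": "ALLOW",
    }],
}
requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/security-policies/avi-infrastructure-rules",
    json=policy, auth=auth, verify=False).raise_for_status()
```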