Cisco ACI Network Policy Mode on Read/No Access VMware Cloud

Overview

In the network policy mode of Cisco Application Centric Infrastructure (ACI), Avi Vantage is deployed in no orchestrator mode or read access mode on VMware infrastructure. Avi Service Engines are configured as BGP Layer 3 outside networks (L3Outs) in the Cisco Application Policy Infrastructure Controller (APIC) to exchange virtual service routes. Leaf and spine switches in the ACI fabric learn these routes to forward virtual service traffic to the Avi Service Engines.

This design option is recommended when the Avi Controller does not have write access permission to integrate with vCenter, or when vCenter is not deployed in the network.

In this deployment mode, the Avi Controller does not spin up Service Engines. Instead, the Service Engines are deployed manually on the VMware cloud and then connect back to the Avi Controller. The Avi Service Engines are deployed in one-arm mode.
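As an illustration, the sketch below creates a No Orchestrator cloud on the Avi Controller through its REST API, which prepares the Controller to accept manually deployed Service Engines. This is a minimal sketch, assuming basic authentication is enabled on the Controller; the controller address, credentials, API version, and cloud name are placeholder values to substitute with your own.

    import requests

    # Placeholder lab values; replace with your Controller address and credentials.
    CONTROLLER = "https://10.0.0.10"

    session = requests.Session()
    session.auth = ("admin", "password")
    session.verify = False  # lab only; use a trusted certificate in production
    session.headers.update({"X-Avi-Version": "20.1.1"})  # match your Controller version

    # A No Orchestrator cloud (vtype CLOUD_NONE) tells the Controller that it
    # will not create Service Engines itself; they are deployed manually.
    cloud = {"name": "vmware-no-access", "vtype": "CLOUD_NONE"}
    resp = session.post(f"{CONTROLLER}/api/cloud", json=cloud)
    resp.raise_for_status()
    print(resp.json()["uuid"])

In read access mode, the cloud is instead created as a vCenter cloud with read-only credentials, as described in the guide referenced below.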

For more information about no access or read access mode on the VMware cloud, refer to the Deploying in Read / No Access Mode section of Installing Avi Vantage for VMware vCenter.

For deployment details of Avi Vantage configured as BGP L3Outs, refer to the Cisco ACI with Avi Vantage Deployment Guide.

Logical Network Topology

The following image represents the logical view of one-arm mode:

Figure: One-arm mode

In this topology, the Avi SE is connected to a single port group on a virtual distributed switch (vDS) that is configured as a BGP L3Out in APIC. Client connections come from the ACI fabric and access the virtual service hosted on the Avi SE.

The network topology diagram below shows the one-arm mode deployment with BGP peering for Avi SEs hosted on the VMware cloud in no access or read access mode.

Figure: One-arm mode with BGP peering
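As an illustration of the Avi side of this peering, the sketch below adds a BGP profile to the Service Engine VRF context through the Avi Controller REST API. This is a minimal sketch under assumed values: the controller address, credentials, API version, VRF name, AS numbers, and peer subnet are placeholders for the values used by your L3Out.

    import requests

    CONTROLLER = "https://10.0.0.10"  # placeholder Controller address
    session = requests.Session()
    session.auth = ("admin", "password")  # placeholder credentials
    session.verify = False  # lab only
    session.headers.update({"X-Avi-Version": "20.1.1"})

    # Fetch the VRF context that carries the SE data interfaces ("global" here).
    vrf = session.get(f"{CONTROLLER}/api/vrfcontext?name=global").json()["results"][0]

    # eBGP peering with the ACI border leaf: the SE side (local_as) peers with
    # the fabric (remote_as) on the L3Out subnet. All values are examples.
    vrf["bgp_profile"] = {
        "local_as": 65001,
        "ibgp": False,
        "peers": [{
            "remote_as": 65000,
            "peer_ip": {"addr": "10.10.10.1", "type": "V4"},
            "subnet": {"ip_addr": {"addr": "10.10.10.0", "type": "V4"}, "mask": 24},
        }],
    }
    resp = session.put(f"{CONTROLLER}/api/vrfcontext/{vrf['uuid']}", json=vrf)
    resp.raise_for_status()

Once the peering is established, each virtual service placed on the SEs is advertised to the fabric as a host route.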

Logical Traffic Flow

The logical traffic flow for client virtual machines (VMs) accessing a virtual service hosted on the Avi Service Engines is shown below:

  1. Client VM → ACI fabric → Client endpoint groups (EPG)
  2. Client EPG → Contract → L3Out external EPG
  3. L3Out external EPG → ACI fabric → Avi SE
  4. Avi SE → Load balancing to back-end servers → ACI fabric → L3Out external EPG
  5. L3Out external EPG → Contract → Web EPG
  6. Web EPG → Web server VM

Figure: Logical traffic flow

The return traffic follows the same path as the incoming traffic described above.
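A quick way to verify this flow end to end is to issue a request from a client VM to the virtual service VIP; a successful response confirms that the client contract, the L3Out, and the BGP-advertised VIP route are all in place. The VIP address below is a placeholder.

    import requests

    # Placeholder VIP advertised by the Avi SEs over BGP; replace with your VIP.
    VS_VIP = "https://192.0.2.100"

    # Self-signed certificates are common in lab setups, hence verify=False.
    resp = requests.get(VS_VIP, verify=False, timeout=5)
    print(resp.status_code)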

Deployment Considerations

Virtual Distributed Switch Considerations

In this deployment mode, the ESXi hosts are declared as external routed devices in APIC so that the Avi Service Engines can peer with the ACI fabric to exchange virtual service routes. You can use either an existing vDS or a new one for the Avi SE interfaces.
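If a dedicated port group is needed for the SE data interfaces, it can be created on the vDS with any of the usual vSphere tools. The pyVmomi sketch below is one hypothetical way to do it; the vCenter address, credentials, vDS name, and port group name are all placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details; replace with your own.
    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the existing vDS by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "avi-vds")
    view.Destroy()

    # Create a port group for the Avi SE data interface on the L3Out network.
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="avi-se-l3out", numPorts=8, type="earlyBinding")
    dvs.AddDVPortgroup_Task([spec])
    Disconnect(si)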

Avi Service Engine Routing Considerations

The Avi Service Engines only advertise the virtual service routes; they do not learn any routes over BGP from any peer. There are cases where a Service Engine must send return traffic to the next hop that forwarded the original traffic. In such a scenario, you can use the auto gateway option, which ensures that return traffic is sent to the same MAC address from which the traffic was received.


For more information on the auto gateway feature on Avi Vantage, refer to Auto Gateway.
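As an illustration, the sketch below turns on the auto gateway flag (enable_autogw) for a virtual service through the Avi Controller REST API. This is a minimal sketch with placeholder controller address, credentials, API version, and virtual service name.

    import requests

    CONTROLLER = "https://10.0.0.10"  # placeholder Controller address
    session = requests.Session()
    session.auth = ("admin", "password")  # placeholder credentials
    session.verify = False  # lab only
    session.headers.update({"X-Avi-Version": "20.1.1"})

    # Fetch the virtual service and enable auto gateway so that return traffic
    # is sent back to the MAC address the request was received from.
    vs = session.get(f"{CONTROLLER}/api/virtualservice?name=vs-web").json()["results"][0]
    vs["enable_autogw"] = True
    resp = session.put(f"{CONTROLLER}/api/virtualservice/{vs['uuid']}", json=vs)
    resp.raise_for_status()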

High Availability Considerations

BGP is used to exchange the routes, so high availability is entirely dependent on BGP. By default, the Service Engines are in active/active state. For an active/standby deployment, use the local preference option in BGP in the ACI fabric so that routes from one SE are preferred over those from the other. With local preference set this way, the fabric forwards virtual service traffic only to the preferred SE; if that SE fails, its BGP session drops, its routes are withdrawn, and traffic fails over to the other SE, achieving the active/standby state for a virtual service.


For configuring local preference in the ACI fabric, refer to the Cisco APIC Layer 3 Networking Configuration Guide.