Design Guide for Cisco ACI with Avi Vantage

Overview

Cisco ACI

Cisco Application Centric Infrastructure (ACI) is a software-defined networking solution offered by Cisco for data centers and clouds. It increases operational efficiency, delivers network automation, and improves security for any combination of on-premises data centers and private and public clouds.


ACI is based on an open architecture (open APIs and standards), which makes it straightforward to integrate Layer 4-7 services into the network. The ACI solution offers a robust implementation of multi-tenant security, quality of service (QoS), and high availability.
The key building blocks of Cisco ACI are the Nexus 9000 switching hardware and the APIC controller. The APIC controller provides centralized policy automation and management for the ACI fabric, including a common policy and management framework across physical, virtual, and cloud infrastructure.

The following terminology is used in ACI:

TERMINOLOGY DESCRIPTION
ACI Fabric: A Virtual Extensible LAN (VXLAN) overlay configured by APIC on leaf and spine switches to provide end-to-end connectivity for clients and servers.
Bridge Domains: A bridge domain is a Layer 2 segment, analogous to a VLAN in a traditional network.
Endpoint Groups (EPGs): Endpoint groups are associated with endpoints on the network. The endpoints are identified by their domain connectivity (virtual, physical, or outside) and their connectivity method. For example:
a) Virtual machine port groups (VLAN, VXLAN)
b) Physical interfaces or VLANs, including virtual port channels
c) External VLANs
d) External subnets
Contracts: Directional access lists between provider and consumer EPGs. They comprise one or more filters (ACEs) that identify and allow traffic between the EPGs. By default, communication between EPGs is blocked and requires a contract to allow it through.
Application Network Profiles: Containers that group together one or more EPGs and their associated connectivity policies.
L4-L7 Service Graph Templates: A generic representation of an expected traffic flow in the network. These templates are reusable and can be used in multiple contracts.
L4-L7 Device: This consists of the following two types:
a) Logical device: Represents a cluster of two devices that operate in active/standby mode. This is a logical representation of the physical or virtual device (load balancer), along with logical interfaces that define the connectivity.
b) Concrete device: Represents an actual service device, such as a virtual load balancer. In the case of Avi Vantage, these are the actual SE VMs.
Tenants: Network-wide administrative containers that act as logical containers for application policies.

For more information on ACI network centric infrastructure, refer to ACI – Network-Centric Approach White Paper.

Avi Vantage

The Avi Vantage Platform provides enterprise-grade distributed ADC and iWAF (Intelligent Web Application Firewall) solutions for on-premises and public cloud infrastructure. Avi Vantage also provides built-in analytics that enhance the end-user application experience and simplify operations for network administrators.

Avi Vantage is a complete software solution which runs on commodity x86 servers or as a virtual machine and is entirely enabled by REST APIs.
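Because every operation is exposed through REST, the Controller can be driven entirely without the UI. The sketch below assembles two representative requests against the Controller API; the controller address and credentials are hypothetical placeholders, and the `/login` and `/api/virtualservice` endpoints with the `X-Avi-Version` header follow the documented Avi API conventions:

```python
# Sketch of driving the Avi Controller through its REST API.
# The controller address and credentials are hypothetical placeholders.
CONTROLLER = "https://avi-controller.example.com"

def login_request(username, password):
    """Assemble the POST used to open an API session on the Controller."""
    return {
        "method": "POST",
        "url": f"{CONTROLLER}/login",
        "data": {"username": username, "password": password},
    }

def list_virtual_services_request(api_version="17.1.1"):
    """Assemble a GET for all configured virtual services."""
    return {
        "method": "GET",
        "url": f"{CONTROLLER}/api/virtualservice",
        # X-Avi-Version pins the API version the Controller should honor.
        "headers": {"X-Avi-Version": api_version},
    }
```

An HTTP client such as python-requests would send these requests and reuse the session cookie returned by `/login` on subsequent calls.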

The product components include:

  • Avi Controller (control plane): Central policy and management plane that analyzes the real-time telemetry collected from Avi Service Engines and presents it in visual, actionable dashboards for administrators using an intuitive user interface built on RESTful APIs.
  • Avi Service Engines (data plane): Distributed load balancer with iWAF that are deployed closest to the applications across multiple cloud infrastructures. The Avi Service Engines collect and send real-time application telemetry to Avi Controller.

The Avi Vantage architecture is Controller-led, decoupling the control plane from the data plane. This makes it possible to automate L4-L7 services with the Avi Controller while ACI provides L2-L3 network automation for the Service Engines.

Below is the architectural representation of Avi Vantage integration with ACI.

Software Requirements

The following are the recommended software requirements for Avi Vantage and ACI:

Component Version
Avi Controller 17.1 or later
Cisco APIC 1.03f or later
VMware vCenter 5.1, 5.5, 6.0, or 6.5

Integration Options for Avi Vantage in ACI Ecosystem

Avi Vantage can be integrated with ACI in the following modes:

  • Service Manager mode with REST API: This is a hybrid integration mode in which ACI handles L2-L3 automation for L4-L7 devices and Avi Vantage Controller handles configuring the L4-L7 services.
  • vCenter Integration with Avi Vantage in ACI Ecosystem: This is a traditional mode in which Avi Vantage is not integrated with ACI; ACI is only used to provide access between the client network and the virtual service network. Avi Vantage integrates only with vCenter, in write access mode.

Service Manager mode with REST API

The service manager mode with REST API provides complete automation and flexibility to insert L4-L7 services with ease in ACI fabric. The primary advantage is the end-to-end automation using Cisco ACI and Avi Vantage. The following three sections explain the detailed configuration workflow for this mode.

1. Day Zero Config

Note:
a) This is a one-time setup which requires a vCenter deployment. The Avi Controller should be configured with vCenter and APIC credentials on the cloud connector page.
b) Ensure that the Managed Mode checkbox is unchecked on the cloud connector page. With this, the Avi Controller will use REST APIs for communication. Enter the tenant and VMM domain details that are configured in ACI.

Navigate to Infrastructure > Clouds and create a new cloud. You can also use the default cloud, depending on your requirements.

Click Next and select the data center. If the virtual service network is not a directly connected network, select the Prefer Static Routes vs Directly Connected Network checkbox and use static routes for VIP network resolution.

Click Next and select the management network for the SE interfaces. For static address management, add a static address pool instead of using a DHCP server.

Note: The networks displayed in the above screenshot are the port groups imported from vCenter, listed here for management network selection. Once the management network is selected, the Avi Controller will ignore all other management networks.

The Avi Controller will create the L4-L7 device under the tenant specified on the cloud connector page. This L4-L7 device can be exported to other tenants for service graph creation in those tenants. The following screenshot displays the Avi Controller registered as an L4-L7 device in ACI.

Create an L4-L7 service graph manually with a two-node cluster using the L4-L7 device created in the earlier step. Navigate to Tenant > L4-L7 Service Graph and choose Create New Service Graph with a two-node cluster as shown below.

The two nodes in the service graph represent a single Service Engine and are required for the high availability and virtual service scaling features.

Note: Use the naming conventions ADCTier1 and ADCTier2 in the service graph. These keywords are case-sensitive.

To address any issues, refer to the Troubleshooting section.

Once the service graph is created and associated with contracts and EPGs, proceed with the virtual service provisioning. For more details on associating contracts and EPGs, refer to the east-west or north-south deployment sections, based on the required use case.

2. Network Provisioning in Avi Controller

In the ACI ecosystem, the Avi Controller supports only static IP mode for Service Engine interfaces. The bridge domains (BDs) created in APIC for a particular tenant are imported as network entities from APIC to the Avi Controller.

Select a pool range for every network entity imported from APIC. The Avi Controller uses this pool range to assign IP addresses to the created SEs.

Navigate to Infrastructure > Networks and select the cloud you created. Edit the BD networks imported from ACI and add the IP address pool. Repeat these steps for the other BDs.
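Via the API, the same change amounts to patching the imported network object with a subnet and a static range. The sketch below is only an approximation of the Avi network object layout; the field names, subnet, and address range are illustrative, not authoritative:

```python
# Sketch: attach a static IP address pool to a BD network imported from APIC.
# The field layout approximates the Avi network object; names and ranges
# below are illustrative examples.
def add_static_pool(network_obj, subnet, mask, begin, end):
    """Append a configured subnet with a static SE address range."""
    network_obj.setdefault("configured_subnets", []).append({
        "prefix": {"ip_addr": {"addr": subnet, "type": "V4"}, "mask": mask},
        "static_ranges": [{
            "begin": {"addr": begin, "type": "V4"},
            "end": {"addr": end, "type": "V4"},
        }],
    })
    return network_obj

# BD1 as it might appear after import from APIC (illustrative).
bd1 = add_static_pool({"name": "BD1"}, "10.10.10.0", 24,
                      "10.10.10.50", "10.10.10.100")
```

The patched object would then be written back to the Controller's network endpoint for that cloud.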

3. Virtual Service Provisioning

  1. After completing the Day Zero configuration, APIC will create a deployed service graph instance based on the contract associated in the earlier section.

  2. The deployed service graph instance from APIC is imported to the Avi Controller automatically.

  3. Create the virtual service. For the virtual service name, click the drop-down and select the deployed service graph instance that was imported to the Avi Controller; for pools, select the EPGs configured for the servers.

  4. Creating the virtual service will trigger Service Engine creation in vCenter and will add the SEs to APIC as L4-L7 concrete devices.

  5. Device and interface mapping and network stitching are done automatically by APIC; no user intervention is required.

  6. After the SEs are created, the virtual service will be ready to accept the traffic.

The Avi SE will be deployed in Go-To mode (routed mode or two-arm mode).
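On the Controller side, the provisioning steps above reduce to a single virtual service object. The sketch below builds such an object; the field names only approximate the Avi `virtualservice` schema, and the graph instance name, VIP, and pool reference are hypothetical:

```python
def virtual_service_payload(graph_instance, vip, pool_ref, port=80):
    """Build an illustrative virtual service body for the Controller API.

    The name must match the deployed service graph instance imported
    from APIC, and the pool references the server EPG members.
    """
    return {
        "name": graph_instance,
        "vip": [{"ip_address": {"addr": vip, "type": "V4"}}],
        "services": [{"port": port}],
        "pool_ref": pool_ref,
    }

# Hypothetical graph instance, VIP, and pool reference.
vs = virtual_service_payload("Demo-Graph-Instance", "10.10.20.10",
                             "/api/pool?name=WEB-EPG-pool")
```

Submitting such a body is what triggers SE creation in vCenter and the concrete device registration with APIC described above.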

Note: Each virtual service needs a contract with an associated service graph. For instance, creating 10 virtual services will require 10 associated service graphs. So, create a service graph template once and associate it with all contracts for virtual service creation.

Below are the REST API communication workflow steps:

  1. Avi Vantage uses the REST API to get the tenant details for creating a logical device.
  2. Once the tenant is chosen, Avi Vantage creates an L4-L7 device in that tenant.
  3. A service graph should be created manually in APIC, along with the contract assignment for the EPGs. APIC will create a deployed service graph instance, which is provided to Avi Vantage and used for virtual service creation.
  4. APIC syncs the configured EPGs to the Avi Controller and vCenter.
  5. After the VIP is created, the Avi Controller creates the SEs and registers them with APIC as a concrete device.
  6. APIC maps this device to the logical device context and maps the interfaces between the logical interfaces and the SE vNICs.
  7. APIC interacts with the VMM domain and creates the dynamic port groups, which are mapped to the SE interfaces in the VMware domain.
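Step 1 of this workflow corresponds to two well-known APIC REST calls: a POST to `aaaLogin.json`, whose response sets the APIC-Cookie used on all later requests, and a class-level query for `fvTenant` objects. A minimal sketch, with a placeholder APIC address and credentials:

```python
# Sketch of the first workflow step: authenticating to APIC and listing
# tenants. The APIC address and credentials are placeholders.
APIC = "https://apic.example.com"

def apic_login_body(user, pwd):
    """APIC expects an aaaUser body on POST /api/aaaLogin.json; the
    response sets the APIC-Cookie required on subsequent requests."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def tenant_query_url():
    """Class-level query returning every fvTenant managed object."""
    return f"{APIC}/api/class/fvTenant.json"
```

A missing or expired APIC-Cookie is exactly what produces the 403 "Need a valid webtoken cookie" warning shown in the sample log later in this guide.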

Note: This is the recommended mode for any deployment, as it allows service graph template customization of the traffic flow by adding a firewall, IDS, and so on, along with Avi Vantage.

Below is an example of the configuration workflow for Cisco ACI and Avi Vantage integration in service manager mode with REST API.

vCenter Integration with Avi Vantage in ACI Ecosystem

In this mode, Avi Vantage is not integrated with the APIC controller. Instead, the Controller is integrated with VMware, and the VMware infrastructure is used to configure the interfaces and port groups.

As seen above, Avi Vantage integrates with vCenter in write access mode rather than with ACI. Given below is the configuration workflow for this mode.

To deploy Avi Vantage in vCenter with write access mode, refer to Installing Avi Vantage for VMware vCenter.

This is a traditional deployment where ACI provides access (contracts) between the clients and virtual service. ACI will not provide any L2-L3 automation in this case.

Configuring ACI Contracts for Avi Vantage

This section discusses configuring ACI contracts. For complete information on Cisco ACI infrastructure, refer to Operating Cisco Application Centric Infrastructure.

After deploying Avi Vantage in vCenter write access mode, you can create contracts to allow communication between the client and virtual services’ network. The contracts can be configured in ACI for the following two deployment modes:

Avi Vantage deployed in two-arm mode:
In this mode, the clients and servers are in networks different from the network in which Avi Vantage hosts the virtual services. Create a contract to allow communication between the client EPG and the virtual services' EPG. If Avi Vantage has an interface in the server EPG network, no contract is required between the server EPG and Avi Vantage.

Avi Vantage deployed in one-arm mode:
In this mode, the clients, servers, and the Avi load balancer are in the same network, so no contracts are required, as all communication within an EPG is allowed by default. However, if Avi Vantage is present only in the client network with no interface in the server network, a contract is required between the client and server EPGs.

Use Cases for Avi Vantage Service Manager Mode with REST API in ACI Ecosystem

This figure depicts an application traffic flow which includes the north-south traffic flow from clients to virtual services and also the east-west traffic flow for internal application communication.

This section discusses the following two use cases:

  • Avi Vantage deployment for east-west traffic
  • Avi Vantage deployment for north-south traffic

In both designs, Avi Vantage is deployed in Go-To mode (two-arm mode) and a service graph should be created as described in the Service manager mode with REST API section. The service graphs are the same for both designs; you can even use a single service graph template for both the east-west and the north-south traffic designs.

Avi Vantage Deployment for East-West Traffic

Assuming the service graph has been created as described in the Service manager mode with REST API section, this section explains the Cisco ACI EPG and contract configuration for east-west traffic.

East-west traffic is generally server-to-server traffic, mostly from one VM to another. So, bridge domains with VMM attachments for the EPGs are used.

The most common design for east-west traffic is a 3-tier architecture, represented above; its naming convention is used for the objects in the configuration steps that follow.

ACI Configuration

The following steps provide a sample configuration of ACI for east-west traffic. For more details on ACI fundamentals, refer to Cisco Application Centric Infrastructure Fundamentals.

Under Tenants, navigate to the configured tenant. To create an isolated network for the traffic, navigate to Networking > VRF and click on Create VRF.

Navigate to Bridge Domains and create a bridge domain named BD1 for the web subnet. Select the VRF created in the earlier step and add the subnet for which ACI will create an SVI interface. This interface is also used as the gateway for the servers.

The screenshot below represents creating a bridge domain.

Follow similar steps to create the BD2 and BD3 bridge domains for the application and database subnets.

Under Application Profile, create an application profile with the name 3-Tier-APP.

Navigate to 3-Tier-APP > Application EPGs and create EPGs for web, application, and database.

The following screenshot represents an example of creating EPGs.

Note: Select the Associate to VM Domain Profiles checkbox and click Next to add your VMM domains to this EPG; this communicates with vCenter and creates the port groups.

Navigate to Security Policy > Contracts > Create Contract. Enter the contract name and add a contract subject with a filter and the service graph.

The example below shows creating a contract and adding a filter along with the service graph.

  • Create two contracts, one for WEB-APP traffic load balancing and another for APP-DB traffic load balancing. As shown in the screenshot above, add a filter and service graph to both contracts.

  • After creating contracts, associate the contracts with the EPGs.

  • Navigate to Application Profile 3-Tier-APP > Application EPGs and select WEB-EPG. Navigate to Contract > Add Consumed Contract and select the contract created earlier for WEB-APP communication.

The screenshot represents an example of associating the contract with EPGs.

Follow similar steps for the other EPGs and associate the contracts accordingly.
For instance, for WEB-APP traffic load balancing, WEB-EPG consumes the contract and APP-EPG provides it. Similarly, for APP-DB traffic, APP-EPG consumes the contract and DB-EPG provides it.

In ACI terms, the consumer side of a contract is analogous to the client and the provider side to the server. In this case, WEB-EPG acts as the client EPG that consumes resources from APP-EPG, which acts as the server EPG that provides them.
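In the APIC object model, these associations are the `fvRsCons` (consumed) and `fvRsProv` (provided) children of an EPG, each naming the contract through the `tnVzBrCPName` attribute. The sketch below builds such REST bodies for a hypothetical WEB-APP-LB contract:

```python
def consume_contract(contract_name):
    """Body attaching a contract to an EPG in the consumer role."""
    return {"fvRsCons": {"attributes": {"tnVzBrCPName": contract_name}}}

def provide_contract(contract_name):
    """Body attaching a contract to an EPG in the provider role."""
    return {"fvRsProv": {"attributes": {"tnVzBrCPName": contract_name}}}

# WEB-EPG consumes and APP-EPG provides the (hypothetical) WEB-APP-LB contract.
web_side = consume_contract("WEB-APP-LB")
app_side = provide_contract("WEB-APP-LB")
```

Each body would be posted to the corresponding EPG's distinguished name, which is what the Add Consumed Contract and Add Provided Contract UI actions do under the hood.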

Avi Vantage Configuration

After associating the contract with the EPGs, you will see the deployed graph instance. Refer to the Virtual Service Provisioning section under Service manager mode with REST API to configure the virtual service. After this, the Avi Vantage SEs will be mapped to the logical device under L4-L7 devices.

The screenshot below represents an example of concrete device and logical device mapping, where the cluster interfaces are mapped to the SE interfaces.

Note: The automatic port group mapping on vCenter for the Avi Vantage SEs usually takes 2 to 3 minutes. During this period, the pool members are expected to be down. After the port groups are assigned, the pool member status will turn green (up).

Avi Vantage Deployment for North-South Traffic

Assuming the service graph has been created as described in the Service manager mode with REST API section, this section explains the Cisco ACI EPG and contract configuration for north-south traffic.

North-south traffic flows between the clients and the servers. The clients can be directly connected to the fabric or reach it from an external WAN using a Layer 3 Out on ACI. The servers are virtual services in this design.

ACI Configuration

The following steps provide a sample configuration of ACI for north-south traffic. For more details on ACI fundamentals, refer to Cisco Application Centric Infrastructure Fundamentals.

Most of the configuration steps mentioned for east-west traffic are also applicable to north-south deployments, except for the Layer 3 Outside network. This document uses different naming conventions for the objects in the configuration steps.

In most cases, clients are not directly connected to ACI and reach the ACI fabric over a WAN link from branch sites. In such cases, a Layer 3 Out is configured on ACI for clients to access the servers behind the fabric.

For Layer 3 Outs, you can use dynamic or static routing, depending on the WAN connectivity.

The virtual service-App bridge domain is the virtual service network, that is, the network hosting the actual virtual services on the Service Engines.

You must configure Multiprotocol BGP (MP-BGP) in the ACI fabric and attach the Attachable Access Entity Profile (AAEP) policy for physical domain connectivity. Refer to the Cisco documentation to complete this configuration, after which you can configure the external routed network.

To configure an external routed network on ACI, navigate to Networking > External Routed Network > Create Routed Outside and provide a name for the Layer 3 Out. Add the node profile and interface profile. Add the external networks (also referred to as external EPGs), which can be 0.0.0.0/0 to match all routes, or specific routes for specific subnets.

The screenshot below shows how to configure Layer 3 Outs on ACI.

Add this VRF to the virtual service-App bridge domain that was created.

The screenshot below displays how to create external networks.

For the virtual service-App bridge domain, associate the Layer 3 Out as shown in the screenshot below. If the subnet routes are advertised to an external router, ensure that the subnets under the bridge domains are set to public.

After ensuring external connectivity, you can verify that the routes are populated under Inventory > Pod > Node > Protocols.

Create a contract with a service graph, similar to the east-west contract; the only change is the addition of the external network to access the virtual service. The contract between the external network and the virtual service-App bridge domain must be present along with the service graph.

For the contract created with the service graph, assign the consumer role to the external network and the provider role to the virtual service-App EPG so that communication is allowed.

The screenshot below represents the output seen after attaching the contract to the external network and virtual service-App EPG.

Avi Vantage Configuration

After associating the contract with the EPGs, you will see the deployed graph instance. Refer to the Virtual Service Provisioning section under Service manager mode with REST API to configure the virtual service. After this, the Avi Vantage SEs will be mapped to the logical device under L4-L7 devices.

Now the external clients should be able to access the virtual services hosted on Avi Vantage’s Service Engines.

Monitoring and Troubleshooting

Monitoring

The Avi Controller hosts a real-time analytics dashboard that provides rich insight into the end-user application experience and deep visibility into web security issues.

As the Avi solution combines load balancing and iWAF, it provides application load balancing and application security analytics in a single window. The following screenshot shows an example of Avi Vantage's rich analytics.

Avi Vantage virtual service real-time metrics display transactions per second, delay, response times, and so on.

Avi Vantage logs provide a detailed view of each connection. In this case, the end-to-end communication between the client, virtual service, and server is displayed.

Avi Vantage WAF analytics provide real-time visibility into web security attacks on a virtual service, displaying ongoing attacks along with specifics of the clients initiating them.

The following are the benefits of Avi Controller’s monitoring capabilities in Cisco ACI ecosystem:

  • Monitors load balancer (SE) and application server health.
  • Provides real-time application analytics.
  • Protects applications against L4-L7 DDoS attacks.
  • Monitors APIC EPG membership to automatically add or remove application instances from pools.
  • Performs load balancer auto scaling based on real-time performance metrics, such as CPU, memory, bandwidth, connections, and latency.
  • Provides point-and-click simplicity for iWAF policies with central control.
  • Supports granular security insights on traffic flows and rule matches to enable precise policies using iWAF.

Troubleshooting

As discussed in the Monitoring section, you can dynamically isolate any issue related to client and server communication using the Avi Controller, which helps decrease the mean time to resolution (MTTR).

The following are a few common issues encountered and their possible resolution:

  1. Deployed service graph instance is not created in ACI or Avi Controller is not able to import the deployed service graph instance from ACI.
    Possible Cause and Recommended Solution: The possible cause for this issue is the service graph cluster node naming convention. Ensure that the cluster nodes are named as ADCTier1 and ADCTier2 in the service graph.

  2. LIF to CIF invalid mapping errors and Cdev config errors in ACI.
    Possible Cause and Recommended Solution: These are temporary errors seen for about 2 to 3 minutes during the dynamic port group mapping in vCenter. It takes about 2 minutes for the SEs to spin up. If the error persists, verify the communication between ACI and vCenter.
    For more information on different L4-L7 error messages and verification required for integration, refer to Troubleshooting Cisco Application Centric Infrastructure.

  3. Dynamic port groups are not created on vCenter and the SEs are not able to communicate with pools or clients.
    Possible Cause and Recommended Solution: This issue is seen when the VMM domain association fails for the client/server EPGs. APIC creates the dynamic port groups on vCenter only after the VMM association is created; these port groups can then be assigned to the clients/servers in vCenter. So, ensure that the VMM domain is assigned to the EPGs.

  4. Pool servers are down, or virtual service is down.
    Possible Cause and Recommended Solution: This is caused when a static address pool is not configured for the SE interfaces. A static address pool needs to be allocated to the virtual service networks and pool networks so that the SE interfaces are assigned addresses from those networks. To complete this configuration, on the Avi UI navigate to Infrastructure > Networks > Assign Static Address Pool. For more information, refer to the Virtual Service Provisioning section under Service manager mode with REST API.

You can access the log files on Avi Controller to verify the communication between ACI and Avi Vantage. The following are a few sample outputs of communication logs from Avi Controller:



admin@Demo-Controller-17:/$ more /opt/avi/log/apic_agent.log
[2017-11-02 08:55:55,712] INFO [apic_agent._get_apic_tenants:2573] [u'common', u'MTenant_Base1', u'CiscoScale', u'Demo', u'ApicScale', u'MTenant_2', u'vCentre-with-Avi', u'infra', u'User2', u'MTenant_Base', u'MTenant_1', u'ApicL3', u'miska', u'Development', u'Training', u'PJ', u'User1_A1', u'user3', u'User2-unmanaged', u'mgmt', u'UI_Test_Tenant', u'ApicVipShare', u'DemoCommon', u'TME_India']
[2017-11-02 08:55:56,988] WARNING [apic_agent._apic_refresh_helper:1262] LdevIf subscriptionRefresh failed status 403 rsp {"totalCount":"1","imdata":[{"error":{"attributes":{"code":"403","text":"Need a valid webtoken cookie (named APIC-Cookie) or a signed request with signature in the cookie APIC-Request-Signature for all REST API requests"}}}]}
[2017-11-02 08:56:15,928] INFO [apic_agent.Update:288] Updating APIC configuration 
[2017-11-02 08:56:15,929] INFO [apic_agent.Update:289] uuid: "APICCONFIG"
obj_type: APICCONFIGURATION
resource {
  cloud {
    uuid: "cloud-a5f36abc-82b2-449f-bde2-f7c53f826c1d"
    name: "Default-Cloud"
    vtype: CLOUD_VCENTER
    vcenter_configuration {
      username: "root"
      password: "password"
      vcenter_url: "10.X.X.X"
      privilege: WRITE_ACCESS
      datacenter: "Apic"
      management_network: "network-11-cloud-a5f36abc-82b2-449f-bde2-f7c53f826c1d"
      management_ip_subnet {
        ip_addr {
          addr: "10.X.X.X"
          type: V4
        }
        mask: 24
      }
    }


For any further troubleshooting assistance with integration issues, contact avinetworks-support@avinetworks.com.

Resources

  1. Avi Knowledge Base: https://kb.avinetworks.com/
  2. Cisco ACI Reference Guide: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737909.html
  3. Avi Vantage with Cisco ACI Solution Brief: http://info.avinetworks.com/solution-brief-cisco-aci-integration
  4. Cisco ACI White Paper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737361.html
  5. Cisco ACI Service Graph Design White Paper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-734298.html
  6. Avi Vantage with Cisco ACI Demo Video: https://www.youtube.com/watch?v=xfQ_McCikJQ&t=2s