SE Data Plane Architecture and Packet Flow

Overview

This user guide explains the details of the Service Engine (SE) data plane architecture and packet flow.

The Data Plane Development Kit (DPDK) comprises a set of libraries that boosts packet processing in data plane applications.

The SE data path performs the following packet-processing functions:

  • Server health monitor

  • TCP/IP Stack - TCP for all flows

  • Terminate SSL

  • Parse protocol header

  • Server load balancing for SIP/L4/L7 App profiles

  • Sending and receiving packets
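To make the order of these proxy-side stages concrete, the following minimal C sketch chains them together for a single proxied flow. It is an illustration only, not Avi SE source code; the stage functions and types are hypothetical stand-ins.

    /* Illustrative sketch only -- not Avi SE source code. The stage functions
     * below are hypothetical stand-ins that print the stage they represent. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct pkt  { const char *payload; };  /* placeholder for a received packet    */
    struct flow { int vs_id; };            /* placeholder for per-connection state */

    /* Each stage returns false to stop processing (packet consumed or dropped). */
    static bool tcp_input(struct flow *f, struct pkt *p)      { (void)f; (void)p; puts("TCP/IP stack");       return true; }
    static bool ssl_terminate(struct flow *f, struct pkt *p)  { (void)f; (void)p; puts("SSL termination");    return true; }
    static bool parse_protocol(struct flow *f, struct pkt *p) { (void)f; (void)p; puts("Protocol parsing");   return true; }
    static bool load_balance(struct flow *f, struct pkt *p)   { (void)f; (void)p; puts("Server selection");   return true; }
    static bool send_to_server(struct flow *f, struct pkt *p) { (void)f; (void)p; puts("Transmit to server"); return true; }

    typedef bool (*stage_fn)(struct flow *, struct pkt *);

    /* The proxy runs the stages in this order for every proxied flow. */
    static const stage_fn se_data_path[] = {
        tcp_input, ssl_terminate, parse_protocol, load_balance, send_to_server,
    };

    int main(void)
    {
        struct flow f = { .vs_id = 1 };
        struct pkt  p = { .payload = "GET / HTTP/1.1" };

        for (size_t i = 0; i < sizeof(se_data_path) / sizeof(se_data_path[0]); i++)
            if (!se_data_path[i](&f, &p))
                break;
        return 0;
    }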

SE System Logical Architecture

Figure: SE system logical architecture

The following are the features of each component in the SE system logical architecture:

  • Process — The following are the three processes in the Service Engine:
    • SE-DP
    • SE-Agent
    • SE-Log-Agent
  • Work Process
    • SE-DP — The role of this process can be proxy-alone, dispatcher-alone, or a proxy-dispatcher combination.

      • Proxy-alone — Performs full TCP/IP and L4/L7 processing, and applies the policies defined for each app/virtual service.

      • Dispatcher-alone

        • Processes Rx of the (v)NIC and distributes flows across the proxy services via per-proxy lock-less RxQs, based on the current load of each proxy service.

        • Manages the reception and transmission of packets through the NIC.

        • Polls the proxy TxQs and transmits the packets to the NIC (see the dispatcher sketch after this list).

      • Proxy-dispatcher — Acts as both a proxy and a dispatcher, depending on the configuration and the resources available.

    • SE-Agent — Acts as the configuration and metrics agent for the Controller. It can run on any available core.

    • SE-Log-Agent — Maintains a queue for logs. It batches the logs from all SE processes and sends them to the log manager in the Controller. SE-Log-Agent can run on any available core.
  • Flow-Table — A table that stores relevant information about flows.

    • Maintains the flow-to-proxy-service mapping.
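The following DPDK-style sketch ties the dispatcher and Flow-Table descriptions above together: a dispatcher core polls its NIC queue, consults a (greatly simplified) flow table to find or assign the owning proxy, hands packets to that proxy over a per-proxy lock-less ring, and then drains each proxy TxQ back to the NIC. This is a minimal sketch under stated assumptions, not Avi SE source code: the array-based flow table keyed by the NIC RSS hash, the round-robin pick_proxy() heuristic, and the ring names are illustrative, and EAL, port, mempool, and ring initialization are assumed to happen elsewhere.

    /* Illustrative DPDK-style dispatcher loop -- not Avi SE source code.
     * Assumes EAL, NIC port/queue, mempool, and ring setup are done elsewhere,
     * e.g. rings created with rte_ring_create(..., RING_F_SP_ENQ | RING_F_SC_DEQ). */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST_SIZE    32
    #define MAX_PROXIES   8
    #define FLOW_TABLE_SZ 65536                      /* toy flow table, power of two   */

    struct rte_ring *proxy_rxq[MAX_PROXIES];         /* per-proxy lock-less RxQ (SP/SC) */
    struct rte_ring *proxy_txq[MAX_PROXIES];         /* per-proxy TxQ polled here       */
    static uint8_t   flow_to_proxy[FLOW_TABLE_SZ];   /* 0 = unassigned, else proxy + 1  */

    /* Stand-in for the real load-based selection of a proxy for a new flow. */
    static uint8_t pick_proxy(unsigned nb_proxies)
    {
        static unsigned rr;
        return (uint8_t)(rr++ % nb_proxies);
    }

    void dispatcher_loop(uint16_t port, uint16_t queue, unsigned nb_proxies)
    {
        struct rte_mbuf *burst[BURST_SIZE];

        for (;;) {
            /* Rx: poll the (v)NIC queue owned by this dispatcher. */
            uint16_t nb_rx = rte_eth_rx_burst(port, queue, burst, BURST_SIZE);

            for (uint16_t i = 0; i < nb_rx; i++) {
                /* Flow-table lookup, keyed here by the NIC-computed RSS hash. */
                uint32_t idx  = burst[i]->hash.rss & (FLOW_TABLE_SZ - 1);
                uint8_t  slot = flow_to_proxy[idx];

                if (slot == 0) {                      /* new flow: assign a proxy */
                    slot = (uint8_t)(pick_proxy(nb_proxies) + 1);
                    flow_to_proxy[idx] = slot;
                }
                /* Hand the packet to the owning proxy over its lock-less ring. */
                if (rte_ring_enqueue_burst(proxy_rxq[slot - 1],
                                           (void **)&burst[i], 1, NULL) == 0)
                    rte_pktmbuf_free(burst[i]);       /* ring full: drop          */
            }

            /* Tx: drain each proxy TxQ and transmit the packets to the NIC. */
            for (unsigned p = 0; p < nb_proxies; p++) {
                unsigned nb_tx = rte_ring_dequeue_burst(proxy_txq[p], (void **)burst,
                                                        BURST_SIZE, NULL);
                uint16_t sent = rte_eth_tx_burst(port, queue, burst, (uint16_t)nb_tx);
                while (sent < nb_tx)                  /* free anything not sent   */
                    rte_pktmbuf_free(burst[sent++]);
            }
        }
    }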

Based on the resources available, the Service Engine configures the optimum number of dispatchers. You can override this by using Service Engine group properties. Multiple dispatching schemes are supported, based on the ownership and usage of the NICs:

  • A single dispatcher process owning and accessing all the NICs.

  • Ownership of the NICs distributed among a configured number of dispatchers.

  • Multi-queue configuration, where all dispatcher cores poll one or more NIC queue pairs, but with a mutually exclusive se_dp-to-queue-pair mapping (see the sketch after the next paragraph).

The remaining se_dp instances act as proxies. The combination of NICs and dispatchers determines the PPS that an SE can handle. The CPU speed determines the maximum data plane performance (CPS/RPS/TPS/throughput) of a single core, and performance scales linearly with the number of cores in the SE.
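The mutually exclusive se_dp-to-queue-pair mapping mentioned above can be pictured with the small sketch below. It only illustrates the concept; the SE's actual assignment algorithm is not shown, and the round-robin striping rule used here is an assumption.

    /* Illustrative sketch of a mutually exclusive se_dp-to-queue-pair mapping.
     * Each NIC queue pair is polled by exactly one se_dp instance, so the Rx/Tx
     * queues need no locking. The striping rule below is an assumption. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Does se_dp instance dp_idx own queue pair q, with nb_dp instances in total? */
    static bool owns_queue_pair(unsigned dp_idx, unsigned q, unsigned nb_dp)
    {
        return (q % nb_dp) == dp_idx;
    }

    int main(void)
    {
        const unsigned nb_dp = 4, nb_queue_pairs = 8;

        for (unsigned dp = 0; dp < nb_dp; dp++) {
            printf("se_dp %u polls queue pairs:", dp);
            for (unsigned q = 0; q < nb_queue_pairs; q++)
                if (owns_queue_pair(dp, q, nb_dp))
                    printf(" %u", q);
            printf("\n");
        }
        return 0;
    }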

Tracking CPU Usage

CPU usage is intensive in the following cases:

  • Proxy
    • SSL Termination
    • HTTP Policies
    • Network Security Policies
    • WAF
  • Dispatcher
    • High PPS
    • High Throughput
    • Small Packets (for instance, DNS)

Packet Flow from Hypervisor to Guest VM

SR-IOV

Single Root I/O Virtualization (SR-IOV) assigns a part of the physical port (PF, Physical Function) resources to the guest operating system. A Virtual Function (VF) is directly mapped as the vNIC of the guest VM, and the guest VM needs to implement the specific VF’s driver.

SR-IOV is supported on CSP and OpenStack no-access deployments.

For more details on SR-IOV, refer to SR-IOV with VLAN and Avi Vantage (OpenStack No-Access) Integration in DPDK.

Virtual Switch

The virtual switch within the hypervisor implements L2 switching functionality and forwards traffic to each guest VM’s vNIC. The virtual switch either maps a VLAN to a vNIC, or terminates overlay networks and maps the overlay segment-ID to a vNIC.

Note: AWS/Azure clouds implement the full virtual switch and overlay termination within the physical NIC, and network packets bypass the hypervisor.

In these cases, as the VF is directly mapped to the vNIC of the guest VM, the guest VM needs to implement the specific VF’s driver.

VLAN Interfaces and VRFs

VLAN

VLAN interfaces are logical interfaces that can be configured with an IP address. They act as child interfaces of the parent vNIC interface. VLAN interfaces can also be created on port channels/bonds.

VRF Context

A VRF identifies a virtual routing and forwarding domain. Every VRF has its own routing table within the SE. Similar to a physical interface, a VLAN interface can be moved into a VRF. The IP subnet of the VLAN interface is part of the VRF and its routing table. A packet with a VLAN tag is processed within the corresponding VRF context. Interfaces in two different VRF contexts can have overlapping IP addresses.
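As a rough illustration of per-VRF routing state, the sketch below models two VRF contexts that each own their own routing table and carry the same 10.10.0.0/24 subnet on different VLAN child interfaces. The structures, interface names, and lookup are simplified assumptions, not the SE's actual implementation.

    /* Illustrative sketch of per-VRF routing state -- not Avi SE source code.
     * Each VRF context owns its own routing table, so a VLAN interface placed in
     * vrf-a and another in vrf-b can use overlapping IP subnets without conflict. */
    #include <stdint.h>
    #include <stdio.h>

    struct route {             /* one connected route: subnet -> egress interface */
        uint32_t prefix;       /* network address (host byte order)               */
        uint8_t  prefix_len;
        const char *ifname;    /* VLAN child interface, e.g. "eth0.100"           */
    };

    struct vrf_context {
        const char  *name;
        struct route routes[8];   /* per-VRF routing table (toy, fixed size)      */
        unsigned     nb_routes;
    };

    /* Longest-prefix match within a single VRF's routing table. */
    static const struct route *vrf_lookup(const struct vrf_context *vrf, uint32_t dst)
    {
        const struct route *best = NULL;
        for (unsigned i = 0; i < vrf->nb_routes; i++) {
            const struct route *r = &vrf->routes[i];
            uint32_t mask = r->prefix_len ? ~0u << (32 - r->prefix_len) : 0;
            if ((dst & mask) == (r->prefix & mask) &&
                (!best || r->prefix_len > best->prefix_len))
                best = r;
        }
        return best;
    }

    int main(void)
    {
        /* Both VRFs carry 10.10.0.0/24, each on its own VLAN child interface. */
        struct vrf_context vrf_a = { "vrf-a", { { 0x0a0a0000, 24, "eth0.100" } }, 1 };
        struct vrf_context vrf_b = { "vrf-b", { { 0x0a0a0000, 24, "eth0.200" } }, 1 };

        uint32_t dst = 0x0a0a0005;                  /* 10.10.0.5 */
        printf("%s -> %s\n", vrf_a.name, vrf_lookup(&vrf_a, dst)->ifname);
        printf("%s -> %s\n", vrf_b.name, vrf_lookup(&vrf_b, dst)->ifname);
        return 0;
    }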

Health Monitor

Health monitors run in the data path, within the proxy, as synchronous operations along with packet processing. Health monitors are distributed across all the proxy cores, so health monitoring scales linearly with the number of cores in the SE.

For instance, 10 virtual services, each with a pool of 5 servers and one HM per server, results in 50 health monitors across all the virtual services. A 6-core SE with a dedicated dispatcher core will have 5 proxies. Each proxy runs 10 HMs, and the HM status is maintained in shared memory across all the proxies.
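The arithmetic above can be sketched as follows. The modulo sharding and the status array used here are illustrative assumptions; they are only meant to show that the 50 HMs split evenly across the 5 proxies and that the results land in state visible to all proxies.

    /* Illustrative HM sharding sketch -- not Avi SE source code. Matches the
     * example above: 10 virtual services x 5 servers = 50 HMs, and a 6-core SE
     * (1 dispatcher + 5 proxies) gives 10 HMs per proxy. */
    #include <stdio.h>

    #define NB_VS       10
    #define NB_SERVERS  5
    #define NB_PROXIES  5
    #define NB_HM       (NB_VS * NB_SERVERS)   /* one HM per server: 50 */

    static int hm_status[NB_HM];               /* stand-in for the shared HM status */

    /* Each HM is owned by exactly one proxy; simple modulo sharding. */
    static unsigned hm_owner(unsigned hm_id) { return hm_id % NB_PROXIES; }

    int main(void)
    {
        unsigned per_proxy[NB_PROXIES] = { 0 };

        for (unsigned hm = 0; hm < NB_HM; hm++) {
            per_proxy[hm_owner(hm)]++;
            hm_status[hm] = 1;                 /* the owning proxy marks the server up */
        }
        for (unsigned p = 0; p < NB_PROXIES; p++)
            printf("proxy %u runs %u health monitors\n", p, per_proxy[p]);
        return 0;
    }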

A custom external health monitor runs as a separate process within the SE, and its script provides the HM status to the proxy.