Avi Service Engine Design for the Avi Vantage Platform

Logical Design of Avi Service Engine for the Avi Vantage Platform

Avi Service Engines are VMs that provide the data path functionality and serve workloads that require load balancing. Applications that require load balancing are placed in the same virtual infrastructure workload domain. All Avi Service Engines belonging to a workload domain are managed under a dedicated Tenant and scoped to a No Orchestrator Cloud on the Avi Controller. Avi Service Engines are scoped to the Avi Controller cluster that maps to the NSX-T instance managing the workload domains.

The following diagram shows a VCF deployment with multiple workload domains managed by a single NSX-T instance. Each NSX-T instance is mapped to an Avi Controller cluster.

[Figure: VCF deployment with multiple workload domains managed by a single NSX-T instance, mapped to an Avi Controller cluster]

The following table summarizes the design decisions for deploying Avi Service Engine for the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-VI-VC-013 | Distribute Avi Service Engines across as many ESXi hosts in the Workload Domain as possible. | Allows for maximum fault tolerance. | Fault tolerance is reduced if multiple Avi Service Engines are hosted on the same ESXi host. |
| AVI-SE-001 | Multiple Avi Service Engines can be hosted on the same ESXi host if required, provided they are placed in different SE Groups. | Although not recommended, sharing the same ESXi host between Avi Service Engines from different SE Groups still allows for the same level of fault tolerance. | There might be performance degradation and reduced fault tolerance. |
| AVI-CTLR-017 | Create one Tenant, and one Cloud within the Tenant, for each Workload Domain requiring load-balancing services. | Allows for maximum flexibility in terms of workload isolation and life cycle management. | Otherwise, all applications will be shared and scoped to a single Tenant/Cloud. |

Scoping Avi Service Engines to Avi Controller Cluster

There is one Avi Controller cluster deployed for every NSX-T instance managing workload domains. Workloads that require load-balancing services need Avi Service Engines to be deployed, and workloads are typically scoped to a workload domain. Therefore, Avi Service Engines should be scoped to the Avi Controllers that map to the respective workload domains (through the NSX-T instance mapping).

The following table summarizes the design decisions for scoping Avi Service Engines to the Avi Controller cluster:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-002 | Scope Avi Service Engines to the Avi Controller cluster mapped to the workload domain in which they are spawned. | Required for application hosting and LCM to function properly. | Otherwise, applications cannot be created and LCM cannot be performed. |

Dedicated Edge Workload Domain

A Dedicated Edge Workload Domain is a Workload Domain created solely for the use of edge services. If the dedicated edge Workload Domain is large enough to support the capacity needs for hosting all the required Avi Service Engines across all of the Workload Domains managed by its NSX-T instance, then Avi Service Engines can be centrally deployed in this Workload Domain.

Note: Choose between deploying Avi Service Engines per Workload Domain or centrally in the Dedicated Edge Workload Domain based on capacity requirements and projected growth. Avi recommends hosting the Avi Service Engines on a per Workload Domain basis.

The following table summarizes design decisions for deploying Avi Service Engines on a Dedicated Edge Workload Domain for Avi Vantage Platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-018 | Create only one Tenant/Cloud on the Avi Controller instead of the default recommendation of one Tenant/Cloud per Workload Domain. | Allows for centralized placement of Avi Service Engines. | Capacity growth might be a challenge. Might not work in all cases due to scale restrictions. |
| AVI-SE-003 | Choose the correct Tenant/Cloud on the appropriate Avi Controller when creating the Avi Service Engine. | Applications will be properly scoped within the appropriate Tenant/Cloud. | Otherwise, policies and data path entities might not be isolated across workload domains. |
| AVI-SE-004 | Create separate SE Groups to host applications from different Workload Domains. | Allows for application isolation and flexible life cycle management. | Otherwise, there is no application isolation, and life cycle management would be challenging. |

Availability Zones – Stretched Cluster Deployments

A Workload Domain can have ESXi hosts within a cluster stretched between two physical locations. Applications (workloads) requiring load balancing might intend to use high availability between the two physical locations. In such a situation, care must be taken to place Avi Service Engines on ESXi hosts across the two physical sites and to carefully place load-balanced applications on these Avi Service Engines.

The following table summarizes design decisions for placing applications on a stretched cluster on Avi Service Engines for the Avi Vantage Platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-005 | Create all Avi Service Engines in the primary physical location. Note: The primary physical location might vary between SE Groups. | Ensures the primary location is the 'active' location. | Non-ideal placement might occur if Avi Service Engines are spawned in the secondary location. |
| AVI-VI-VC-014 | When using two availability zones, create a virtual machine group for the Avi Service Engine VMs. | Ensures that the Avi Service Engine VMs can be managed as a group and added to VM/Host rules. | You must add virtual machines to the allocated groups manually. |
| AVI-VI-VC-015 | When using two availability zones, create a should-run VM-Host affinity rule to run all Avi Service Engines on the group of hosts in Availability Zone 1 (Primary). | Ensures that all Avi Service Engine VMs are located in the primary Availability Zone. | Otherwise, Avi Service Engine VM placement, and therefore load-balanced application placement, would be non-deterministic. |

Deployment Specification of Avi Service Engine for the Avi Vantage Platform

How to Size Avi Service Engines

Avi Networks publishes minimum and recommended resource requirements for new Avi Service Engines. However, network and application traffic may vary. This section provides some guidance on sizing. Consult your local Avi Sales Engineer for recommendations tailored to your exact requirements.

Avi Service Engines can be configured with a minimum of 1 vCPU core and 1 GB RAM, up to a maximum of 64 vCPU cores and 256 GB RAM. In write access mode, Service Engine resources for newly created SEs can be configured within the SE group properties. When creating an Avi SE in no access mode, Avi SE resources are allocated manually by an administrator via the hypervisor, or are determined by the size of the hardware for bare-metal servers.
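
The following is a minimal sketch of adjusting SE sizing through SE group properties using the open-source Avi Python SDK (avisdk). The controller address, credentials, and object names are placeholders, and the field names (vcpus_per_se, memory_per_se in MB, disk_per_se in GB) reflect the Avi REST object model; verify them against the API documentation for your controller version.

```python
# Sketch: size newly created SEs via SE group properties (write access mode).
# Controller address, credentials, and names below are placeholders.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-ctrl.example.com", "admin", "changeme",
                             tenant="admin", api_version="20.1.1")

seg = api.get_object_by_name("serviceenginegroup", "Default-Group")
seg["vcpus_per_se"] = 2        # vCPU cores per new SE
seg["memory_per_se"] = 4096    # RAM per new SE, in MB
seg["disk_per_se"] = 20        # disk per new SE, in GB
resp = api.put("serviceenginegroup/%s" % seg["uuid"], data=seg)
print(resp.status_code)
```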

CPU

CPU scales very linearly as more cores are added. CPU is a primary factor in SSL handshakes (TPS), throughput, compression, and WAF inspection. For vCenter clouds, the default is 2 CPU cores, not reserved. However, CPU reservation is highly recommended.

Memory

Memory scales near linearly. It is used for concurrent connections and HTTP caching. Doubling the memory will double the ability of the Avi Service Engine to perform these tasks. The default is 2 GB memory, reserved within the hypervisor for VMware clouds.

PPS

For throughput-related metrics, the hypervisor is likely to be the bottleneck. vSphere/ESXi 6.x supports about 1.2M PPS for a virtual machine such as an Avi Service Engine.

RPS

RPS depends on both the CPU and the PPS limits: it is bounded by the performance of the CPU and by the PPS that the SE can push. On VMware, an Avi Service Engine can provide ~40k RPS per core running on Intel v3 servers. The maximum RPS for an Avi Service Engine VM running on ESXi is ~160k.

Disk

Avi Service Engines may store logs locally before they are sent to the Avi Controllers for indexing. Increasing the disk increases log retention on the SE. SSDs are highly recommended, as they can write the log data faster. The recommended minimum storage size is ((2 × RAM) + 5 GB) or 10 GB, whichever is greater. 10 GB is the default for Avi Service Engines deployed in VMware clouds.
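
As a quick worked example of the storage rule above (a plain Python sketch, not Avi tooling):

```python
def recommended_se_disk_gb(ram_gb: int) -> int:
    """Recommended SE storage: ((2 * RAM) + 5 GB) or 10 GB, whichever is greater."""
    return max(10, 2 * ram_gb + 5)

# A default 2 GB SE works out to 10 GB of disk (the VMware-cloud default),
# while an 8 GB SE works out to 21 GB.
assert recommended_se_disk_gb(2) == 10
assert recommended_se_disk_gb(8) == 21
```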

Avi Service Engine Performance Guidelines

The following table provides guidelines for sizing an Avi Service Engine VM with regard to performance:

| Metric | Per-core performance | Max performance on a single Avi Service Engine VM |
| --- | --- | --- |
| L4 connections per second | 40k | 80k |
| HTTP requests per second | 50k | 175k |
| HTTP throughput | 5 Gbps | 7 Gbps |
| SSL throughput | 1 Gbps | 7 Gbps |
| SSL new transactions per second (ECC) | 2,000 | 40k |
| SSL new transactions per second (RSA 2K) | 750 | 40k |

Note: If the performance required by the application exceeds what a single Avi Service Engine VM can service, then active/active scale-out must be used.
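
For example, using the HTTP requests-per-second row from the table above, the active/active scale-out count can be estimated with a rough sketch like the following (illustrative arithmetic only):

```python
import math

def service_engines_needed(required_rps: int, max_rps_per_se: int = 175_000) -> int:
    """SEs needed in an active/active scale-out, per the table above."""
    return max(1, math.ceil(required_rps / max_rps_per_se))

# An application needing 400k HTTP RPS exceeds a single SE VM (~175k RPS),
# so it would be scaled out across 3 active SEs.
print(service_engines_needed(400_000))  # -> 3
```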

The following table summarizes the design decisions for sizing Avi Service Engines:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-006 | If performance requirements are unknown, set the default sizing of Avi Service Engines to: 2 vCPU (with reservation), 4 GB memory (with reservation), 20 GB disk. | An Avi Service Engine of this size can handle traffic for a medium workload application while consuming few licenses. | Avi Service Engine sizing might have to be adjusted based on the required performance. |

Configure Tenants on the Avi Controller – Isolate Workload Domains

Because each NSX-T instance can manage multiple workload domains, each Avi Controller cluster can, by extension, also manage multiple workload domains. Avi Vantage's default recommendation is to abstract each workload domain into a dedicated tenant/cloud, because a workload domain is, in most cases, viewed as a workload-separation construct. A separate tenant per workload domain allows for configuration isolation, and a separate cloud per workload domain allows for data-path-level isolation.

Note: If multiple workload domains managed by the same NSX-T instance are to be consumed as a single entity for workload placement, then the collection of such workload domains can be abstracted as a single tenant and cloud on the Avi Controller.

The following table summarizes the design decisions for creating tenants for workload domains on the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-019 | Create one Tenant per workload domain managed by the NSX-T instance. | Provides configuration isolation between workload domains. | Otherwise, there is no configuration isolation between workload domains. |
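
A minimal sketch of this decision using the Avi Python SDK (avisdk); the controller address, credentials, and workload domain names are placeholders:

```python
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-ctrl.example.com", "admin", "changeme",
                             api_version="20.1.1")

# One tenant per workload domain managed by this controller's NSX-T instance.
for wld in ("wld-01", "wld-02"):
    resp = api.post("tenant", data={"name": wld})
    print(wld, resp.status_code)
```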

Configuring Clouds on the Avi Controller - No Orchestrator

Clouds are containers for the environments in which Avi Vantage provides load-balancing services. Avi Vantage predominantly supports the following two types of clouds:

  • No Access Clouds — This type of cloud object provides no ecosystem life cycle management for the Avi Service Engines created in its scope. Avi Service Engine VM creation, deletion, network placement or assignment, and so on are done by the administrator.

  • Write Access Clouds — This type of cloud object provides full ecosystem life cycle management for applications and Avi Service Engines in its scope. Life cycle management includes operations such as Service Engine image upload, creation, deletion, network placement and programming, IP address assignment, and so on.

For the purposes of this reference architecture, only Cloud deployments of type No Access are supported.

During initial setup of the Avi Controller cluster, a No Access type default cloud, named Default-Cloud, is created.

A cloud can contain multiple Service Engine Groups, which are essentially Avi Service Engine containers, and a load-balanced application is deployed in the scope of a cloud. Therefore, any given load-balanced application, as well as any Avi Service Engine, is scoped to a single cloud on the Avi Controller.

Note: This section will only consider workload domains managed by a single Avi Controller cluster. The same design decisions will apply to each Avi Controller cluster within the VCF instance.
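
A minimal sketch of creating a No Orchestrator cloud with the Avi Python SDK (avisdk), assuming the tenant from AVI-CTLR-019 already exists. Names and credentials are placeholders, and CLOUD_NONE is the vtype the Avi API uses for No Orchestrator clouds (verify against your controller version):

```python
from avi.sdk.avi_api import ApiSession

# Scope the session to the workload domain's tenant (see AVI-CTLR-019).
api = ApiSession.get_session("avi-ctrl.example.com", "admin", "changeme",
                             tenant="wld-01", api_version="20.1.1")

resp = api.post("cloud", data={
    "name": "wld-01-cloud",
    "vtype": "CLOUD_NONE",   # No Orchestrator / No Access cloud type
    "dhcp_enabled": True,    # see AVI-SE-012: DHCP for SE data networks
})
print(resp.status_code, resp.json().get("uuid"))
```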

The following table summarizes the design decisions for creating a cloud on the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-020 | Create one Avi Cloud object per Workload Domain that requires load-balancing services. Note: The Cloud might need to be created within the tenant that maps to the workload domain. | Allows for maximum flexibility, control, and isolation in terms of application deployment. | If applications are provisioned across multiple workload domains managed by a single NSX-T instance, a single Cloud can be created to host load-balancing services across these workload domains. Note: This is supported on the Avi Vantage platform based on line-of-business requirements. |
| AVI-CTLR-021 | Set up a No Orchestrator Cloud. | For VCF 3.9.1, a No Orchestrator Cloud has to be set up to onboard Avi Service Engines. | Avi Controller versions earlier than 20.1.1 do not support a write-access NSX-T Cloud. |
| AVI-VI-VC-016 | Create one Content Library on each of the Workload Domains to store Avi Service Engine OVAs. | Deploying OVAs from the Content Library is operationally easy. | Otherwise, every time a new Avi Service Engine needs to be created, the Avi Service Engine OVA must be copied to vCenter from the administrator's workstation. |

Configuring Avi Service Engine Groups

Avi Service Engines (SEs) are created within a group, called a Service Engine group (SE group), which contains the definition of how the Avi Service Engines should be sized, placed, and made highly available. Each cloud has at least one SE group. The options within an SE group may vary based on the type of cloud within which it exists and its settings, such as no access versus write access mode. An Avi Service Engine may only exist within one SE group. Each SE group acts as an isolation domain. Service Engine resources within an SE group may be moved around to accommodate virtual services, but Service Engine resources are never shared between SE groups.

Depending on the setting, a change made to an SE group:

  • May be applied immediately,
  • May be applied only to Avi Service Engines created after the change is made, or
  • May require existing Avi Service Engines to be rebooted before the change can take effect

Multiple SE groups may exist within a cloud. A newly created virtual service is placed on the 'default' SE group, though this can be changed via the Virtual Service > Advanced page when creating a virtual service through the advanced wizard. SE groups provide data plane isolation. Therefore, moving a virtual service from one SE group to another is disruptive to existing connections through the virtual service.
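
A minimal sketch of moving a virtual service to a different SE group via the Avi Python SDK (avisdk); object names are placeholders, and, as noted above, the move disrupts existing connections:

```python
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-ctrl.example.com", "admin", "changeme",
                             tenant="wld-01", api_version="20.1.1")

vs = api.get_object_by_name("virtualservice", "web-vs-01")
seg = api.get_object_by_name("serviceenginegroup", "prod-seg")
vs["se_group_ref"] = seg["url"]   # re-point the VS at the prod SE group
# Disruptive: SE groups are data plane isolation domains, so the VS is
# torn down on the old group and re-placed on the new one.
resp = api.put("virtualservice/%s" % vs["uuid"], data=vs)
print(resp.status_code)
```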

SE Group Availability Modes

The high availability mode of the SE group controls the behavior of the SE group in the event of an SE failure. It also controls how the load is scaled across SEs. Selecting a particular HA mode will change the settings and options that are exposed in the UI. These modes span a spectrum, from use of the fewest Service Engine resources on one end to providing the best high availability on the other.

  • Elastic HA Active/Active Mode — This HA mode distributes virtual services across a minimum of two SEs.
  • Elastic HA N+M Mode — This default mode permits up to N active SEs to deliver virtual services, with the capacity equivalent of M SEs within the group ready to absorb SE failures.
  • Legacy HA Active/Standby Mode — This mode is primarily intended to mimic a legacy appliance load balancer for easy migration to Avi Vantage. Only two Service Engines may be created. For every virtual service active on one, there is a standby on the other, configured and ready to take over in the event of a failure of the active SE. There is no Service Engine scale out in this HA mode.

Elastic HA Active/Active is the recommended availability mode. It provides resiliency to infrastructure changes, maintains application availability, and provides increased application capacity, while preserving the flexibility of the Avi Vantage deployment to scale.
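
For reference, a sketch of how these modes map to the SE group's ha_mode field in the Avi REST object model (enum names should be verified against your controller version's API documentation):

```python
# Illustrative mapping of availability modes to the SE group ha_mode field.
HA_MODES = {
    "elastic_active_active": "HA_MODE_SHARED_PAIR",          # VS on >= 2 SEs
    "elastic_n_plus_m":      "HA_MODE_SHARED",                # the default
    "legacy_active_standby": "HA_MODE_LEGACY_ACTIVE_STANDBY", # 2 SEs, no scale-out
}
```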

High Availability and Placement Settings

  • Placement across Avi Service Engines — When placement is compact, Avi Vantage prefers to spin up and fill up the minimum number of Avi Service Engines and attempts to place virtual services on Avi Service Engines which are already running. When the placement is distributed, Avi Vantage maximizes virtual service performance by avoiding placements on existing Avi Service Engines. Instead, it places virtual services on newly spun-up Avi Service Engines, up to the maximum number of Avi Service Engines.

  • Virtual Services per Service Engine — This parameter establishes the maximum number of virtual services the Avi Controller cluster can place on any one of the Avi Service Engines in the SE group.

  • SE Self-Election — Checking this option enables Avi Service Engines in the SE group to elect a primary Avi Service Engine amongst themselves in the absence of connectivity to a Controller. This ensures Avi Service Engine high availability in handling client traffic even in headless mode.

Service Engine Capacity and Limit Settings

  • Max Number of Service Engines — (Default=10, Range=0-1000) Defines the maximum number of Avi Service Engines that may be created within an SE group. This number, combined with the Virtual Services per Service Engine setting, dictates the maximum number of virtual services that can be created within an SE group. If this limit is reached, new virtual services might not be deployable and will show a gray, un-deployed status. This setting can be useful to prevent Avi Vantage from consuming too many virtual machines.

  • Host Geo Profile — (Default is OFF) Enabling this provides extra configuration memory to support a large geo DB configuration.

  • Connection Memory Percentage — The percentage of memory reserved for maintaining connection state. It comes at the expense of memory used for the HTTP in-memory cache. Sliding the bar adjusts the percentage devoted to connection state between a minimum of 10% and a maximum of 90%.

Advanced HA and Placement Settings

  • Buffer Service Engines — This is excess capacity provisioned for HA failover. In elastic HA N+M mode, this capacity is expressed as M, an integer number of buffer Avi Service Engines. It translates into a count of potential virtual service placements: to calculate that count, Avi Vantage multiplies M by the maximum number of virtual services per Avi Service Engine.

  • Scale Per Virtual Service — A pair of integers determines the minimum and maximum number of active Avi Service Engines onto which a single virtual service may be placed. With native Avi Service Engine scaling, the greatest value that can be entered as a maximum is 4; with BGP-based Avi Service Engine scaling, the limit is much higher, governed by the ECMP support on the upstream router.

  • Dedicated dispatcher CPU — Selecting this option dedicates the core that handles packet receive/transmit from/to the data network to dispatching alone.
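
Pulling these settings together, the following is an illustrative sketch of creating an SE group with the Avi Python SDK (avisdk) that reflects the settings above and several of the design decisions that follow; all names are placeholders, and field names should be verified against your controller version's API documentation:

```python
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-ctrl.example.com", "admin", "changeme",
                             tenant="wld-01", api_version="20.1.1")
cloud = api.get_object_by_name("cloud", "wld-01-cloud")

seg = {
    "name": "prod-seg",
    "cloud_ref": cloud["url"],
    "ha_mode": "HA_MODE_SHARED_PAIR",      # elastic active/active (AVI-SE-007)
    "algo": "PLACEMENT_ALGO_DISTRIBUTED",  # distributed placement (AVI-SE-011)
    "max_se": 10,                # capacity ceiling for the group
    "max_vs_per_se": 10,         # VS placements per SE
    "buffer_se": 1,              # M: 1 x max_vs_per_se spare placements
    "min_scaleout_per_vs": 2,    # scale per virtual service (minimum)
    "max_scaleout_per_vs": 4,    # native SE scaling limit (maximum)
    "self_se_election": True,    # headless HA without controller connectivity
    "dedicated_dispatcher_core": True,  # AVI-SE-009, for SEs with >= 4 vCPUs
}
resp = api.post("serviceenginegroup", data=seg)
print(resp.status_code)
```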

The following table summarizes the design decisions for Service Engine group design for Avi Service Engines:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-007 | Configure each Workload Domain's Cloud Service Engine group for active/active HA mode. Note: If features such as preserve-client-ip are required by applications, those applications need to be placed on an SE group with HA mode set to legacy active/standby. | Active/active HA provides the best load-balanced application resiliency by hosting each virtual service on two Service Engines by default. | Defines the HA model for load-balanced applications that will be applied to the Service Engines. |
| AVI-SE-008 | Create multiple SE groups per cloud as required. Note: Criteria for grouping applications into an SE group could include: SE group(s) hosting applications for a line of business; SE group(s) hosting applications in DMZ vs. non-DMZ; SE group(s) hosting applications in Prod vs. non-Prod; separate SE group(s) for scale and performance reasons. | Isolating applications into different SE groups allows for better capacity planning and the flexibility of separate life cycle management. | Otherwise, all applications will be placed on the same SE group, sharing Service Engine resources, and all Service Engines would need to be upgraded together because they are in the same SE group. |
| AVI-SE-009 | Enable Dedicated dispatcher CPU on SE groups that contain Avi Service Engine VMs of 4 or more vCPUs. Note: This setting should also be enabled on SE groups servicing applications with high network requirements. | Enables a dedicated core for packet processing, providing a high packet pipeline on the Avi Service Engine VMs. Note: By default, the packet-processing core also processes load-balancing flows. | Network performance of Avi Service Engines without this setting would be sub-par compared to those that have it enabled. |
| AVI-SE-010 | Use Default-Group only as a placeholder for new Avi Service Engines. Do not use this SE group to host applications. | Newly created Avi Service Engines are placed in the Default-Group SE group; they can then be moved to the appropriate SE groups. | Otherwise, applications might get provisioned on unintended Avi Service Engines, which might lead to more operational overhead. |
| AVI-SE-011 | Set the 'Placement across Avi Service Engines' setting to 'distributed'. | Allows for maximum fault tolerance and even utilization of capacity. | Otherwise, capacity utilization would be uneven. |

Creating Avi Service Engine - No Orchestrator Cloud

Because the Cloud configured on the Avi Controller is a No Orchestrator cloud, Avi Service Engines are created by vCenter administrators. In a No Orchestrator cloud:

  • The Avi Controller does not access vCenter and does not automatically deploy Avi Service Engine VMs or connect them to networks.

  • Avi Service Engine deployment and network placement are performed by the Avi administrators and vCenter administrators.
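
As an illustrative sketch only, an administrator might script the SE OVA deployment with ovftool (here wrapped in Python). All paths, names, and networks are placeholders; the OVF property names (AVICNTRL, AVICNTRL_AUTHTOKEN, avi.mgmt-ip.SE, and so on) are taken from Avi's SE deployment documentation and should be verified against the SE OVA shipped with your controller version:

```python
import subprocess

# Sketch of a no-orchestrator SE deployment with ovftool. Every name, path,
# network, and IP below is a placeholder; the auth token is generated from
# the Avi Controller for the target cloud.
cmd = [
    "ovftool", "--acceptAllEulas", "--noSSLVerify", "--powerOn",
    "--name=avise-wld01-001",                   # AVI-VI-VC-021 naming prefix
    "--datastore=vsanDatastore",
    "--net:Management=se-mgmt-pg",              # Network Adapter 1: management
    "--net:Data Network 1=se-data-pg",          # Network Adapter 2: data
    "--prop:AVICNTRL=avi-ctrl.example.com",     # controller cluster IP/FQDN
    "--prop:AVICNTRL_AUTHTOKEN=<token-from-controller>",
    "--prop:avi.mgmt-ip.SE=10.0.10.21",         # static mgmt IP (AVI-VI-004)
    "--prop:avi.mgmt-mask.SE=255.255.255.0",
    "--prop:avi.default-gw.SE=10.0.10.1",
    "se.ova",
    "vi://administrator%40vsphere.local@vcenter.example.com/DC/host/Cluster",
]
subprocess.run(cmd, check=True)
```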

The following table summarizes the design decisions for Avi Service Engine:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-VI-VC-017 | Do not over-provision vCPUs in vCenter on the ESXi hosts hosting Avi Service Engine VMs. Note: This is not required if vCPU reservation is done. | Avi Service Engines are a critical infrastructure component providing load-balancing services to mission-critical applications. | Over-provisioning might induce performance penalties and might impact application SLAs. |
| AVI-VI-VC-018 | Reserve vCPUs and memory through vCenter for the Avi Service Engine VMs. | Avi Service Engines are a critical infrastructure component providing load-balancing services to mission-critical applications. Reserving vCPU and memory resolves critical resource contention. | Otherwise, performance penalties might be induced and application SLAs might be impacted. |
| AVI-VI-VC-019 | When deploying the Avi Service Engine VMs, make sure all vCPUs are from the same socket. | vCPUs of the Avi Service Engine VMs allocated on the same socket provide better performance. | Otherwise, Avi Service Engine performance might be slightly lower. |
| AVI-VI-VC-020 | Choose 'vmxnet3' as the driver for network adapters. | The Avi Service Engine data path uses DPDK on network interfaces that use the 'vmxnet3' driver, which greatly improves network packet performance. | Otherwise, data-path network packet performance will be limited, impacting application performance. |
| AVI-VI-VC-021 | vCenter administrators should create Avi Service Engine VMs with a consistent name prefix, for instance 'avise-xxxxx'. Note: xxxxx can encode tenant/cloud/SE-group identities. | Allows for grouping and filtering. For instance, in an NSX environment, a simple NSGroup container can be created that holds the Avi Service Engine VMs. | Otherwise, it would be hard to group or filter operations that need to be done on Avi Service Engine VMs. |
| AVI-SE-012 | If DHCP is enabled on the Port Groups that will be used for data networks, enable DHCP for data networks in the Cloud. | Having DHCP enabled for data networks keeps Avi Service Engine configuration simple. | Otherwise, operators have to program static IPs for the data networks once the Avi Service Engine is created. |

Life Cycle Management for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes Service Engine code upgrades. For more information on how to upgrade the Service Engine software, refer to Life Cycle Management for Avi Controller for the Avi Vantage Platform.

Note: Avi Service Engines will be managed by the Avi Controller cluster which manages the workload domains in which the Avi Service Engines are placed.

Logging Design for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes generation and retrieval of events. For more information on Service Engine level events, refer to Alerting and Events Design for Avi Controller for the Avi Vantage Platform.

Monitoring and Alerting Design for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes the generation and retrieval of events. For sending Service Engine-level events, refer to Monitoring and Alerting Design for Avi Controller for the Avi Vantage Platform.

Data Protection and Back Up for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. The Avi Controller (control plane) owns, manages, and determines the configurations that ultimately get applied to specific Avi Service Engines (data plane). As a result of this architecture, the Avi Service Engines can be viewed as ephemeral resources. When an Avi Service Engine reboots, it initially comes up with only its management network configured. The Controller then determines the correct configuration to apply to the Service Engine. There is no need (or even the capability) to back up an Avi Service Engine configuration. All configuration management is done via the Avi Controllers.

To back up the Avi Vantage configuration, refer to Data Protection and Back Up for Avi Controller for the Avi Vantage Platform.

Networking Design of Avi Service Engine for the Avi Vantage Platform

The Avi Service Engines require a minimum of two interfaces: one for management and one for data. Internally within the Avi Service Engines, management traffic is isolated from data traffic. For additional information regarding port/protocol requirements, refer to Networking Design for the Avi Vantage Platform.

Management Interface

The management interface is used for Avi Service Engine to Avi Controller communication and is the source for streaming virtual service client logs to an external server. The Avi Service Engine management network need not be the same as the Avi Controllers'. Layer 2 adjacency is not required; the only requirement is Layer 3 reachability between the Avi Service Engines and the Avi Controller cluster.

Data Interfaces

The Avi Service Engine data NICs are used for the load-balanced traffic. The data NIC responds to ARP for the load-balanced VIPs and also serves as the source for health monitoring the backend pool servers. The Avi Service Engine data NIC needs to be in the same network as the subnet used for the VIPs. Additional Avi Service Engine data NICs can be configured on additional networks, up to a maximum of 9 data NICs in total.

The following table summarizes design decisions for the Networking Design of Avi Service Engines for the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-013 | Keep latency between Avi Controllers and Avi Service Engines below 75 ms. | Required for correct operation of the Avi Service Engines. | Higher latency may lead to issues with heartbeats and data synchronization between the Avi Controller and Avi Service Engines. |
| AVI-VI-VC-022 | Assign the management network to Avi Service Engine Network Adapter 1. | The Avi Service Engine requires management connectivity to/from the Avi Controllers. | Otherwise, Avi Service Engines will not connect to the Avi Controller. |
| AVI-VI-004 | Allocate a static IP address to be used for management by the Avi Service Engine. | Ensures stability. | Requires precise IP management. |
| AVI-VI-VC-023 | Assign the data network to Avi Service Engine Network Adapter 2. | The Avi Service Engine requires data connectivity for hosting virtual services and for health monitoring the backend servers. | The assigned data network needs to be in the same Layer 2 domain as the IP space of the load-balanced VIPs that will be used. |
| AVI-VI-005 | Allocate a static IP address to be used for data by the Avi Service Engine. | Ensures stability. | Requires precise IP management. |

The following diagram shows Avi Service Engines connected to management and data networks:

[Figure: Avi Service Engines connected to the management and data networks]

Information Security and Access Design of Avi Service Engine for the Avi Vantage Platform

Identity Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes authentication and authorization. To view how this is set up, refer to Identity Management Design of Avi Controller for the Avi Vantage Platform.

Service Accounts Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes service account configuration. For more information, refer to Service Accounts Design of Avi Controller for the Avi Vantage Platform.

Password Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes password management. Whenever a password is changed on the Avi Controller, it is synced to all of the Avi Service Engines.

For more information, refer to Password Management Design of Avi Controller for the Avi Vantage Platform.

Certificate Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes certificate management.

For more information, refer to Certificate Management Design of Avi Controller for the Avi Vantage Platform.