Avi Service Engine Design for the Avi Vantage Platform

Logical Design of Avi Service Engine for the Avi Vantage Platform

Avi Service Engines are VMs that provide the data path functionality and service workloads that require load balancing. Applications that require load balancing are placed in the same virtual infrastructure workload domain. All Avi Service Engines belonging to a workload domain are managed by a unique NSX-T Cloud on the Avi Controller, and are scoped to the Avi Controller cluster that maps to the NSX-T instance managing the workload domains.

Avi Vantage’s NSX-T Cloud Connector is an abstraction for an NSX-T Transport Zone. Each Avi NSX-T Cloud Connector provides load balancing services for all Workload Domains, that is, the vCenter(s) that share an NSX-T Transport Zone. A new Avi NSX-T Cloud Connector needs to be created for the first Workload Domain that gets added to the NSX-T domain.

Note:

  1. An Avi Controller can host multiple NSX-T Cloud Connectors.
  2. Multiple NSX-T Cloud Connectors can point to the same NSX-T Manager, but each requires a unique Transport Zone.
  3. Each Avi NSX-T Cloud Connector can manage multiple vCenters.
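
To illustrate this mapping, the following sketch lists the clouds configured on an Avi Controller over its REST API so that the one-connector-per-Transport-Zone relationship can be verified. It assumes basic authentication is enabled on the Controller (otherwise use the session-based login or the Avi Python SDK); the controller address and credentials are placeholders, and the Transport Zone details for an NSX-T cloud live under its nsxt_configuration section.

```python
# Sketch: list the NSX-T Cloud Connectors configured on an Avi Controller.
# Assumes basic authentication is enabled on the Controller; host and
# credentials below are placeholders.
import requests

CONTROLLER = "avi-controller.example.com"
AUTH = ("admin", "password")

resp = requests.get(f"https://{CONTROLLER}/api/cloud", auth=AUTH, verify=False)
resp.raise_for_status()

for cloud in resp.json().get("results", []):
    # vtype is CLOUD_NSXT for NSX-T Cloud Connectors; the associated overlay
    # Transport Zone, Tier-1 router, and segments are described under the
    # cloud's nsxt_configuration section.
    print(cloud["name"], cloud.get("vtype"), "nsxt_configuration" in cloud)
```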

The following table summarizes the design decisions for deploying Avi Service Engine for the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-018 | Create an NSX-T Cloud for each unique Overlay Transport Zone managed by NSX-T that contains Transport Nodes | If two or more Workload Domains share a Transport Zone, a single Avi NSX-T Cloud Connector provides load balancing for all of them. If two or more Workload Domains use unique Transport Zones, a unique Avi NSX-T Cloud Connector provides load balancing for each of them. | NSX-T Cloud cannot be used to deploy Avi Service Engines |

The following diagram shows a VCF deployment with multiple workload domains, each managed by a unique NSX-T instance. Each NSX-T instance is mapped to its own Avi Controller cluster:

img2

The following diagram shows a VCF deployment with multiple workload domains managed by a shared NSX-T instance. The NSX-T instance is mapped to a shared Avi Controller cluster:

img3

Dedicated Edge Workload Domain

A Dedicated Edge Workload Domain is a Workload Domain created solely for the use of edge services. If the dedicated edge Workload Domain is large enough to support the capacity needs for hosting all the required Avi Service Engines across all of the Workload Domains managed by its NSX-T instance, then Avi Service Engines can be centrally deployed in this Workload Domain.

Note: Choose whether to deploy Avi Service Engines per Workload Domain or centrally in the Dedicated Edge Workload Domain based on capacity requirements and projected growth. Avi recommends hosting the Avi Service Engines on a per-Workload-Domain basis.

The following table summarizes design decisions for deploying Avi Service Engines on a Dedicated Edge Workload Domain for Avi Vantage Platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-019 | Create only one NSX-T Cloud on the Avi Controller instead of the default recommendation of one NSX-T Cloud per Workload Domain | Allows for centralized placement of Avi Service Engines | Capacity growth might be a challenge. Might not work in all cases due to scale restrictions. |
| AVI-SE-001 | Choose the correct NSX-T Cloud on the appropriate Avi Controller when creating Virtual Services | Applications are properly scoped to the appropriate NSX-T Cloud | Policies and data path entities might not be isolated across workload domains |
| AVI-SE-002 | Create separate SE Groups to host Virtual Services from different Workload Domains | Allows for application isolation. Allows for flexible life cycle management. | No application isolation. Life cycle management would be challenging. |

Availability Zones – Stretched Cluster Deployments

A Workload Domain could have ESXi hosts within a cluster stretched between two physical locations. Applications (workloads) requiring load balancing might intend to use high availability between the two physical locations. In such a situation, care must be taken to place Avi Service Engines on ESXi hosts across the two physical sites and to carefully place load balanced applications on these Avi Service Engines.

The following table summarizes design decisions for placing applications on a stretched cluster on Avi Service Engines for the Avi Vantage Platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-VI-VC-014 | When using two availability zones, create two virtual machine groups for the Avi Service Engine VMs, one for each availability zone | Ensures that the Avi Service Engine VMs can be managed as a group and added to VM/Host rules | You must manage each Avi Service Engine VM individually |
| AVI-VI-VC-015 | When using two availability zones, create a should-run VM-Host affinity rule to run all Avi Service Engines on the group of hosts in Availability Zone 1 (Primary) | Ensures that all Avi Service Engine VMs are located in the primary Availability Zone | Avi Service Engine VM placement would be non-deterministic; therefore, load balanced application placement would also be non-deterministic |

Deployment Specification of Avi Service Engine for the Avi Vantage Platform

How to Size Avi Service Engines

Avi Networks publishes minimum and recommended resource requirements for new Avi Service Engines. However, network and application traffic may vary. This section provides some guidance on sizing. You can consult with your local Avi Sales Engineer for recommendations tailored to your exact requirements.

Avi Service Engines can be configured with a minimum of 1 vCPU core and 1 GB RAM up to a maximum of 64 vCPU cores and 256 GB RAM. In write access mode, Service Engine resources for newly created SEs can be configured within the SE group properties.

CPU

CPU capacity scales linearly as more cores are added. CPU is a primary factor in SSL handshakes (TPS), throughput, compression, and WAF inspection. For vCenter clouds, the default is 2 CPU cores, not reserved. However, CPU reservation is highly recommended.

Memory

Memory scales nearly linearly. It is used for concurrent connections and HTTP caching: doubling the memory doubles the ability of the Avi Service Engine to perform these tasks. The default is 2 GB of memory, reserved within the hypervisor for VMware clouds.

PPS

For throughput-related metrics, the hypervisor is likely to be the bottleneck. vSphere/ESXi 6.x supports about 1.2M PPS for a virtual machine such as an Avi Service Engine.

RPS

RPS depends on the CPU and the PPS limits; it reflects both the performance of the CPU and the PPS that the SE can push. On VMware, an Avi Service Engine can provide ~40k RPS per core running on Intel v3 servers. The maximum RPS on an Avi Service Engine VM running on ESXi would be ~160k.

Disk

Avi Service Engines may store logs locally before they are sent to the Avi Controllers for indexing. Increasing the disk will increase the log retention on the SE. SSDs are highly recommended, as they can write the log data faster. The recommended minimum size for storage is ((2 * RAM) + 5 GB) or 15 GB, whichever is greater. 15 GB is the default for Avi Service Engines deployed in VMware clouds.
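
As a quick illustration of the storage rule above, a minimal helper that applies the ((2 * RAM) + 5 GB, minimum 15 GB) formula:

```python
def recommended_se_disk_gb(se_ram_gb: int) -> int:
    """Recommended minimum SE disk: (2 * RAM) + 5 GB, or 15 GB, whichever is greater."""
    return max(2 * se_ram_gb + 5, 15)

print(recommended_se_disk_gb(2))   # 15 GB (the default for VMware clouds)
print(recommended_se_disk_gb(8))   # 21 GB
```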

Avi Service Engine Performance Guidelines

The following table provides guidelines to size an Avi Service Engine VM with regard to performance:

| Metric | Per-core performance | Max performance on a single Avi Service Engine VM |
| --- | --- | --- |
| L4 connections per second | 40k | 80k |
| HTTP requests per second | 50k | 175k |
| HTTP throughput | 5 Gbps | 7 Gbps |
| SSL throughput | 1 Gbps | 7 Gbps |
| SSL new transactions per second (ECC) | 2,000 | 40k |
| SSL new transactions per second (RSA 2K) | 750 | 40k |

Note: If the performance required by the application is more than what a single Avi Service Engine VM can service, then active/active scale-out would need to be used.
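
To make the guidelines concrete, the following back-of-the-envelope helper estimates the vCPU count and the minimum number of Service Engines for a set of performance requirements, using only the per-core and per-SE guideline figures from the table above. It is a rough sketch; actual results depend on the traffic profile, hypervisor PPS limits, and features such as WAF or compression.

```python
import math

# Guideline figures from the performance table above (per core / per SE VM).
PER_CORE = {"l4_cps": 40_000, "http_rps": 50_000, "http_gbps": 5, "ssl_gbps": 1,
            "ssl_tps_ecc": 2_000, "ssl_tps_rsa2k": 750}
PER_SE_MAX = {"l4_cps": 80_000, "http_rps": 175_000, "http_gbps": 7, "ssl_gbps": 7,
              "ssl_tps_ecc": 40_000, "ssl_tps_rsa2k": 40_000}

def estimate_se_footprint(requirements: dict) -> dict:
    """Rough total-vCPU and SE-count estimate for a set of performance requirements."""
    # Total cores needed, driven by the most demanding metric.
    cores = max(math.ceil(req / PER_CORE[m]) for m, req in requirements.items())
    # If any requirement exceeds a single SE VM, active/active scale-out is needed.
    ses = max(math.ceil(req / PER_SE_MAX[m]) for m, req in requirements.items())
    return {"total_vcpus": cores, "min_service_engines": max(ses, 1)}

print(estimate_se_footprint({"http_rps": 300_000, "ssl_tps_ecc": 10_000}))
# -> {'total_vcpus': 6, 'min_service_engines': 2}
```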

The following table summarizes the design decisions for sizing Avi Service Engines:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-003 | If performance requirements are unknown, set the default sizing of Avi Service Engines to 2 vCPUs (with reservation), 4 GB memory (with reservation), and 20 GB disk | This size of Avi Service Engine can handle traffic for a medium workload application while consuming few licenses | Avi Service Engine sizing might have to be adjusted based on the required performance |

Configure Tenants on the Avi Controller – Isolate Workload Domains

As each NSX-T instance can manage multiple workload domains, each Avi Controller cluster can, by extension, also manage multiple workload domains. Avi Vantage's default recommendation is to abstract each workload domain into a dedicated Tenant serviced by the NSX-T Cloud created in the Provider (admin) context, because a workload domain is viewed as a workload separation construct in most cases. A separate Tenant per workload domain allows for configuration isolation, and SE Groups can be leveraged to provide data path level isolation.

Note: If multiple workload domains managed by the same NSX-T instance are to be consumed as a single entity for workload placement, then the collection of such workload domains can be abstracted as a single Tenant on the Avi Controller.
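
As a hedged sketch of this approach, the snippet below creates one Tenant per workload domain through the Avi REST API from the Provider (admin) context. The controller address, credentials, API version, and workload domain names are placeholders, and it assumes basic authentication is enabled on the Controller.

```python
# Sketch: create one Avi Tenant per workload domain (placeholder names/credentials).
import requests

CONTROLLER = "avi-controller.example.com"
AUTH = ("admin", "password")                      # provider (admin) credentials
HEADERS = {"X-Avi-Version": "20.1.1"}             # match your Controller version

for wld in ["wld01", "wld02"]:                    # workload domains needing LB services
    resp = requests.post(f"https://{CONTROLLER}/api/tenant",
                         json={"name": f"tenant-{wld}"},
                         auth=AUTH, headers=HEADERS, verify=False)
    resp.raise_for_status()
    print("created tenant", resp.json()["name"])
```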

The following table summarizes the design decisions for creating tenants for workload domains on the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-020 | Create one Tenant in the provider context per workload domain managed by the NSX-T instance | Provides configuration isolation between workload domains | No configuration isolation between workload domains. This is the mode used when the Avi Controller is set up in the Basic Edition. |

Configuring Clouds on the Avi Controller - NSX-T

Clouds are containers for the environments in which Avi Vantage provides load-balanced services. Avi Vantage predominantly supports the following two types of clouds:

  • No Access Clouds — This type of cloud object provides no ecosystem life cycle management for the Avi Service Engines created in the scope of the cloud. Avi Service Engine VM creation, deletion, network placement and assignment, and so on, are done by the admin.

  • Write Access Clouds — This type of cloud object provides full ecosystem life cycle management for applications and Avi Service Engines in the scope of the cloud. Life cycle management includes operations such as Service Engine image upload, creation, deletion, network placement and programming, IP address assignment, and so on.

For the purposes of this reference architecture, only Cloud deployments of type ‘NSX-T’, which is a write access Cloud, are supported.

A Cloud can contain multiple Service Engine Groups, which are essentially Avi Service Engine containers, and a load-balanced application is deployed in a Cloud scoped to a Service Engine Group. Therefore, any given load-balanced application, as well as any Avi Service Engine, is scoped to a unique Cloud on the Avi Controller.

Note: This section will only consider workload domains managed by a single Avi Controller cluster. The same design decisions will apply to each Avi Controller cluster within the VCF instance.

The following table summarizes the design decisions for creating a cloud on the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-CTLR-021 | Create one Avi NSX-T Cloud object per Workload Domain that requires load balancing services. Note: The Cloud is created by the Provider in the ‘admin’ Tenant. | Allows for maximum flexibility, control, and isolation in terms of application deployment | If applications are provisioned across multiple workload domains managed by a single NSX-T instance, a single Cloud can be created to host load balancing services across these workload domains. Note: This is supported on the Avi Vantage platform based on line-of-business requirements. |
| AVI-CTLR-022 | Set up an NSX-T Cloud | For VCF 4.1, an NSX-T Cloud has to be set up to onboard Avi Service Engines | Would lose the fully automated application life cycle management that the NSX-T Cloud delivers. Although a ‘No Orchestrator’ Cloud is supported by Avi Vantage, it is not the recommended option. |
| AVI-VI-VC-016 | Create one Content Library on each of the Workload Domains to store Avi Service Engine OVAs | This is a requirement for the NSX-T Cloud | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-VI-SDN-003 | Provide a Tier-1 Logical Router | This is a requirement for the NSX-T Cloud. Avi Service Engines are placed on Overlay Segments created on this Tier-1 Logical Router. | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-VI-SDN-004 | Provide an Overlay Logical Segment connected to a Tier-1 Logical Router for Avi Service Engine management | This is a requirement for the NSX-T Cloud. This network is used for Avi Controller to Avi Service Engine connectivity. | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-VI-SDN-005 | Provide one or more Overlay Logical Segments connected to Tier-1 Logical Router(s) for the Avi Service Engine data path | This is a requirement for the NSX-T Cloud. These networks are used by Avi Service Engines for data plane connectivity and for servicing application traffic. | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-CTLR-023 | Provide an object name prefix when creating the NSX-T Cloud Connector on the Avi Controller | This is a requirement for the NSX-T Cloud. The prefix is used to uniquely identify resources created by the NSX-T Cloud Connector on NSX-T Manager and vCenter. | NSX-T Cloud cannot be used to deploy Avi Service Engines |
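
The prerequisites listed above (Content Library, Tier-1 Logical Router, management and data overlay segments, object name prefix) all feed into the NSX-T Cloud object on the Avi Controller. The following is an illustrative sketch only: the top-level field names are assumptions based on the Avi Cloud object, the nsxt_configuration schema (NSX-T Manager address, credentials reference, Transport Zone, Tier-1 and segment references) varies by Avi version and should be taken from your Controller's API documentation, and all names and credentials are placeholders.

```python
# Sketch: create an NSX-T Cloud object on the Avi Controller (admin tenant).
# Field names are assumptions; verify against your Controller's API schema.
import requests

CONTROLLER = "avi-controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "20.1.1", "X-Avi-Tenant": "admin"}

cloud = {
    "name": "nsxt-cloud-wld01",        # one Cloud per Workload Domain (AVI-CTLR-021)
    "vtype": "CLOUD_NSXT",             # write access, NSX-T orchestrated (AVI-CTLR-022)
    "obj_name_prefix": "wld01",        # object name prefix (AVI-CTLR-023)
    # "nsxt_configuration": {...}      # NSX-T Manager address and credentials ref,
    #                                  # overlay Transport Zone, Tier-1 router, and
    #                                  # management/data segment refs (AVI-VI-SDN-003/004/005)
}

resp = requests.post(f"https://{CONTROLLER}/api/cloud", json=cloud,
                     auth=AUTH, headers=HEADERS, verify=False)
resp.raise_for_status()
print("cloud uuid:", resp.json()["uuid"])
```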

Configuring Avi Service Engine Groups

Avi Service Engines (SEs) are created within a Service Engine Group (SE group), which contains the definition of how the Avi Service Engines should be sized, placed, and made highly available. Each cloud has at least one SE group. The options within an SE group may vary based on the type of cloud in which it exists and its settings, such as no access versus write access mode. An Avi Service Engine may only exist within one SE group. Each SE group acts as an isolation domain: Service Engine resources within an SE group may be moved around to accommodate virtual services, but Service Engine resources are never shared between SE groups.

Depending on the setting, a change made to an SE group:

  • May be applied immediately,
  • May apply only to Avi Service Engines created after the change is made, or
  • May require existing Avi Service Engines to be rebooted before the change can take effect

Multiple SE groups may exist within a cloud. A newly created virtual service is placed on the ‘default’ SE group, though this can be changed on the Advanced page when creating a virtual service through the advanced wizard. SE groups provide data plane isolation; therefore, moving a virtual service from one SE group to another is disruptive to existing connections through that virtual service.

SE group Availability Modes

The high availability mode of the SE group controls the behavior of the SE group in the event of an SE failure. It also controls how the load is scaled across SEs. Selecting a particular HA mode will change the settings and options that are exposed in the UI. These modes span a spectrum, from use of the fewest Service Engine resources on one end to providing the best high availability on the other.

  • Elastic HA Active/Active Mode — This HA mode distributes virtual services across a minimum of two SEs.
  • Elastic HA N+M Mode — This default mode permits up to N active SEs to deliver virtual services, with the capacity equivalent of M SEs within the group ready to absorb SE(s) failure(s).
  • Legacy HA Active/Standby Mode — This mode is primarily intended to mimic a legacy appliance load balancer for easy migration to Avi Vantage. Only two Service Engines may be created. For every virtual service active on one, there is a standby on the other, configured and ready to take over in the event of a failure of the active SE. There is no Service Engine scale out in this HA mode.

Elastic HA Active/Active is the recommended availability mode. It provides resiliency to infrastructure changes, maintains application availability, and provides increased application capacity, while retaining the flexibility for the Avi Vantage deployment to scale.

High Availability and Placement Settings

  • Placement across Avi Service Engines — When placement is compact, Avi Vantage prefers to spin up and fill up the minimum number of Avi Service Engines and attempts to place virtual services on Avi Service Engines which are already running. When the placement is distributed, Avi Vantage maximizes virtual service performance by avoiding placements on existing Avi Service Engines. Instead, it places virtual services on newly spun-up Avi Service Engines, up to the maximum number of Avi Service Engines.

  • Virtual Services per Service Engine — This parameter establishes the maximum number of virtual services the Avi Controller cluster can place on any one of the Avi Service Engines in the SE group.

  • SE Self-Election — Checking this option enables Avi Service Engines in the SE group to elect a primary Avi Service Engine amongst themselves in the absence of connectivity to a Controller. This ensures Avi Service Engine high availability in handling client traffic even in headless mode.

Service Engine Capacity and Limit Settings

  • Max Number of Service Engines — (Default=10, Range=0-1000) Defines the maximum number of Avi Service Engines that may be created within an SE group. This number, combined with the Virtual Services per Service Engine setting, dictates the maximum number of virtual services that can be created within an SE group. If this limit is reached, new virtual services might not be deployable and will show a gray, un-deployed status. This setting can be useful to prevent Avi Vantage from consuming too many virtual machines.

  • Host Geo Profile — (Default is OFF) Enabling this provides extra configuration memory to support a large geo DB configuration.

  • Connection Memory Percentage — The percentage of memory reserved to maintain connection state. It comes at the expense of memory used for the HTTP in-memory cache. Sliding the bar adjusts the percentage devoted to connection state between a minimum of 10% and a maximum of 90%.

Advanced HA and Placement Settings

  • Buffer Service Engines — This is excess capacity provisioned for HA failover. In Elastic HA N+M mode, this capacity is expressed as M, an integer number of buffer Avi Service Engines. It actually translates into a count of potential virtual service placements: Avi Vantage multiplies M by the maximum number of virtual services per Avi Service Engine. For example, with M = 1 and 10 virtual services per Avi Service Engine, 10 buffer virtual service placements are reserved.

  • Scale Per Virtual Service — A pair of integers determines the minimum and maximum number of active Avi Service Engines onto which a single virtual service may be placed. With native Avi Service Engine scaling, the greatest value that can be entered as the maximum is 4; with BGP-based Avi Service Engine scaling, the limit is much higher, governed by the ECMP support on the upstream router.

  • Dedicated dispatcher CPU — Selecting this option dedicates the core that handles packet receive/transmit to and from the data network exclusively to dispatching.

Host and Data Store Scope

  • Host Scope Service Engine — SEs may be deployed on any host that most closely matches the resources and reachability criteria for placement. This setting directs the placement of SEs. Select the ESXi hosts of the Workload Domain where Avi Service Engines should be placed.

  • Datastore Scope Service Engine — Set the storage location for SEs. Select the appropriate shared storage for the Workload Domain.

Advanced Settings – Sizing and Naming

  • Memory per Service Engine — [Default = 2 GB, min = 1 GB] Specify the amount of RAM, in multiples of 1024 MB, to allocate to all new SEs. Changes to this field will only affect newly created SEs. Allocating more memory to an SE will allow larger HTTP cache sizes, more concurrent TCP connections, better protection against certain DDoS attacks, and increased storage of un-indexed logs.

  • Memory Reserve — [Default = ON] Reserving memory ensures an SE will not have contention issues with over-provisioned host hardware. Reserving memory makes that memory unavailable for use by another virtual machine, even when the virtual machine that reserved those resources is powered down. Avi strongly recommends reserving memory, as memory contention may randomly overwrite part of the SE memory, destabilizing the system.

  • vCPU per Service Engine — [Default = 1, range=1-64] Enter the number of virtual CPU cores to allocate to new SEs. Changes to this setting do not affect existing SEs.

  • CPU Reserve — [Default = OFF] Reserving CPU capacity with a virtualization orchestrator ensures an SE will not have issues with over-provisioned host hardware. Reserving CPU cores makes those cores unavailable for use by another virtual machine, even when the virtual machine that reserved those resources is powered down. Avi strongly recommends reserving CPU.

  • Disk per Service Engine — [min = 15GB] Specify an integral number of GB of disk to allocate to all new SEs.
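
Tying these settings together, the sketch below creates an SE group with the values recommended in the design-decision table that follows (Elastic HA Active/Active, distributed placement, CPU and memory reservations, a consistent name prefix). It is illustrative only: the field names and enum values are assumptions based on the serviceenginegroup object and should be verified against your Controller's API schema; the controller address, credentials, and object names are placeholders.

```python
# Sketch: create an SE group per the recommendations in this section.
# Field names and enum values are assumptions; verify against the
# serviceenginegroup API schema of your Avi version.
import requests

CONTROLLER = "avi-controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "20.1.1", "X-Avi-Tenant": "admin"}

se_group = {
    "name": "seg-wld01-prod",
    "cloud_ref": "/api/cloud?name=nsxt-cloud-wld01",
    "ha_mode": "HA_MODE_SHARED_PAIR",          # Elastic HA Active/Active (AVI-SE-004)
    "algo": "PLACEMENT_ALGO_DISTRIBUTED",      # distributed placement (AVI-SE-007)
    "vcpus_per_se": 2,                         # default sizing (AVI-SE-003)
    "cpu_reserve": True,                       # CPU reservation (AVI-SE-008)
    "memory_per_se": 4096,                     # MB
    "mem_reserve": True,                       # memory reservation (AVI-SE-008)
    "disk_per_se": 20,                         # GB
    "max_se": 10,
    "max_vs_per_se": 10,
    "se_name_prefix": "avise-wld01",           # consistent name prefix (AVI-SE-009)
}

resp = requests.post(f"https://{CONTROLLER}/api/serviceenginegroup", json=se_group,
                     auth=AUTH, headers=HEADERS, verify=False)
resp.raise_for_status()
print("SE group uuid:", resp.json()["uuid"])
```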

The following table summarizes the design decisions for Service Engine group design for Avi Service Engines:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-004 | Configure each Workload Domain's Cloud Service Engine group for Active/Active HA mode. Note: Legacy Active/Standby HA mode might be required for certain applications. | Active/Active HA provides the best load balanced application resiliency by hosting each virtual service on two Service Engines by default | Possible underutilization of capacity if Legacy Active/Standby HA mode is used for applications that do not require it |
| AVI-SE-005 | Create multiple SE groups per Cloud as required. Note: Criteria for grouping applications into an SE group could include: SE group(s) hosting applications for a line of business; SE group(s) hosting applications in DMZ vs. non-DMZ; SE group(s) hosting applications in production vs. non-production; hosting applications in different SE groups for scale and performance reasons. | Isolating applications into different SE groups allows for better capacity planning and the flexibility of separate life cycle management | All applications would be placed on the same SE group, thereby sharing Service Engine resources. All Service Engines would need to be upgraded together because they are placed in the same SE group. |
| AVI-SE-006 | Enable Dedicated dispatcher CPU on SE groups that contain Avi Service Engine VMs of 4 or more vCPUs. Note: This setting should be enabled on SE groups servicing applications with high network requirements. | Enables a dedicated core for packet processing, providing a high packet-processing pipeline on the Avi Service Engine VMs. Note: By default, the packet-processing core also processes load balancing flows. | Network performance of Avi Service Engines would be sub-par compared to those that have this setting enabled |
| AVI-SE-007 | Set the 'Placement across Avi Service Engines' setting to 'Distributed' | Allows for maximum fault tolerance and even utilization of capacity | Uneven utilization of capacity |
| AVI-SE-008 | Enable CPU and Memory reservation on the Service Engine Group | Avi Service Engines are a critical infrastructure component providing load balancing services to mission-critical applications | Might induce performance penalties and might impact application SLAs |
| AVI-SE-009 | Configure a consistent Service Engine name prefix that indicates an Avi Service Engine VM, for instance 'avise-xxxx'. Note: 'xxxx' could contain tenant/cloud/SE group identities. | Allows for grouping and filtering | It would be hard to group or filter any operations that need to be done on Avi Service Engine VMs |
| AVI-SE-010 | Choose the SE Group mode as Legacy HA Active/Standby if the Avi Controller is set to use the Basic Edition | The Avi Controller in Basic Edition only supports Legacy HA Active/Standby mode | Applications would not be deployed in an Active/Active fashion, thereby losing out on elastic capacity management. The Avi Enterprise Edition allows for Active/Active as well as Legacy Active/Standby deployments. |

Creating Avi Service Engine - NSX-T Cloud

Avi Service Engines are created by the Avi Controller’s NSX-T Cloud. In an NSX-T Cloud, the Avi Controller manages the complete life cycle of the Avi Service Engines:

  • Avi Controller accesses vCenter and automatically creates Avi Service Engine VMs when required.

  • Avi Controller accesses vCenter and automatically attaches the appropriate networks to the Avi Service Engine VMs.

  • Avi Controller accesses vCenter and automatically deletes Avi Service Engine VMs when required.

  • Avi Controller accesses NSX-T and adds the Service Engines to NSGroups for ease of DFW rule creation and maintenance.

Note: Avi Service Engine creation is triggered when a Virtual Service is created and requires load balancing capacity to service it. Avi Controller will create and program the required number of Avi Service Engines of the configured size and make them available for the Virtual Service.
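
As a hedged end-to-end sketch, the snippet below creates a minimal pool, VsVip, and virtual service through the REST API; it is the virtual service creation that triggers the Controller to deploy and program the Service Engines in the NSX-T Cloud. The object layouts are assumptions based on the Avi pool, vsvip, and virtualservice objects and should be verified against your Controller's schema; names, IP addresses, and the Tier-1 path are placeholders.

```python
# Sketch: a minimal pool + VsVip + virtual service; SE creation is triggered
# by the virtual service. Field names are assumptions; values are placeholders.
import requests

CONTROLLER = "avi-controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "20.1.1", "X-Avi-Tenant": "admin"}
CLOUD_REF = "/api/cloud?name=nsxt-cloud-wld01"

def post(obj_type, payload):
    r = requests.post(f"https://{CONTROLLER}/api/{obj_type}", json=payload,
                      auth=AUTH, headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()

pool = post("pool", {
    "name": "web-pool", "cloud_ref": CLOUD_REF, "default_server_port": 80,
    "servers": [{"ip": {"addr": "10.1.10.11", "type": "V4"}},
                {"ip": {"addr": "10.1.10.12", "type": "V4"}}],
})

vsvip = post("vsvip", {
    "name": "web-vsvip", "cloud_ref": CLOUD_REF,
    "tier1_lr": "/infra/tier-1s/t1-wld01",   # Tier-1 where the /32 VIP route is injected
    "vip": [{"vip_id": "1", "ip_address": {"addr": "10.1.20.100", "type": "V4"}}],
})

vs = post("virtualservice", {
    "name": "web-vs", "cloud_ref": CLOUD_REF,
    "se_group_ref": "/api/serviceenginegroup?name=seg-wld01-prod",
    "pool_ref": f"/api/pool?name={pool['name']}",
    "vsvip_ref": f"/api/vsvip?name={vsvip['name']}",
    "services": [{"port": 80}],
})
print("virtual service uuid:", vs["uuid"])
```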

Life Cycle Management for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes Service Engine code upgrades. For more information on how to upgrade the Service Engine software, refer to Life Cycle Management for Avi Controller for the Avi Vantage Platform.

Note: Avi Service Engines will be managed by the Avi Controller cluster which manages the workload domains in which the Avi Service Engines are placed.

Logging Design for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes generation and retrieval of events. For more information on Service Engine level events, refer to Alerting and Events Design for Avi Controller for the Avi Vantage Platform.

Monitoring and Alerting Design for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes the generation and retrieval of events. For sending Service Engine level events, refer to Monitoring and Alerting Design for Avi Controller for the Avi Vantage Platform.

Data Protection and Back Up for Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. The Avi Controller (control plane) owns, manages, and determines the configurations that ultimately get applied to specific Avi Service Engines (data plane). As a result of this architecture, the Avi Service Engines can be viewed as ephemeral resources. When an Avi Service Engine reboots, it initially comes up with only its management network configured; the Controller then determines the correct configuration to apply to the Service Engine. There is no need (or even the capability) to back up an Avi Service Engine configuration. All configuration management is done through the Avi Controllers.

To back up the Avi Vantage configuration, refer to Data Protection and Back Up for Avi Controller for the Avi Vantage Platform.

Networking Design of Avi Service Engine for the Avi Vantage Platform

The Avi Service Engines require a minimum of two interfaces: one for management and one for data. Internally within the Avi Service Engines, management traffic is isolated from data traffic. For additional information regarding port and protocol requirements, refer to Networking Design for the Avi Vantage Platform.

Management Interface

The management interface is used for Avi Service Engine to Avi Controller communication and is the source for streaming virtual service client logs to an external server. The Avi Service Engine management network is not required to be the same as the Avi Controllers’. Layer 2 adjacency is not required; the only requirement is Layer 3 reachability between the Avi Service Engines and the Avi Controller cluster. The Avi Service Engine management interface should be placed on an Overlay Logical Segment connected to a Tier-1 Logical Router in NSX-T.

Data Interfaces

The Avi Service Engine data NICs are used for the load balanced traffic. The Avi Controller injects a /32 static route for the VIP into the Tier-1 Logical Router, with the next hop set to the Avi Service Engine data NIC. The data interface also responds to ARP for the load balanced VIPs and serves as the source for health monitoring of backend pool servers. The Avi Service Engine data NICs do not need to be in the same network where the VIPs reside. The Avi Controller can configure additional data NICs on an Avi Service Engine if required, up to a maximum of 9 data NICs. Avi Service Engine data interfaces should be placed on an Overlay Logical Segment connected to a Tier-1 Logical Router in NSX-T.

The following table summarizes design decisions for the Networking Design of Avi Service Engines for the Avi Vantage platform:

| Decision ID | Design Decision | Decision Justification | Design Implication |
| --- | --- | --- | --- |
| AVI-SE-011 | Latency between Avi Controllers and Avi Service Engines should be <75 ms | Required for correct operation of the Avi Service Engines | May lead to issues with heartbeats and data synchronization between the Avi Controller and Avi Service Engines |
| AVI-VI-VC-017 | Create an Overlay Logical Segment connected to a Tier-1 Logical Router for the management network of the Avi Service Engines | The Avi Service Engine requires management connectivity to/from the Avi Controllers | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-VI-005 | Enable DHCP on the management Overlay Logical Segment used for management by the Avi Service Engines | Ensures ease of use | Requires IP management on the Avi Controller |
| AVI-VI-VC-018 | Create one Overlay Logical Segment connected to a Tier-1 Logical Router for the data network of the Avi Service Engines. Note: One logical segment is required per Tier-1 Logical Router. | The Avi Service Engine requires data connectivity for hosting the Virtual Services and for health monitoring of the backend servers | NSX-T Cloud cannot be used to deploy Avi Service Engines |
| AVI-VI-006 | Enable DHCP on the data Overlay Logical Segments used for data by the Avi Service Engines | Ensures ease of use | Requires IP management on the Avi Controller |

The following diagram shows Avi Vantage Platform Network Design for Management Connectivity in the Workload Domain:

img4

Information Security and Access Design of Avi Service Engine for the Avi Vantage Platform

Identity Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes authentication and authorization. To see how this is set up, refer to Identity Management Design of Avi Controller for the Avi Vantage Platform.

Service Accounts Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes service account configuration. For more information, refer to Service Accounts Design of Avi Controller for the Avi Vantage Platform.

Password Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes password management. Whenever a password is changed on the Avi Controller, it is synced to all of the Avi Service Engines.

For more information, refer to Password Management Design of Avi Controller for the Avi Vantage Platform.

Certificate Management Design of Avi Service Engine for the Avi Vantage Platform

The Avi Vantage architecture separates the control plane and data plane. All administrative tasks are performed at the Avi Controller (control plane), which includes certificate management.

For more information, refer to Certificate Management Design of Avi Controller for the Avi Vantage Platform.