Avi Vantage Platform Planning and Preparation

About Avi Vantage Platform Planning and Preparation

The planning and preparation for the Avi Vantage platform documentation provides detailed information about the hardware and external services that are required to implement VMware NSX Advanced Load Balancer.

Before you start deploying the components of this design, ensure that the environment has a specific compute, storage, and network configuration, and that it provides services to the components of the SDDC.

Review the planning and preparation for the Avi Vantage platform documentation before deployment to avoid costly rework and delays.

Intended Audience

The planning and preparation for the Avi Vantage platform documentation is intended for cloud architects, infrastructure administrators, and cloud administrators who are familiar with and want to use VMware software to deploy advanced networking services.

Required VMware Software

The planning and preparation for the Avi Vantage platform documentation is validated against specific product versions.

Before You Apply this Guidance

To use the planning and preparation for the Avi Vantage platform documentation, you must be acquainted with the following guidance:

Hardware Requirements

To implement the Avi Vantage platform from this design, your hardware must meet certain requirements.

Component Requirement per Region
Servers BIOS configuration: Advanced Encryption Standard-New Instructions (AES-NI) enabled
Network Interfaces Minimum of 10 GbE

Software Requirements

To implement the Avi Vantage platform from this design, your environment must meet the following software requirements:

Product Description
VMware Cloud Foundation 4.1 Supported version of VMware Cloud Foundation against which the Avi Vantage Platform has been validated
Avi Vantage v20.1.3 or greater Versions of Avi Vantage that can be used with VCF v4.1

External Services

You must provide a set of external services before you deploy the components of this design.

External Services Overview

External services include Active Directory (AD), Domain Name Services (DNS), Network Time Protocol (NTP), Secure Copy Protocol (SCP), and Certificate Authority (CA).

Active Directory

This validated design uses Active Directory (AD) for authentication and authorization to resources in the rainpole.io domain.

For information about the Active Directory (AD) versions supported by the vSphere version in this design, refer to Versions of Active Directory supported in vCenter Server.

For a multi-region deployment, you use a domain and forest structure to store and manage Active Directory objects per region.

Requirement Domain Instance DNS Zone Description
Active Directory configuration Parent Active Directory rainpole.io Contains Domain Name System (DNS) server, time server, and universal groups that contain global groups from the child domains and are members of local groups in the child domains.
Active Directory configuration Region-A child Active Directory sfo.rainpole.io Contains DNS records that replicate to all DNS servers in the forest. This child domain contains all SDDC users, and global and local groups.
Active Directory configuration Region-B child Active Directory lax.rainpole.io Contains DNS records that replicate to all DNS servers in the forest. This child domain contains all SDDC users, and global and local groups.
Active Directory users and groups - - All user accounts and groups from the Active Directory Users and Groups documentation must exist in the Active Directory before installing and configuring the SDDC.
Active Directory connectivity - - All Active Directory domain controllers must be accessible by all management components within the SDDC.

DNS

For a multi-region deployment, you must provide a root domain and child domains that contain separate DNS records.

Requirement Domain Instance Description
DNS host entries rainpole.io Resides in the rainpole.io domain.
DNS host entries sfo.rainpole.io and lax.rainpole.io Reside in the sfo.rainpole.io and lax.rainpole.io domains.
Configure both DNS servers with the following settings:
  • Dynamic updates for the domain set to Nonsecure and secure
  • Zone replication scope for the domain set to All DNS servers in this forest
  • Create all hosts listed in the Host Names and IP Addresses in Region A documentation

If you configure the DNS servers properly, all nodes from the validated design are resolvable by FQDN and by IP address.
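Once the zones are populated, a quick way to validate them is to confirm that forward (A) and reverse (PTR) records round-trip for every node. The following is a minimal sketch in Python; the record sets are hypothetical in-memory placeholders, and in a live environment you would instead query the DNS servers, for example with `socket.gethostbyname` and `socket.gethostbyaddr`:

```python
# Minimal sketch: verify that forward (A) and reverse (PTR) DNS records
# round-trip for each SDDC node. The records below are hypothetical
# placeholders, not values from the validated design.

def check_dns_consistency(a_records, ptr_records):
    """Return the FQDNs whose A and PTR records do not agree."""
    mismatches = []
    for fqdn, ip in a_records.items():
        if ptr_records.get(ip) != fqdn:
            mismatches.append(fqdn)
    return mismatches

a_records = {
    "sfo-m01-avic01a.sfo.rainpole.io": "172.16.11.71",  # hypothetical IP
    "sfo-m01-avic01b.sfo.rainpole.io": "172.16.11.72",  # hypothetical IP
}
ptr_records = {ip: fqdn for fqdn, ip in a_records.items()}

print(check_dns_consistency(a_records, ptr_records))  # an empty list means all records agree
```

An empty result confirms that every node will be resolvable in both directions before you begin the deployment.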

NTP

All components in the SDDC must be synchronized to a common time source by using the Network Time Protocol (NTP) on all nodes.

Requirement Description
NTP An NTP source, for instance, on a Layer 3 switch or router, must be available and accessible from all nodes of the SDDC.
Use the ToR switches in the management workload domain or the upstream physical router as the NTP servers. These switches must synchronize with different upstream NTP servers and provide time synchronization capabilities in the SDDC. As a best practice, make the NTP servers available under a friendly FQDN, for instance, ntp.sfo01.rainpole.io.
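To confirm that an NTP source is reachable from a node, you can send a minimal SNTP (RFC 4330) client request on UDP port 123. The sketch below covers only the packet handling; sending the request is left as a comment because it requires a live NTP source, and the FQDN in the comment is an example name, not a real server:

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def build_sntp_request():
    """Build a 48-byte SNTP client request: LI=0, version 3, mode 3 (client)."""
    packet = bytearray(48)
    packet[0] = 0x1B  # 0b00_011_011
    return bytes(packet)

def parse_transmit_time(response):
    """Extract the server Transmit Timestamp (seconds field, bytes 40-43) as Unix time."""
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET

# To query a live server (example FQDN, replace with your NTP source):
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(build_sntp_request(), ("ntp.sfo01.rainpole.io", 123))
#   response, _ = sock.recvfrom(48)
#   print(parse_transmit_time(response))
```

Running such a probe from each SDDC subnet verifies both reachability of the NTP source and any firewall rules on UDP port 123.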

Certificate Authority

Most components of the SDDC require SSL certificates for secure operation. The certificates must be signed by an internal enterprise CA or by a third-party commercial CA. In either case, the CA must be able to sign a Certificate Signing Request (CSR) and return the signed certificate. All endpoints within the enterprise must also trust the root certificate of the CA.

Certificate Authority Requirements

Requirement Description
Certificate Authority The CA must be able to accept a Certificate Signing Request (CSR) from the SDDC components and issue a signed certificate. For this validated design, use the Microsoft Windows Enterprise CA that is available in the Windows Server 2016 operating system of a root domain controller. The domain controller must be configured with the Certification Authority and the Certification Authority Web Enrollment roles.

SCP Backup Target

Dedicate space on a remote server to save data backups for the Avi Vantage Platform over SCP.

Requirement Description
Backup Target A backup target for the Avi Controller VMs in the SDDC. The server must support SCP connections.

VLANs and IP Subnets

This validated design requires that you allocate certain VLAN IDs and IP subnets for the traffic types in the SDDC.

Cluster in Region A VLAN Function VLAN ID Portgroup Name Subnet Gateway
Management Cluster Avi Management for Avi Controllers sfo-m01-cl01-vds01-pg-avimgmt

Overlay Logical Segments and IP Subnets

This validated design requires that you allocate certain Overlay Logical Segments connected to a Tier-1 Logical Router and IP subnets for the Avi Service Engine traffic in the SDDC.

Cluster in Region A Overlay Logical Segment Function Logical Segment Name Subnet
VI Workload Domain Avi Management for Avi SEs sfo-w01-cl01-vds01-pg-avimgmt
VI Workload Domain Data Network for Avi SEs sfo-w01-cl01-vds01-pg-avidata01

Host Names and IP Addresses

Before you deploy the Avi Vantage Platform following this design, you must define the host names and IP addresses for each of the components deployed. Some of these host names must also be configured in DNS with fully qualified domain names (FQDN) that map the host names to their IP addresses.

Component Host Name DNS Zone IP Address Description
Avi Controller Cluster sfo-m01-avic01 sfo.rainpole.io Avi Controller Cluster VIP Interface
sfo-m01-avic01a sfo.rainpole.io Avi Controller instances for the management cluster
sfo-m01-avic01b sfo.rainpole.io Avi Controller instances for the management cluster
sfo-m01-avic01c sfo.rainpole.io Avi Controller instances for the management cluster
Avi Service Engines sfo-w01-avise01 sfo.rainpole.io
Avi Service Engines sfo-w01-avise02 sfo.rainpole.io
Avi Service Engines sfo-w01-avise03 sfo.rainpole.io

Note: This table depicts a VCF instance with two workload domains. One NSX-T instance manages the compute workload domain. One Avi Controller cluster will be hosted in the management workload domain to manage Avi Service Engines spawned in the compute workload domain.
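The DNS zone column determines how the short host names above expand to FQDNs, and the host-name prefix in this design encodes the workload domain that hosts each component. A minimal sketch of both conventions, with the prefix-to-domain mapping inferred from the tables above:

```python
def fqdn(host_name, dns_zone):
    """Join a short host name and its DNS zone into a fully qualified domain name."""
    return f"{host_name}.{dns_zone}"

def expected_domain(host_name):
    """Infer the hosting workload domain from the naming convention in this design."""
    if host_name.startswith("sfo-m01-"):
        return "management domain"   # e.g. the Avi Controller cluster
    if host_name.startswith("sfo-w01-"):
        return "VI workload domain"  # e.g. the Avi Service Engines
    raise ValueError(f"unrecognized host name: {host_name}")

print(fqdn("sfo-m01-avic01", "sfo.rainpole.io"))  # sfo-m01-avic01.sfo.rainpole.io
print(expected_domain("sfo-w01-avise01"))         # VI workload domain
```

Generating the FQDN list this way before creating the DNS records helps keep the host entries consistent with the naming convention.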

Workload Footprint

Before you deploy the Avi Vantage Platform, you must provide sufficient compute and storage resources to meet the footprint requirements of the Avi Controller Cluster and Avi Service Engines.

Workload Footprint for Management Domain

Workload vCPUs vRAM (GB) Storage (GB)
Avi Controller Cluster
Total
Total with 30% free storage capacity

Workload Footprint for VI Workload Domain

Workload vCPUs vRAM (GB) Storage (GB)
Avi Service Engines
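The per-workload values in these tables depend on the Avi Controller and Service Engine flavors that you select. As a minimal sketch of how the totals are derived, assuming the last row of the management-domain table means the datastore is sized so that 30% of its capacity remains free (the per-workload numbers below are hypothetical placeholders, not validated sizes):

```python
def footprint_totals(workloads, free_fraction=0.30):
    """Sum per-workload requirements and size storage so that `free_fraction`
    of the total capacity remains free (assumed reading of the table)."""
    vcpus = sum(w["vcpus"] for w in workloads)
    vram_gb = sum(w["vram_gb"] for w in workloads)
    storage_gb = sum(w["storage_gb"] for w in workloads)
    return {
        "vcpus": vcpus,
        "vram_gb": vram_gb,
        "storage_gb": storage_gb,
        "storage_with_free_gb": round(storage_gb / (1 - free_fraction), 1),
    }

# Hypothetical sizes for a three-node Avi Controller cluster (not validated values):
cluster = [{"vcpus": 8, "vram_gb": 24, "storage_gb": 128} for _ in range(3)]
print(footprint_totals(cluster))
```

Substituting the sizes for your chosen flavors gives the figures to enter in the footprint tables before deployment.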