Installing Avi Vantage for a Linux Server Cloud
This article describes how to install Avi Vantage in a Linux cloud. Avi Vantage is a software-based solution that provides real-time analytics and elastic application delivery services, including user-to-application timing, SSL termination, and load balancing. Installing Avi Vantage directly onto Linux servers leverages the raw horsepower of the underlying hardware without the overhead added by a virtualization layer. For example, installing Avi Vantage directly onto Linux servers that support the Data Plane Development Kit (DPDK) allows DPDK's optimized packet processing to be leveraged for virtual service traffic.
- If installing Avi SEs directly onto Linux servers that include DPDK, make sure to enable the option in Avi Vantage when adding the host for the Avi SE.
- Avi Networks recommends disabling hyperthreading (HT) in the BIOS of the Linux servers on which Avi Vantage will run, before installing it. Although this mapping rarely changes, RHEL, OEL, and CentOS may map physical and hyperthreaded cores differently. Rather than basing its decision on the behavior or characteristics of a core, Avi Vantage uses a predictive map of the host OS to skip hyperthreaded cores. An OS upgrade can change this map, in which case Avi Vantage might place work on a hyperthreaded core instead of a physical one, which in turn will impact performance.
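As a quick pre-install check, the hyperthreading state can be inferred from the "Thread(s) per core" value that lscpu reports. The sketch below uses a hypothetical helper name (`ht_status`, not part of Avi Vantage) that treats more than one thread per core as HT-enabled:

```shell
# Sketch of a pre-install HT check; "ht_status" is a hypothetical helper,
# not part of Avi Vantage. It interprets lscpu's "Thread(s) per core" value.
ht_status() {
  if [ "$1" -gt 1 ] 2>/dev/null; then
    echo "HT enabled"
  else
    echo "HT disabled"
  fi
}

# On a real host, feed it the live value:
#   ht_status "$(lscpu | awk -F: '/^Thread\(s\) per core/ {gsub(/ /,"",$2); print $2}')"
ht_status 2   # prints "HT enabled" - disable HT in the BIOS before installing
ht_status 1   # prints "HT disabled"
```

If the helper reports HT enabled, reboot into the BIOS and disable hyperthreading before proceeding.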
Other articles of potential interest: VLAN Configuration on Bare Metal, VRF Support for Service Engine Deployment on Bare-Metal Servers
The Avi Vantage Linux server cloud solution uses containerization provided by Docker for support across operating systems and for easy installation.
Avi Vantage can be deployed onto a Linux server cloud in the following topologies. The minimum number of Linux servers required for deployment depends on the deployment topology. A three-Controller cluster is strongly recommended for production environments.
|Deployment Topology|Min Linux Servers Required|Description|
|---|---|---|
|Single host|1|Avi Controller and Avi SE both run on a single host.|
|Separate hosts|2|Avi Controller and Avi SE run on separate hosts. The Avi Controller is deployed on one host, and the Avi SE on the other.|
|3-host cluster|3|Provides high availability for the Avi Controller.|
A single instance of the Avi Controller is deployed on each host. At any given time, one of the Avi Controllers is the leader and the other 2 are followers.
Single-host deployment runs the Avi Controller and Avi SE on the same Linux server. This is the simplest topology to deploy. However, this topology does not provide high availability for either the Avi Controller or Avi SE.
Note: In single-host mode, in-band management is not supported.
Two-host deployment runs the Avi Controller on one Linux server and the Avi SE on another Linux server.
Three-host Cluster Deployment
In a 3-host cluster deployment, one of the Avi Controller instances is the leader. The other 2 instances are followers. If the leader goes down, one of the followers takes over so that control plane functionality for users is continued.
This section lists the minimum requirements for installation.
Each Linux server to be managed by Avi Vantage must meet at least the following physical requirements:
Docker local storage (default /var/lib/docker) should contain at least 18 GB to run Avi containers. If the Avi SE is instantiated through the cloud UI, add 5 GB to run the Avi SE.
|Component|Minimum Requirement|
|---|---|
|CPU|Intel Xeon with 8 cores|
|Memory|24 GB RAM|
|Network Interface Controller (NIC)|1 x 10 Gbps Intel NIC|

Note: For DPDK mode, the 82599, X520, X540, X550, X552, X710, and XL710 are the supported Intel NICs. Starting with releases 17.2.12 and 18.1.3, Avi Vantage also supports Mellanox ConnectX-4 (25G and 40G) NICs.
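To confirm a host's NIC is on the supported DPDK list before installing, the lspci description line can be matched against the models above. The helper name below (`dpdk_supported`) is ours, not an Avi tool:

```shell
# Sketch: "dpdk_supported" is a hypothetical helper that matches an lspci
# description line against the NIC models this article lists for DPDK.
dpdk_supported() {
  case "$1" in
    *82599*|*X520*|*X540*|*X550*|*X552*|*X710*|*XL710*|*ConnectX-4*)
      echo yes ;;
    *)
      echo no ;;
  esac
}

# On a real host, list the Ethernet controllers first:
#   lspci | grep -i ethernet
dpdk_supported "Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection"   # prints "yes"
dpdk_supported "Broadcom Inc. NetXtreme BCM5720 Gigabit Ethernet"                   # prints "no"
```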
Installation of Avi Vantage for a Linux server cloud also requires the following software:
|Software|Version|
|---|---|
|Avi Vantage (distributed by Avi Networks as a Docker image)|16.2 or greater|
|Docker (image management service that runs on Linux)|1.6.1 or greater|
|Operating system (OS)|Refer to the Ecosystem Support article|
Note: You can place the Avi Controller and Service Engine containers on the same host only starting with RHEL 7.4. On RHEL versions earlier than 7.4, restarting either container will fail if they are co-located on the same host.
Supported kernel versions to enable DPDK:

|Operating System|Kernel Version|
|---|---|
|Oracle Enterprise Linux|3.10|
|Red Hat Enterprise Linux|3.10|
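The running kernel can be checked against the 3.10 line with `uname -r`. The helper name below (`kernel_ok_for_dpdk`) is our own sketch, not an Avi utility:

```shell
# Sketch: check the running kernel against the 3.10 line listed above
# for DPDK on OEL/RHEL; "kernel_ok_for_dpdk" is a hypothetical helper.
kernel_ok_for_dpdk() {
  case "$1" in
    3.10*) echo yes ;;
    *)     echo no ;;
  esac
}

# On a real host: kernel_ok_for_dpdk "$(uname -r)"
kernel_ok_for_dpdk "3.10.0-957.el7.x86_64"   # prints "yes"
kernel_ok_for_dpdk "4.18.0-80.el8.x86_64"    # prints "no"
```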
Default port assignments are shown below. If these ports are already in use, choose alternative ports for the purposes listed.
|Port|Purpose|
|---|---|
|8443|Secure bootstrap communication between the SE and Controller|
|80, 443|Web server ports|
|161|SNMP MIB walkthrough|
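A host's listening sockets can be scanned for conflicts before installation, for example with `ss -lnt`. The helper below is a sketch (`port_in_use` is our own name); it takes a port and a captured listener list so the matching logic is explicit:

```shell
# Sketch: "port_in_use" is a hypothetical helper that scans listener
# output (e.g. from "ss -lnt") for one of the default Avi ports.
port_in_use() {
  port="$1"; listeners="$2"
  if echo "$listeners" | grep -q ":$port "; then echo yes; else echo no; fi
}

# On a real host: port_in_use 8443 "$(ss -lnt)"
sample='LISTEN 0 128 0.0.0.0:443 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*'
port_in_use 443 "$sample"    # prints "yes" - choose an alternative port
port_in_use 8443 "$sample"   # prints "no" - the default port is free
```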
To install Avi Vantage, some installation tasks are performed on each of the Linux hosts:
- Avi Controller host: The installation wizard for the Avi Controller must be run on the Linux server that will host the Avi Controller. If deploying a 3-host cluster of Avi Controllers, run the wizard only on the host that will be the cluster leader. (The cluster can be configured at any time after installation is complete.)
- Avi SE hosts: On each Linux server that will host an Avi SE, configuration of some SSH settings is required. At a minimum, an SSH user account must be added to the Avi Controller, and the public key for the account must be installed in the authorized keys store on each of the Avi SE hosts. If an SSH user name other than “root” will be used, some additional steps are required.
Avi Vantage deployment for a Linux server cloud consists of the following:
- Install the Docker platform (if not already installed). On Ubuntu, the command line would be:
apt-get install docker.io
- Install NTP server on host OS.
- Install the Avi Controller image onto a Linux server.
- Use the setup wizard to perform initial configuration of the Avi Controller:
- Avi Vantage user account creation (your Avi Vantage administrator account)
- DNS and NTP servers
- Infrastructure type (Linux)
- SSH account information (required for installation and access to the Avi SE instance on each of the Linux servers that will host an Avi SE)
- Avi SE host information (IP address, DPDK, CPUs, memory)
- Multitenancy support
The SSH, Avi SE host, and multitenancy selection can be configured either using the wizard or later, after completing it. (The wizard times out after a while.) This article provides links for configuring these objects using the Avi Controller web interface.
Detailed steps are provided below.
1. Install Docker
Refer to the Docker Installation article, which covers:
- Docker Editions
- Getting the installation images for various Linux variants
- Selecting a storage driver for Docker
- Verifying your Docker installation
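After installation, it is worth confirming the Docker version meets the 1.6.1 minimum from the requirements table. The comparison helper below (`docker_meets_min`) is our own sketch using GNU `sort -V`:

```shell
# Sketch: verify the installed Docker meets the 1.6.1 minimum noted in
# the requirements table; "docker_meets_min" is a hypothetical helper.
docker_meets_min() {
  min="1.6.1"
  # Version-sort the minimum against the candidate; if the minimum sorts
  # first (or they are equal), the candidate is new enough.
  [ "$(printf '%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# On a real host: docker_meets_min "$(docker version --format '{{.Server.Version}}')"
docker_meets_min "20.10.7" && echo "Docker version OK"
docker_meets_min "1.5.0"   || echo "Docker too old - upgrade before continuing"
```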
2. Install NTP Server on Host Operating System
Install NTP server using the following command:
sudo yum install ntp
3. Install Avi Controller Image
- Use SCP to copy the .tgz package onto the Linux server that will host the Avi Controller:
scp docker_install.tar.gz root@Host-IP:/tmp/
- Use SSH to log into the host:
- Change to the /tmp directory:
- Extract the .tgz package:
sudo tar -xvf docker_install.tar.gz
- Run the setup.py script. The setup script can be run in interactive mode or as a single command string.
- If entered as a command string, the script sets the options that are included in the command string to the specified values, and leaves the other values set to their defaults. Go to Step 6.
- In interactive mode, the script displays a prompt for configuring each option. Go to Step 7.
- To run the setup script as a single command, enter a command string such as the following:
./avi_baremetal_setup.py -c -cc 8 -cm 24 -i 10.120.0.39
The options are explained in the CLI help:
avi_baremetal_setup.py [-h] [-d] [-s] [-sc SE_CORES] [-sm SE_MEMORY_MB] [-c]
                       [-cc CON_CORES] [-cm CON_MEMORY_GB] -i CONTROLLER_IP
                       -m MASTER_CTL_IP

-h, --help            show this help message and exit
-d, --dpdk_mode       Run SE in DPDK Mode. Default is False
-s, --run_se          Run SE locally. Default is False
-sc SE_CORES, --se_cores SE_CORES
                      Cores to be used for AVI SE. Default is 1
-sm SE_MEMORY_MB, --se_memory_mb SE_MEMORY_MB
                      Memory to be used for AVI SE. Default is 2048
-c, --run_controller  Run Controller locally. Default is No
-cc CON_CORES, --con_cores CON_CORES
                      Cores to be used for AVI Controller. Default is 4
-cm CON_MEMORY_GB, --con_memory_gb CON_MEMORY_GB
                      Memory to be used for AVI Controller. Default is 12
-i CONTROLLER_IP, --controller_ip CONTROLLER_IP
                      Controller IP Address
-m MASTER_CTL_IP, --master_ctl_ip MASTER_CTL_IP
                      Master controller IP Address
- To run in interactive mode, start by entering "avi_baremetal_setup.py". Here is an example:
./avi_baremetal_setup.py
Welcome to AVI Initialization Script

DPDK Mode:
Pre-requisites(DPDK): This script assumes the below utilities are installed:
                      docker (yum -y install docker)
Supported Nics(DPDK): Intel 82599/82598 Series of Ethernet Controllers
Supported Vers(DPDK): OEL/CentOS/RHEL - 7.0,7.1,7.2

Non-DPDK Mode:
Pre-requisites: This script assumes the below utilities are installed:
                docker (yum -y install docker)
Supported Vers: OEL/CentOS/RHEL - 7.0,7.1,7.2

Caution : This script deletes existing AVI docker containers & images.

Do you want to proceed in DPDK Mode [y/n] y
Do you want to run AVI Controller on this Host [y/n] y
Do you want to run AVI SE on this Host [n] n
Enter The Number Of Cores For AVI Controller. Range [4, 39] 8
Please Enter Memory (in GB) for AVI Controller. Range [12, 125] 24
Please Enter directory path for Avi Controller Config (Default [/opt/avi/controller/data/])
Please Enter disk (in GB) for Avi Controller config (Default [30G])
Do you have separate partition for Avi Controller Metrics? If yes, please enter directory path, else leave it blank
Do you have separate partition for Avi Controller Client Log? If yes, please enter directory path, else leave it blank
Please Enter Controller IP 10.120.0.39

Run SE           : No
Run Controller   : Yes
Controller Cores : 8
Memory(mb)       : 24
Controller IP    : 10.120.0.39

Disabling AVI Services...
Loading AVI CONTROLLER Image. Please Wait..
kernel.core_pattern = /var/crash/%e.%p.%t.core
Installation Successful. Starting Services...
- Start Avi Controller on the host to complete installation:
sudo systemctl start avicontroller
- If deploying a 3-host cluster, repeat the steps above on the hosts for each of the other 2 Controllers.
Note: Following a reboot, it takes about 3-5 minutes before the web interface becomes available. Until startup is complete, web interface access will appear to be frozen. This is normal. Starting with Avi Vantage release 16.3, a reboot is not required.
4. Perform Initial Setup of Avi Controller
Use a web browser to navigate to the Avi Controller and start the setup wizard to configure basic system settings, i.e., create the administrator account, provide DNS and NTP server information, email/SMTP information, and choose Linux as the infrastructure type, as shown below.
- SSH user and keys: To use the “root” account (simpler option), select Create SSH User, enter the name, select Generate SSH Key Value Pair and click Generate SSH Key Pair. Then click Copy to clipboard, and save the key in a text file. (This file will be useful soon.)
- Avi SE hosts: After SSH access is set up on each Avi SE host, the hosts can be added to the Avi Controller. For now, click Complete.
- Multitenancy support: For now, select No. This can be configured at any time later, if needed.

After the wizard closes, see the following articles to complete the installation and create virtual services:
5. Set up SSH Access to the Avi SE Hosts
If you are continuing with the wizard, this section describes how to add the SSH account information to the Avi Controller, and to then copy the SSH public key to each of the Avi SE hosts.
Note: If the wizard has timed out or you have decided to click through the rest of the wizard and do the SSH setup later, go here instead, when ready. See the same link if using an account other than “root.” This section assumes that “root” will be used.
On the Avi Controller:
- When the SSH User wizard page appears, click Create SSH User.
- Enter the username ("root").
- Click Generate, then click Copy to clipboard.
- Click Save.
- Open a text editor, paste the key from the clipboard, and save the file.
On each Avi SE Host:
Leaving the wizard open, use another window or device to open a CLI session in the Linux shell on one of the Avi SE hosts.
- Log into the Linux shell on the Avi SE host (in this example, 10.130.164.76):
ssh root@10.130.164.76
password:
- Prepare the Avi SE host for adding the key from the Avi Controller:
mkdir .ssh && chmod 700 .ssh && cd .ssh
- Add the Avi Controller's public key to the authorized_keys file. Paste the key copied from the Avi Controller (via Copy to clipboard) into the following command line:
echo "paste-key-file-copied-from-Controller" > authorized_keys
chmod 644 authorized_keys
Use quotation marks to delimit the pasted key string. (If the authorized_keys file does not already exist, the command string also creates the file.)
- Repeat these steps on each Avi SE host.
mkdir .ssh && chmod 700 .ssh && cd .ssh
echo "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmizdHAyNTYAAAAlbmlzdHAyNTYAAABBBAHjOSUo8AVTISniFZ05UwOsce8/CxMhZ0myWFeRJJSnEC/T09EwOj+z6uMbnTEC+AHrYAEMgVCkdlhYfmWlrCg= root@Avi-Controller" > authorized_keys
chmod 644 authorized_keys
Note: Make sure to paste the public key for the Avi SE in your deployment. The key shown here is only an example and will not work with your Avi SEs.
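The manual steps above can also be expressed as an idempotent helper, which is convenient when repeating them across many Avi SE hosts. The function name (`add_controller_key`) and placeholder key below are our own, not part of Avi Vantage:

```shell
# Sketch: an idempotent variant of the manual authorized_keys steps;
# "add_controller_key" is a hypothetical helper, not an Avi tool.
# It creates the .ssh directory if needed and appends the Controller's
# public key only if it is not already present.
add_controller_key() {
  key="$1"; home_dir="$2"
  mkdir -p "$home_dir/.ssh" && chmod 700 "$home_dir/.ssh"
  touch "$home_dir/.ssh/authorized_keys"
  grep -qxF "$key" "$home_dir/.ssh/authorized_keys" || \
    printf '%s\n' "$key" >> "$home_dir/.ssh/authorized_keys"
  chmod 644 "$home_dir/.ssh/authorized_keys"
}

# Example with a throwaway directory and a placeholder key; running it
# twice leaves a single copy of the key in authorized_keys.
d=$(mktemp -d)
add_controller_key "ecdsa-sha2-nistp256 EXAMPLEKEY root@Avi-Controller" "$d"
add_controller_key "ecdsa-sha2-nistp256 EXAMPLEKEY root@Avi-Controller" "$d"
grep -c 'EXAMPLEKEY' "$d/.ssh/authorized_keys"   # prints 1
```

On a real Avi SE host you would pass the actual key copied from the Controller UI and the target user's home directory (e.g. /root).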
6. Add the Avi SE Hosts to the Avi Controller
If you are continuing with the wizard, this section describes how to add the Avi SE hosts to the Avi Controller.
Note: This step will not succeed unless the SSH setup steps have been completed on both the Avi Controller and the Avi SE hosts.
- For each Avi SE host, enter the values and click Add New Host. After all the Avi SE hosts are added, click Complete.
- In the Support Multiple Tenants window, click No:
In the Avi Controller web interface login popup, enter the user name and password added when using the setup wizard. If you clicked through the SSH or Avi SE host pages of the wizard, see the following articles to complete installation: