Clustering Avi Controllers of Different Networks
Avi Controller clusters provide high availability (HA) and redundancy, as well as increased analytic workload scale.
Avi Controllers in a cluster communicate with each other over a single shared management IP address, the cluster IP address. They also use this path to communicate with all Avi Service Engines (SEs) within the fabric.
Avi Controllers are not required to exist within the same IP network, but the following general constraints apply:
- Avi Controllers must be within the same region (ideally the same data center). This helps to quickly synchronize the databases and to perform actions such as log indexing and data retrieval.
- Avi Controllers have the option of sharing a cluster IP address. The cluster IP address is owned by the primary Avi Controller within the cluster. In order to share an IP address, all Avi Controllers must have a NIC in the same network.
- Each Avi Controller must have access to the IP addresses of other Avi Controllers through configured network routes.
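The constraints above come together in the cluster configuration itself: a shared cluster IP plus the individual member nodes. The sketch below builds such a payload in the shape typically accepted by the Avi Controller REST API; the `/api/cluster` endpoint and the `virtual_ip`/`nodes` field names are assumptions based on common Avi API conventions, so verify them against the API reference for your Avi Vantage version.

```python
import json

def build_cluster_payload(name, virtual_ip, node_ips):
    """Build a cluster config: one shared cluster IP plus member nodes.

    Field names are illustrative, not an authoritative Avi schema.
    """
    return {
        "name": name,
        # Cluster IP owned by the primary (leader) Controller.
        "virtual_ip": {"type": "V4", "addr": virtual_ip},
        # Each member Controller, reachable from the others via routes.
        "nodes": [{"ip": {"type": "V4", "addr": ip}} for ip in node_ips],
    }

payload = build_cluster_payload(
    "cluster-0", "10.10.1.100", ["10.10.1.11", "10.10.1.12", "10.10.1.13"]
)
print(json.dumps(payload, indent=2))
# A client would then submit this, e.g. with
# requests.put("https://<controller-ip>/api/cluster", json=payload, ...)
```

Note that the cluster IP (`virtual_ip`) only makes sense when all three nodes have a NIC in the same network, per the constraint above.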
For AWS deployments, AWS Availability Zones (AZs) provide redundancy and separate fault domains. Every AWS region supports at least two AZs. To leverage the high availability provided by AWS AZs, it is recommended to deploy each Avi Controller instance of a cluster in a different AZ.
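The recommended placement can be expressed as a simple round-robin assignment of Controller instances to AZs. This is a minimal sketch; the AZ names are illustrative, and in practice you would query your region for the real list (for example, via the EC2 DescribeAvailabilityZones API).

```python
def assign_azs(controllers, azs):
    """Map each Controller to an AZ, cycling through the available AZs."""
    return {c: azs[i % len(azs)] for i, c in enumerate(controllers)}

# Hypothetical three-node cluster spread across three AZs in us-east-1.
placement = assign_azs(
    ["ctrl-1", "ctrl-2", "ctrl-3"],
    ["us-east-1a", "us-east-1b", "us-east-1c"],
)
print(placement)
# → {'ctrl-1': 'us-east-1a', 'ctrl-2': 'us-east-1b', 'ctrl-3': 'us-east-1c'}
```

With three controllers and at least three AZs, each node lands in its own fault domain; with only two AZs, two nodes share one AZ, which still survives a single-AZ failure.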
For Azure deployments, the Controller cluster should run inside the Azure cloud. Additionally, the following are required:
- Azure credentials (username/password or application ID) with Contributor privilege access over the Controller cluster VMs and AviController role access over the virtual network hosting the Controller cluster.
- The subscription ID of the subscription in which the Controller VMs are running.
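A pre-flight check for the Azure prerequisites above could look like the sketch below. The field names (`subscription_id`, `credentials`) are illustrative placeholders, not the actual Avi Azure cloud configuration schema.

```python
# Required fields taken from the prerequisites listed above (names assumed).
REQUIRED_FIELDS = ("subscription_id", "credentials")

def missing_azure_fields(cfg):
    """Return the required fields that are absent or empty in the config."""
    return [f for f in REQUIRED_FIELDS if not cfg.get(f)]

# Hypothetical config with placeholder values.
cfg = {
    "subscription_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "credentials": {"application_id": "app-id-placeholder"},
}
print("missing:", missing_azure_fields(cfg))
# → missing: []
```

Running this before attempting the cluster setup surfaces configuration gaps early, instead of failing partway through deployment.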
In OpenStack deployments, Avi Vantage must maintain a cluster IP address. Therefore, Avi Vantage deployed in an OpenStack cloud does not support clustering of Avi Controllers that are in different networks.
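Because the OpenStack case requires all cluster members to live in one network, a simple pre-flight check can confirm this before configuring the cluster. This is a standalone sketch using Python's standard `ipaddress` module, not part of Avi Vantage itself.

```python
import ipaddress

def all_in_subnet(controller_ips, subnet_cidr):
    """True if every Controller IP belongs to the given subnet."""
    subnet = ipaddress.ip_network(subnet_cidr)
    return all(ipaddress.ip_address(ip) in subnet for ip in controller_ips)

# Hypothetical management subnet and Controller addresses.
print(all_in_subnet(["10.0.0.11", "10.0.0.12", "10.0.0.13"], "10.0.0.0/24"))  # True
print(all_in_subnet(["10.0.0.11", "10.0.1.12"], "10.0.0.0/24"))               # False
```

A `False` result here means the deployment would violate the OpenStack same-network requirement, so the cluster IP could not be shared.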