Connecting SEs to Controllers When Their Networks Are Isolated

In an Avi Vantage deployment, special consideration must be given to Avi Service Engines instantiated on a network isolated from the network of the Controller nodes. Classic examples of this include:

  1. The Controller cluster is protected behind a firewall, while its SEs are on the public Internet.
  2. In a public-private cloud deployment, Controllers reside in the public cloud (e.g., AWS), while SEs reside in the customer’s private cloud.

This article explains how SE-Controller communication can be established, starting with the very first communication a freshly-instantiated SE sends to its parent Controller.


Starting with Avi Vantage 17.2.4, in addition to the management addresses by which the Controllers in a cluster see one another, each Controller can be given a second management IP address or a DNS-resolvable FQDN that is reachable from the SEs' isolated network. It is this second address or FQDN that the Controller embeds in the SE image used to spawn SEs. Avi has added the public_ip_or_name parameter to support this capability.
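The selection the Controller makes when building an SE image can be sketched as follows. This is a minimal illustration, not Avi's actual code; the node dictionary shape loosely mirrors the /api/cluster object model, and all names and addresses are placeholders:

```python
def controller_address_for_se(node):
    """Return the address an SE should use to reach this Controller node.

    Prefers the public_ip_or_name override when it is set; otherwise
    falls back to the node's private management IP.
    """
    return node.get("public_ip_or_name") or node["ip"]["addr"]

# Placeholder node objects (names and addresses are illustrative only).
private_only = {"name": "node1", "ip": {"type": "V4", "addr": "10.0.0.10"}}
nat_fronted = {**private_only, "public_ip_or_name": "controller.example.com"}

print(controller_address_for_se(private_only))  # falls back to 10.0.0.10
print(controller_address_for_se(nat_fronted))   # controller.example.com
```

When no public_ip_or_name is set, behavior is unchanged: the SE image carries the private management IP, exactly as in releases before 17.2.4.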

Setting the Parameter via the Avi CLI

In the initial release, the parameter is accessible only via the REST API and the Avi CLI. The following CLI example uses a single-node cluster.

[admin:my-controller-aws]: > configure cluster
Updating an existing object. Currently, the object is:
| Field         | Value                                        |
| uuid          | cluster-223cc977-f0de-4c5e-9612-7b0254b3057d |
| name          | cluster-0-1                                  |
| nodes[1]      |                                              |
|   name        |                                              |
|   ip          |                                              |
|   vm_uuid     | 005056b02776                                 |
|   vm_mor      | vm-222393                                    |
|   vm_hostname | node1.controller.local                       |
[admin:my-controller-aws]: cluster> nodes index 1
[admin:my-controller-aws]: cluster:nodes> public_ip_or_name
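The same change can be made over the REST API by fetching the cluster object and writing it back with public_ip_or_name set on every node. A hedged Python sketch, assuming the /api/cluster object shape; the cluster dictionary and all addresses below are placeholders, not values from this example:

```python
def with_public_names(cluster_obj, public_names):
    """Return a copy of a /api/cluster object with public_ip_or_name set
    on every node.

    The field must be set on all nodes or on none, so exactly one public
    name is required per node.
    """
    nodes = cluster_obj["nodes"]
    if len(public_names) != len(nodes):
        raise ValueError("provide exactly one public name per cluster node")
    return {
        **cluster_obj,
        "nodes": [
            {**node, "public_ip_or_name": name}
            for node, name in zip(nodes, public_names)
        ],
    }

# Placeholder cluster object, as GET https://<controller>/api/cluster
# might return it for a single-node cluster.
cluster = {
    "name": "cluster-0-1",
    "nodes": [{"name": "node1.controller.local",
               "ip": {"type": "V4", "addr": "10.0.0.10"}}],
}
payload = with_public_names(cluster, ["controller.example.com"])
# Applying the change would then be a PUT of `payload` back to
# https://<controller>/api/cluster (e.g., with an HTTP client library).
```

Building the new object from a fresh GET keeps fields such as uuid and vm_uuid intact, so the PUT changes nothing except the public names.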


Referring to the above example:

  • It is known that, from their network, the SEs cannot address (route to) the Controller's private management IP.
  • Administrative staff know something the Controller cannot possibly know, namely, that a NAT-enabled firewall is in place and programmed to translate the public address to the Controller's private management IP.
  • The string parameter public_ip_or_name in the object definition of the first (and only) node of the cluster is set to the public address. Thus, Controller “cluster-0-1” knows it must embed the public address, not the private one, into the SE image it creates for spawning SEs.
  • When an SE comes alive for the first time, it therefore addresses its parent Controller at the public IP address.
  • Completely transparently to that SE, and thanks to the firewall’s NAT’ing ability, that initial communication is forwarded to the Controller's private management IP.
  • Subsequent Controller-SE communications proceed as normal, as if the Controller and SEs were on the same network.
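The traffic flow described above amounts to a static destination NAT at the firewall. A toy illustration of the rewrite (the addresses are placeholders drawn from the documentation ranges, not values from this deployment):

```python
# Static destination-NAT table as the firewall might hold it:
# public (SE-facing) address -> private Controller management address.
DNAT = {"203.0.113.10": "10.0.0.10"}  # illustrative placeholder addresses

def rewrite_destination(dst_ip):
    """Rewrite the destination of an inbound packet, leaving any
    destination without a NAT entry untouched."""
    return DNAT.get(dst_ip, dst_ip)

# The SE dials the public address it found in its image; the firewall
# forwards the packet to the Controller's private address.
print(rewrite_destination("203.0.113.10"))  # 10.0.0.10
```

Because the rewrite happens entirely at the firewall, neither the SE nor the Controller needs any awareness of the other's network topology.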

Important Notes

  1. The public_ip_or_name field must be configured either on all nodes in the cluster or on none; it cannot be configured on only a subset of nodes.
  2. When this configuration is enabled, SEs from all clouds will always use the public_ip_or_name to attempt to talk to the Controller. It is not currently possible to have SEs from one cloud use the private network while SEs from another cloud use the NATed network.
  3. It is recommended to enable this feature during initial cluster configuration, before any SEs are created, and not to modify the setting while SEs exist.