Tunables for AKO
The values.yaml file in AKO populates a ConfigMap that AKO's deployment reads to adjust its behavior to user needs. This article lists all the fields specified in values.yaml and helps you make the right choices when deploying AKO with these configurable settings.
Note: If a field is marked as editable, it means the field can be edited without an AKO pod restart.
This field is used to set the frequency of consistency checks in AKO. Inconsistent states typically arise when users make changes out of band with respect to AKO, for example when a pool is deleted by the user from the UI of the Avi Controller. The full sync frequency is used to ensure that the models are reconciled and the corresponding Avi objects are restored to their original state.
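As an illustration, a values.yaml fragment for this field; the AKOSettings path and the 1800-second interval are assumptions based on standard AKO Helm charts:

```yaml
AKOSettings:
  # Full reconciliation interval in seconds (hypothetical value)
  fullSyncFrequency: "1800"
```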
This flag provides the ability to enable/disable Event broadcasting from AKO. The value specified here gets populated in the ConfigMap and can be edited at any time while AKO is running. AKO picks up the change in the param value and enables/disables Event broadcasting in the cluster at runtime, so AKO pod restart is not required.
This flag defines the log level and can be set to one of DEBUG, INFO, WARN, ERROR (case sensitive). The logLevel value specified here gets populated in the ConfigMap and can be edited at any time while AKO is running. AKO picks up the change in the param value and sets the logLevel at runtime, so an AKO pod restart is not required.
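For example, both of the runtime-editable flags above can be set together in values.yaml (field paths assumed from standard AKO Helm charts):

```yaml
AKOSettings:
  # Both values can be edited in the ConfigMap at runtime; no pod restart needed
  enableEvents: "true"
  logLevel: "INFO"
```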
This flag is intended to be used for deletion of objects in the Avi Controller. The default value is false. If the value is set to true while booting up, AKO will not process any Kubernetes objects and stops regular operations.
While AKO is running, this value can be edited to true in the AKO ConfigMap to delete all objects created by AKO in Avi. After that, if the value is set back to false, AKO resumes processing Kubernetes objects and recreates all the objects in Avi.
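As a sketch, the runtime flip described above amounts to changing one key in AKO's ConfigMap data; the key name deleteConfig matches the text, and quoting it as a string is an assumption based on standard AKO ConfigMaps:

```yaml
# ConfigMap data fragment: setting this to "true" deletes all AKO-created Avi objects
data:
  deleteConfig: "true"
```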
This flag can be used in two scenarios:
- If your POD CIDRs are routable either through an internal implementation or by default.
- If you are working with multiple NICs on your Kubernetes worker nodes and the default gateway is not from the same subnet as your VRF's PG network.
The clusterName field primarily identifies your running AKO instance. AKO uses it internally to tag all the objects it creates on the Avi Controller. All objects created by a particular AKO instance carry a prefix of <clusterName> in their names, and their created_by field is populated as ako-<clusterName>.
Each AKO instance mapped to a given Avi cloud should have a unique clusterName parameter. This maintains uniqueness of object naming across Kubernetes clusters.
apiServerPort field is used to run the API server within the AKO pod. The Kubernetes API server uses the /api/status API to verify the health of the AKO pod on the pod:port where the port is defined by this field. This is configurable, because some environments can block usage of the default 8080 port. This field is purely used for AKO’s internal API server and must not be confused with a Kubernetes pod port.
Use this flag only if you are using cilium as a CNI and you want to sync your static route configurations automatically.
For Cilium CNI, setting this flag is only required when using cluster scope mode for IPAM. With Cilium, there are two ways to configure the per-node PodCIDRs. In the default cluster scope mode, the PodCIDR ranges are made available via the CiliumNode (cilium.io/v2.CiliumNode) CRD, and AKO reads this CRD to determine the Pod CIDR to Node IP mappings when the flag is set to cilium. In Kubernetes host scope mode, PodCIDRs are allocated out of the PodCIDR range associated with each node by Kubernetes. Since AKO determines the Pod CIDR to Node IP mappings from the Node spec by default, the cniPlugin flag does not need to be set in that mode.
Once enabled for calico, this flag is used to read the blockaffinity CRD to determine the Pod CIDR to Node IP mappings. If you are on an older version of Calico where blockaffinity is not present, leave this field blank.
For OpenShift, the hostsubnet CRD is used to determine the Pod CIDR to Node IP mapping.
For ovn-kubernetes, the k8s.ovn.org/node-subnets annotation in the Node metadata is used to determine the Pod CIDR to Node IP mapping.
AKO determines the static routes based on the Kubernetes Nodes object as done with other CNIs.
In case of NCP CNI, AKO automatically disables the configuration of static routes.
There are certain scenarios where AKO cannot determine the Pod CIDRs being used by the Kubernetes nodes, for instance when deploying Calico with etcd as the datastore. In such cases, AKO provides its own interface to feed in Pod CIDR to Node mappings, using an annotation on the Node object. While keeping the cniPlugin value empty, add the annotation on the Node object to provide the Pod CIDRs being used on that node. For multiple Pod CIDRs on the same node, provide the entries as a comma-separated string.
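A sketch of such a Node annotation; the annotation key ako.vmware.com/pod-cidrs is an assumption based on recent AKO releases, so verify it against your AKO version's documentation:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  annotations:
    # Comma-separated Pod CIDRs in use on this node (annotation key assumed)
    ako.vmware.com/pod-cidrs: "10.244.1.0/24,10.244.2.0/24"
```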
Use this flag to make AKO act as a pure layer 7 Ingress controller. AKO has to be rebooted for a change to this flag to take effect; editing the ConfigMap while AKO is running has no effect. If AKO was handling both L4 and L7 before this flag is set to true, AKO deletes the layer 4 LB virtual services from the Avi Controller and keeps only the layer 7 virtual services. If the flag is set to false, Services of type LoadBalancer are synced and layer 4 virtual services are created.
This knob is used to specify the T1 logical router's name in the format /infra/tier-1s/<name-of-t1>. Preconfigure this T1 router with a logical segment in the NSX-T cloud as a data network segment. AKO uses this information to populate the T1Lr attribute of virtual services and pools.
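A values.yaml sketch for this knob (the NetworkSettings path is assumed from standard AKO Helm charts; the router name is hypothetical):

```yaml
NetworkSettings:
  # T1 logical router preconfigured with a data network segment in the NSX-T cloud
  nsxtT1LR: "/infra/tier-1s/avi-t1"
```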
Use this flag to create an Enhanced Virtual Hosting (EVH) model for virtual service objects in Avi. It is disabled by default; set the flag to true to enable it.
Before enabling the flag in an existing deployment, delete the configuration first and then enable the flag. This ensures SNI-based virtual services are deleted before the EVH virtual services are created.
AKO allows Ingresses/Routes from specific namespaces to be synced to the Avi Controller. This key-value pair represents a label that AKO uses to filter namespaces. If either the key or the value is empty, Ingresses/Routes from all namespaces are synced to the Avi Controller.
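A values.yaml sketch of such a filter; the namespaceSelector layout is assumed from standard AKO Helm charts, and the label key/value are hypothetical:

```yaml
AKOSettings:
  namespaceSelector:
    # Only namespaces labeled app=avi are synced to the Avi Controller
    labelKey: "app"
    labelValue: "avi"
```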
Use this flag to enable AKO to watch over Gateway API CRDs, i.e. GatewayClasses and Gateways. AKO only supports Gateway APIs with layer 4 Services. Setting this to true enables users to configure GatewayClass and Gateway CRDs to aggregate multiple layer 4 Services and create one virtual service per Gateway object.
blockedNamespaceList lists the Kubernetes/OpenShift namespaces blocked by AKO. AKO will not process any Kubernetes/OpenShift object updates from these namespaces. The default value is an empty list.
```yaml
blockedNamespaceList:
  - kube-system
  - kube-public
```
AKOSettings.istioEnabled (Tech Preview)
AKOSettings.ipFamily (Tech Preview)
IPv6 is currently supported only for vCenter cloud with calico CNI.
AKO can be deployed with ipFamily set to V4 or V6. When ipFamily is set to V6, AKO looks for the V6 IP of nodes in the Calico annotation and creates routes on the Controller. Only servers with a V6 IP are added to pools. This setting determines whether the backend pools use IPv6 or IPv4. For frontend virtual services, use the v6cidr setting in NetworkSettings.vipNetworkList.
AKOSettings.useDefaultSecretsOnly, if set to true, restricts secret handling to the default secrets present in the namespace where AKO is installed, in OpenShift clusters. By default, this flag is set to false.
nodeNetworkList lists the networks (specified using either networkName or networkUUID) and Node CIDRs where the Kubernetes nodes are created. This is only used in the ClusterIP deployment of AKO, in vCenter cloud, and only when disableStaticRouteSync is set to false.
If two Kubernetes clusters have overlapping Pod CIDRs, the Service Engine needs to identify the right gateway for each of the overlapping CIDR groups. This is achieved by specifying the right placement network for the pools that helps the Service Engine place the pools appropriately.
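A values.yaml sketch of nodeNetworkList (field layout assumed from standard AKO Helm charts; the port group name and CIDR are hypothetical):

```yaml
NetworkSettings:
  nodeNetworkList:
    - networkName: "node-pg"   # port group where the Kubernetes nodes live
      cidrs:
        - 10.0.0.0/24          # Node CIDR on that network
```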
NetworkSettings.subnetIP and NetworkSettings.subnetPrefix
AKO supports dual-arm deployment, where the Virtual IP network can be on a different subnet than the actual port groups on which the Kubernetes nodes are deployed.
These fields specify the Virtual IP network details on which the user wants to place the Avi virtual services.
The list of VIP networks can be specified through vipNetworkList with the key networkName or networkUUID. Except for AWS cloud, only one network is supported for all other cloud types. For example:

```yaml
vipNetworkList:
  - networkName: net1
```

or

```yaml
vipNetworkList:
  - networkUUID: dvportgroup-4167-cloud-d4b24fc7-a435-408d-af9f-150229a6fea6f
```
In addition to the networkName or networkUUID, CIDR information can be provided to specify the Virtual IP network details on which the user wants to place the Avi virtual services. For example:

```yaml
vipNetworkList:
  - networkName: net1
    cidr: 10.1.1.0/24
    v6cidr: 2002::1234:abcd:ffff:c0a8:101/64
```

or

```yaml
vipNetworkList:
  - networkUUID: dvportgroup-4167-cloud-d4b24fc7-a435-408d-af9f-150229a6fea6f
    cidr: 10.1.1.0/24
    v6cidr: 2002::1234:abcd:ffff:c0a8:101/64
```
v6cidr may only work with the Enterprise license on the Avi Controller. You can provide either cidr or v6cidr, or both. Specifying v6cidr enables the virtual service networks to use IPv6.
For all public clouds, vipNetworkList must have at least one networkName. For other cloud types too, it is suggested that networkName be specified in vipNetworkList. With Avi IPAM, if networkName is not specified in vipNetworkList, an IP can be allocated from the IPAM of the cloud.
In AWS cloud, multiple networkNames are supported in vipNetworkList.
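For instance, a sketch of an AWS vipNetworkList with multiple entries (the subnet IDs are hypothetical):

```yaml
vipNetworkList:
  - networkName: subnet-0123456789abcdef0
  - networkName: subnet-0fedcba9876543210
```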
This feature allows the Avi Service Engines to publish the VIP to SE interface IP mapping to the upstream BGP peers. Using BGP, a virtual service enabled for RHI can be placed on up to 64 SEs within the SE group. Each SE uses RHI to advertise a /32 host route to the virtual service's VIP address, and is able to accept the traffic. The upstream router uses ECMP to select a path to one of the SEs. Based on this update, the BGP peer connected to the Avi SE updates its route table to use the Avi SE as the next hop for reaching the VIP. The peer BGP router also advertises itself to its upstream BGP peers as a next hop for reaching the VIP. The BGP peer IP addresses, as well as the local Autonomous System (AS) number and a few other settings, are specified in a BGP profile on the Avi Controller.
This feature is available as a global setting in AKO, which means that if it is set to true, it applies to all virtual services created by AKO.
Since RHI is a layer 4 construct, the setting applies to all the host FQDNs patched as pools/SNI virtual services to the parent shared virtual service.
AKO uses a sharding logic for layer 7 Ingress objects. A sharded virtual service hosts multiple insecure or secure Ingresses behind one virtual IP (VIP). Sharing a virtual IP reduces IP usage, since reserving IP addresses, particularly in public clouds, incurs greater cost.
Currently, HTTP caching is not available on pool groups in the Avi Controller. AKO uses pool groups for canary-style deployments. If you do not require canary deployments and have an immediate requirement for HTTP caching, this flag can help. Use of this flag is highly discouraged unless required, as it will be deprecated once pool groups implement HTTP caching in the Avi Controller.
If this flag is set to true, AKO programs HTTP policy set rules to switch between pools instead of pool groups. This feature only applies to secure FQDNs.
This is applicable only in an OpenShift environment. AKO uses a sharding logic for passthrough routes; these are distinct from the shared virtual services used for layer 7 Ingress or Route objects. For all passthrough routes, a set of shared virtual services is created, and the number of such virtual services is controlled by this flag.
This field is related to the ingress class support in AKO specified via kubernetes.io/ingress.class annotation specified on an ingress object.
If AKO is set as the default Ingress controller, it will sync all Ingresses except those whose ingress class is specified and is not equal to avi. If Avi is not set as the default Ingress controller, AKO will sync only those Ingresses whose ingress class is set to avi. If you do not use ingress classes, do not modify this knob and AKO will sync all your Ingress objects to Avi.
If you have multiple sub-domains configured in your Avi cloud, use this knob to specify the default sub-domain, which is used to generate the FQDN for Services of type LoadBalancer. If unspecified, the behavior falls back to a sorting logic: the first sorted sub-domain is chosen. We therefore recommend using this parameter if you want to be in control of DNS resolution for Services of type LoadBalancer.
This knob controls how the FQDN for a layer 4 Service of type LoadBalancer is generated. AKO supports three options:
- Default: the FQDN is composed of the Service name, the Service's namespace, and a sub-domain picked up from the IPAM/DNS profile.
- Flat: the FQDN is generated in a flat format, with the sub-domain picked up from the IPAM/DNS profile.
- disabled: FQDNs are not generated for Services of type LoadBalancer.
This field is used to specify the Avi Controller version. While AKO is backward compatible with most 18.2.x Avi Controllers, the tested and preferred Controller version is 18.2.10.
This field is usually not present in values.yaml by default, but can be provided with the helm install command to specify the Avi Controller's IP address or hostname. If you are using a containerized deployment of the Controller, use a fully qualified Controller IP address/FQDN. For example, if the Controller is hosted on port 8443, the Controller host should be of the format x.x.x.x:8443.
This field is used to specify the name of the IaaS cloud in the Avi Controller. For example, if you have a vCenter cloud named Demo, specify that name with this field. This helps AKO determine the IaaS cloud in which to create the service engines.
If this field is set to true, AKO will map each Kubernetes/OpenShift cluster uniquely to a tenant in Avi. If enabled, the tenant should be created in Avi to map to a cluster and needs to be specified in the tenantName field.
The tenantName field is used to specify the name of the tenant in Avi where all the AKO objects will be created. This field is only required if tenantsPerCluster is set to true. The tenant in Avi needs to be created by the Avi Controller admin before AKO boots up.
AWS and Azure Cloud in NodePort mode of AKO
If the IaaS cloud is Azure, the subnet name is specified in vipNetworkList. If the IaaS cloud is AWS, the subnet UUID is specified as networkName within vipNetworkList. Both Azure and AWS IaaS clouds are supported only in the NodePort mode of AKO. The subnetIP and subnetPrefix are not required for AWS and Azure clouds.
avicredentials.username and avicredentials.password
The username and password of the Avi Controller are specified with these fields. They are base64-encoded by Helm, and a corresponding Secret object is used to maintain them. Editing these fields requires a restart (delete/re-create) of the AKO pod.
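A values.yaml sketch (the credential values are placeholders; Helm base64-encodes them into a Secret):

```yaml
avicredentials:
  username: "admin"
  password: "<avi-controller-password>"
```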
A generated authtoken from the Avi Controller can be specified with this flag as an alternative to the password. The authtoken is also base64-encoded and updated regularly in a Kubernetes Secret object; the token refresh is managed by AKO. If the token refresh fails, a new token needs to be generated and updated in the Secret object.
This field allows setting the rootCA of the Avi Controller, that AKO uses to verify the server certificate provided by the Avi Controller during the TLS handshake. This also enables AKO to connect securely over SSL with the Avi Controller, which is not possible in case the field is not provided. The field can be set as follows:
```yaml
certificateAuthorityData: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
```
This option specifies the number of replicas of the AKO pod.
Note: Starting with version 1.9.1, two instances of AKO are supported.
The two AKO instances run in active and passive modes, respectively. If the active AKO goes down, the passive AKO is ready to take over.
If you are using a private container registry and you’d like to override the default Docker Hub settings, then this field can be edited with the private registry name.
This option specifies whether AKO functions in ClusterIP mode or NodePort mode. By default it is set to ClusterIP. Allowed values are ClusterIP and NodePort. If the CNI type for the cluster is antrea, another value, NodePortLocal, is allowed.
nodeSelectorLabels.key and nodeSelectorLabels.value
It might not be desirable to have all the nodes of a Kubernetes cluster participate as server pool members, so this key/value pair is used as a label-based selection on the nodes that participate in NodePort mode. If the key/value is not specified, all nodes are selected.
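A values.yaml sketch of the node selector (the label key and value are hypothetical):

```yaml
nodeSelectorLabels:
  key: "ako-node"     # only nodes labeled ako-node=enabled become pool members
  value: "enabled"
```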
By default, AKO prints all logs to standard output. Alternatively, a PersistentVolumeClaim (PVC) can be used to publish logs of the AKO pod to a file in the PVC. To use this, create a PVC (and a persistent volume, if required) and specify the name of the PVC as the value of this field.
SecurityContext holds the security configuration that is applied to the AKO pod. Some fields are present in both SecurityContext and PodSecurityContext; when both are set, the values in SecurityContext take precedence. Refer to SecurityContext v1 core.
This can be used to set the securityContext of the AKO pod, if necessary. For example, in an OpenShift environment, if persistent storage with hostPath is used for logging, then the securityContext must have the field privileged set to true. Refer to Persistent storage using hostPath.
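For the hostPath logging case described above, a minimal values.yaml sketch:

```yaml
securityContext:
  # Required on OpenShift when AKO logs to a hostPath-backed persistent volume
  privileged: true
```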
Document Revision History
| Date | Change |
|------|--------|
| August 10, 2023 | Updated the configuration for NetworkSettings.nodeNetworkList and NetworkSettings.vipNetworkList for AKO version 1.10.3 |
| June 16, 2023 | Updated the configuration for AKOSettings.cniPlugin for AKO version 1.10.1 |
| January 30, 2023 | Added the securityContext and replicaCount sections for AKO version 1.9.1 |
| September 26, 2022 | Published the configuration for AKOSettings.blockedNamespaceList for AKO version 1.8.1 |
| December 24, 2021 | Published the configuration for AKOSettings.enableEvents for AKO version 1.6.1 |
| August 31, 2021 | Published the configuration for NetworkSettings.nsxtT1LR, AKOSettings.enableEVH, and avicredentials.authtoken for AKO version 1.5.1 |
| April 28, 2021 | Published the Tunables for AKO version 1.4.1 |