Avi Vantage 18.2.X Release Notes
Issues Resolved in 18.2.13 Patch Releases
Issues Resolved in 18.2.13-2p2
- AV-132431: Mitigation for CVE-2021-44228. More details: https://ikb.vmware.com/s/article/87087
- AV-120361: Connection to pool server is not using the updated key/certificate in the SSLKeyAndCertificate object assigned to the pool.
- AV-117967: Static route is prioritised over connected route which can lead to incorrect routing of packets in write access environments.
- AV-115797: SE_DOWN event is not displayed under Operations -> Events -> All Events and user login events are not displayed in the Config Audit Trail.
- AV-100868: Enhanced search filter for vs-vip inventory API.
- AV-85747: In the vCenter read/write access cloud, HTTP health monitors may stop working after VM notification updates are processed from vCenter
Issues Resolved in 18.2.13-2p1
- AV-112420: OpenShift: Routes unable to sync with Avi because of incorrect cross-cloud reference for network
- AV-125824: If a bond exists on the management interface NICs (>=10G), it can be broken while stopping / restarting / upgrading the Service Engines in LSC deployments
Issues Resolved in 18.2.13
Release Date: 30 August 2021
- AV-87320: In a Terraform plan with nested blocks, the Avi Terraform provider sets default values for optional fields which were not defined in the plan
- AV-93678: SE failure may occur when a FIX library with an incorrect tag is present in the Tag group
- AV-103251: If the Controller loses connectivity to an OpenShift/Kubernetes cluster, it may delete the virtual services created for the OpenShift Routes or Kubernetes Ingresses.
- AV-106423: After a break in Controller-SE connectivity, when the SE re-registers, adding an IP on vNICs fails with the error Unable to acquire IP address
- AV-108222: Linux Server Cloud: Upgrade may fail on SEs originally deployed in 17.2.x, when docker run variables are located in /etc/sysconfig/avise instead of /usr/sbin/avise
- AV-108224: Editing an IP address group which is used in the SSH access list displays the error Not allowed to remove controller ips when associated with ssh access list
- AV-108624: Virtual services in the OpenStack cloud may go down if Keystone returns 404 Not Found for a tenant
- AV-109476: Increased memory consumption on the Controller after show tech support has been run
- AV-109728: When the server sends a non-compliant response without a status line, the client does not detect the content/payload
- AV-109956: OpenShift: Upgrade stuck due to SE image generation check failure
- AV-110129: DNS: Unable to add a leading underscore in the FQDN for static records via the Avi UI
- AV-111295: Incorrect accounting of closed connections when using the TCP Half-Open health monitor causes reporting of high/incorrect dropped connection metrics
- AV-112248: SE.pkg is not signed with the correct Secure Channel certificate when upgraded from a version earlier than 18.2.6 (V1 to V2 upgrade)
- AV-114426: Adding a policy match rule for the Cache-Control or Pragma header might result in a Service Engine failure
- AV-114653: Service Engine fails when attempting to reuse a connection to the LDAP server that has already been closed
- AV-116382: OpenShift cloud fails to sync a few Egress services and fails to create Egress pods
- AV-116423: GSLB service pool might refer to deleted virtual services when using GSLB with the OpenShift cloud connector
- AV-116974: SE may fail due to invalid memory access in local port processing
- AV-117720: App Cookie persistence fails when used in combination with the avi.http.remove_header("Set-Cookie") and avi.http.add_header("Set-Cookie") DataScript APIs, if the app cookie persistence and the DataScript are on the same virtual service (see the DataScript sketch after this list)
- AV-118134: When a virtual service is configured with use_vip_as_snat or is effectively using the VIP IP as SNAT, consecutive migrations to the same SE may render the virtual service with that VIP inoperative
- AV-118264: SE fails if the NAT policy is configured with a source/destination port match and a routable ICMP packet to the external world lands on the SE
- AV-119921: In a persistence profile, the ip_mask behaves as an inverse CIDR mask and distributes the clients across servers instead of ensuring the clients in the same subnet are connected to the same servers
- AV-119952: Fetching GSLB status leads to high CPU usage of DBcache, which may affect the inventory processing time
- AV-120542: Virtual service traffic capture on a system with GRO or TSO enabled might lead to SE failure
- AV-121120: ARP resolution was failing in OpenShift deployments of an Avi SE with more than 512 interfaces (via proxy ARP)
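The following is a minimal DataScript sketch of the pattern referenced in AV-117720: rewriting the Set-Cookie response header with avi.http.remove_header and avi.http.add_header on a virtual service that also uses App Cookie persistence. The HTTP_RESP event binding and the appended cookie attributes are illustrative assumptions, not taken from the fix itself.

```lua
-- Illustrative HTTP_RESP event DataScript (assumed configuration, not from the fix):
-- rewrite the Set-Cookie header on the response path.
local cookie = avi.http.get_header("Set-Cookie")
if cookie ~= nil and type(cookie) == "string" then
   -- Remove the original header and re-add it with extra attributes (placeholder values).
   avi.http.remove_header("Set-Cookie")
   avi.http.add_header("Set-Cookie", cookie .. "; Secure; HttpOnly")
end
```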
Key Change in 18.2.13
- SSL Secure renegotiation is disabled on Avi. Avi responds with a ‘no_renegotiation’ alert to clients attempting to initiate a secure renegotiation. In cases like TLS persistence, Avi can still initiate secure renegotiation with the client.
Issues Resolved in 18.2.12 Patch Releases
Issue Resolved in 18.2.12-2p3
- AV-117720: App Cookie persistence fails when used in combination with the avi.http.remove_header("Set-Cookie") and avi.http.add_header("Set-Cookie") DataScript APIs.
Issues Resolved in 18.2.12-2p2
- AV-116423: GSLB service pool might refer to deleted virtual services, when using GSLB with OpenShift cloud connector
- AV-110129: DNS: Unable to add leading underscore in FQDN for static records via Avi UI
Issue Resolved in 18.2.12-2p1
- AV-102065: Starting with Avi Vantage version 18.2.6, SSL secure renegotiation was inadvertently enabled on Avi. This allowed clients to perform secure renegotiation, which is not intended and should not be allowed by Avi. No workarounds are available to prevent the client from initiating secure renegotiation.
What’s New in 18.2.12
Release date: 04 March 2021
EDNS
Issues Resolved in 18.2.12
- AV-63931: If multiple LDAP servers are configured in the Auth profile and the first server times out, the request is closed out, instead of trying other servers configured
- AV-97092: ARP cache entry is not cleared for deleted servers, which may cause the SE to send packets to old MAC address
- AV-98649: If a virtual service VIP is shared by some virtual services that are enabled and some virtual services that are disabled, `auto_rebalance` does not work
- AV-98903: The warning message Service Time-out is displayed when the WAF tab is clicked from the virtual service
- AV-98938: If upgrade fails and aborts, in certain cases, the rollback operation may not complete
- AV-99106: The Service Engine may fail to get configuration updates from the Controller due to error in the GRPC channel
- AV-99140: GCP: Static routes are deleted after upgrade
- AV-99143: A failure in deleting the tenant in the Controller can cause the entry for this tenant in the OpenShift agent tenant cache to become stale. Due to this, any future reference to that tenant name uses the stale entry, which returns the stale UUID.
- AV-100005: Some virtual services on the OpenShift Cloud may get deleted if the connectivity to the OpenShift cluster is lost
- AV-100534: PUT operation fails on secure channel certificate object even if the key and certificate values are not modified
- AV-100699: Disabling the option ip6_autocfg_enabled in the controller CLI does remove the auto-configured address from the SE. However, this does not persist through reboot/upgrade. As soon as you reboot the SE, the auto-configured address returns.
- AV-100892: When a VIP is used as SNAT for a virtual service in a legacy active standby SE group, after a primary switchover, the health monitor stops working
- AV-101200: The virtual service throughput may be rate limited when many scale-in/scale-out are done at a high traffic load
- AV-102137: Under low memory conditions, memory allocation failures can cause a service engine failure in HTTP-to-HTTPS redirect scenarios
- AV-102571: Port allocation overlap between the data connection and the HM connection can cause connection errors
- AV-102886: A new ingress/route created with a valid vsvip_ref avi_proxy annotation creates a new dedicated virtual service VIP, even though the newly created virtual service and ingress/route refer to the existing virtual service VIP referenced in the vsvip_ref
- AV-102892: In a No-Orchestrator deployment, a virtual service using a VLAN interface goes into fault state with the reason Failed to add virtual service to the interface
- AV-102957: Email messages configured to be sent as part of an Alert action may error out and remain unsent
- AV-103177: The iptables rules are not programmed in the LSC PCAP when bonds are present, thus affecting the backend traffic
- AV-103456: When the Controller is running as a docker container, the memory balancer uses the total host memory for memory balancing instead of the memory allocated to the container
- AV-103912: Service Engine self election does not work properly when Infoblox IPAM is configured in a no access, vCenter, or Linux Server Cloud.
- AV-104285: While generating a self-signed RSA certificate on a SafeNet appliance with FIPS mode enabled, the error Error when generating SafeNet HSM bound RSA key is displayed.
- AV-104837: Health monitor response is not parsed correctly if the Content-Length header is not present
- AV-106057: If a pool is configured with a file larger than 16K to be sent as the local response when the pool is down, the response received by the client is partial
- AV-106169: Port-channel initialisation might fail in service engine running on CSP
- AV-106362: Updating a DNS policy with site selection having a fall back site, may result in SE failure
- AV-107313: SE may fail due to incorrect route label reference, when BGP is configured
Key Change in 18.2.12
- AV-102604: Prior to Avi Vantage version 18.2.12, history of the security logs showed fixes done in the last two years. Starting with Avi Vantage version 18.2.12, the security logs display the last security fix done, regardless of the time limit.
Issues Resolved in 18.2.11 Patch Releases
Issue Resolved in 18.2.11-3p3
- AV-101735: Chunked responses from the server may not be complete when server response does not have Content-Length or Transfer-Encoding header.
Issues Resolved in 18.2.11-3p2
- AV-103912: Service Engine self election does not work properly when Infoblox IPAM is configured in no access, vCenter, or Linux Server Cloud.
Issue Resolved in 18.2.11-3p1
- AV-98925: With Avi Vantage version 18.2.6 or higher, the RSA-PSS signature algorithms take precedence by default in Avi SE and that may force compatibility issues with older SSL stacks that don’t support these algorithms
Issues Resolved in 18.2.11-2p12
- AV-133390: Upgrade on a Docker-based Controller fails with the error Image for <custom-repo>:<image-version> not found if a custom repo tag is used in the avicontroller service file
- AV-125824: If a bond exists on the management interface NICs (>=10G), it can be broken while stopping/restarting/upgrading the Service Engines in LSC deployments
Issue Resolved in 18.2.11-2p11
- AV-120361: Connection to pool server is not using the updated key/certificate in the SSLKeyAndCertificate object assigned to the pool.
Issue Resolved in 18.2.11-2p10
- AV-117967: Static route is prioritised over connected route which can lead to incorrect routing of packets in write access environments
Issues Resolved in 18.2.11-2p9
- AV-116382: OpenShift cloud fails to sync a few Egress services and fails to create Egress pods
- AV-115797: SE down events are not displayed in the Events tab, and user login events are not displayed in the Config Audit Trail.
Issue Resolved in 18.2.11-2p8
- AV-112420: OpenShift: Routes unable to sync with Avi because of incorrect cross-cloud reference for network
Issue Resolved in 18.2.11-2p7
- AV-112420: OpenShift: Routes unable to sync with Avi because of cross-cloud reference for network
Key Changes in 18.2.11-2p5
- AV-102065: SSL Secure renegotiation is disabled on Avi Vantage. Avi Vantage responds with a no_renegotiation alert to clients attempting to initiate a secure renegotiation. In cases like TLS persistence, Avi can still initiate secure renegotiation with the client.
Issues Resolved in 18.2.11-2p6
- AV-108222: Linux Server Cloud: Upgrade may fail on SEs originally deployed in 17.2.x, when docker run variables are located in /etc/sysconfig/avise instead of /usr/sbin/avise
- AV-101214: Symptoms: When auto-rebalance is enabled, SE upgrade can fail due to SE scale in/SE scale out RPCs to Resource Monitor timing out. Similarly, SE disable can fail or be stuck in the disabling state due to Resource Monitor not picking up the request.
- AV-100868: The vs-vip-inventory API with search parameters returns data based on .contains matching instead of .startswith matching
Issue Resolved in 18.2.11-2p4
- AV-108224: When editing an IP address group that is used in the SSH access list, the error message Not allowed to remove controller ips when associated with ssh access list is displayed.
Issues Resolved in 18.2.11-2p4
- AV-109956: OpenShift: Upgrade stuck due to SE image generation check failure.
- AV-109728: When the server sends a non-compliant response without a status line, the client cannot detect the content/payload.
- AV-108224: Not allowed to edit an IP address group if it is used in the SSH access list
- AV-103251: If the Controller loses connectivity to an OpenShift/Kubernetes cluster, it may delete the virtual services created for the OpenShift Routes or Kubernetes Ingresses.
Issues Resolved in 18.2.11-2p3
- AV-100005: Some virtual services on the OpenShift Cloud may get deleted if the connectivity to the OpenShift cluster is lost
- AV-99143: A failure in deleting the tenant in the Controller can cause the entry for this tenant in the OpenShift agent tenant cache to become stale. Due to this, any future reference to that tenant name uses the stale entry, which returns the stale UUID.
Issues Resolved in 18.2.11-2p2
- AV-63931: If multiple LDAP servers are configured in the Auth profile and the first server times out, the request is closed out, instead of trying other servers configured
- AV-85747: In the vCenter read/write access cloud, HTTP health monitors may stop working after the VM notification updates are processed from vCenter
- AV-100534: API: PUT request to modify a secure channel certificate fails
- AV-100699: Disabling the cloud configuration ip6_autocfg_enabled in the Controller CLI removes the auto-configured address from the SE. However, this may not take effect after reboots/upgrades.
- AV-101200: The virtual service throughput may be rate limited when many scale-in/scale-out operations are done at a high traffic load
Issue Resolved in 18.2.11-2p1
- AV-99036: BFD over IPv6 does not work
Issues Resolved in 18.2.11
Release date: 03 November 2020
- AV-72536: Unauthenticated GET requests create sessions in the postgres database. A high number of such session entries cause the application to become unresponsive
- AV-79236: Intermittent 400 bad request errors displayed when the Avi SE and client/server pod are on the same OpenShift node
- AV-80196: SE failure when passing avi.http_response as the second argument to avi.http.get_cookie() when it is used in a request header script (see the DataScript sketch after this list)
- AV-85198: Rapid (>40 reqs / min) PATCH API calls to modify pool objects could result in 504 errors.
- AV-85558: Sessions (any API/ UI call before login, or which redirect to login) with unauthenticated requests are not cleaned up, causing session buildup.
- AV-88370: Enabling traffic capture for a virtual service may result in high memory usage on the Controller due to sshfs process retaining memory
- AV-89227: Requests result in a SAML authentication loop
- AV-89906: SE failure can happen when accessing an invalid connection entry in UDP fast path packet processing
- AV-90063: Service Engine could exhibit heartbeat failure messages and reboot, due to Service Engine-to-Controller communication involving a large number of application request log files being transferred.
- AV-90603: Infoblox: The Usable Subnet field on the Avi UI may not get populated when large number of subnets are configured in Infoblox
- AV-92028: Unable to log in to the Avi Controller when using SAML authentication
- AV-93539: Geolocation entries are missing on the SE where the DNS virtual services for a site are placed, after either of the following triggers:
- SNAT configuration on the DNS virtual service
- Disable/enable of the DNS virtual service
- AV-93714: In geo-DB files, consecutive creation or deletion operations cause inconsistencies like:
- The geo-DB files do not get downloaded to the SE
- The geo-DB files may not get replicated to the followers from the leader
- AV-93792: The rate limit configured for a virtual service using connections_rate_limit is not honored
- AV-93954: A Service Engine can fail when a virtual service has traffic consisting of file uploads, with large header files and when all the pool members are down
- AV-94045: Upgrade from Avi Vantage versions 18.2.6 - 18.2.10 to version 18.2.10 and higher via the application UI is not available
- AV-96347: The metric fields reqs_finished_sessions, finished_sessions, and concurrent_sessions return the value 0 in the SE metric stats message
- AV-96827: Virtual service reports a 503 Gateway error when the server closes the connection before all the data is sent to the client
- AV-96887: Static routes on the dedicated management interface are lost when SE restarts
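As context for AV-80196 above, the sketch below shows plain request-side usage of avi.http.get_cookie; the failure occurred only when avi.http_response was passed as a second argument from a request header script. The cookie name and log message are placeholders, not part of the fix.

```lua
-- Illustrative HTTP_REQ event DataScript: read a request cookie by name.
-- Passing avi.http_response as a second argument from a request-side script was
-- the trigger for AV-80196 and should be avoided on unfixed builds.
local session = avi.http.get_cookie("JSESSIONID")   -- placeholder cookie name
if session ~= nil then
   avi.vs.log("session cookie present")
end
```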
Key Change in 18.2.11
- AV-84044: Future-dated subscription licenses cannot be issued anymore. All subscription serial keys are valid from the time of issue.
Issues Resolved in 18.2.10 Patch Releases
Issue Resolved in 18.2.10-3p1
- AV-96317: The Azure cloud goes down if there is an error with the Azure Marketplace API and the outage continues
Issue Resolved in 18.2.10-2p8
- AV-110460: Remote users getting logged out during automation
Key Changes in 18.2.10-2p7
- AV-102065: SSL Secure renegotiation is disabled on Avi Vantage. Avi Vantage responds with a no_renegotiation alert to clients attempting to initiate a secure renegotiation. In cases like TLS persistence, Avi can still initiate secure renegotiation with the client.
Issue Resolved in 18.2.10-2p6
- AV-108624: Virtual services in the OpenStack cloud may go down if Keystone returns 404 Not Found for a tenant.
Issue Resolved in 18.2.10-2p5
- AV-107313: The SE might fail due to incorrect route label reference
Issues Resolved in 18.2.10-2p4
- AV-103177: The iptables rules are not programmed in the LSC PCAP when bonds are present, thus affecting the backend traffic.
Issues Resolved in 18.2.10-2p3
- AV-98649: Auto rebalance will not work if an SE group has one or more disabled VS(s) and one or more enabled VS(s) which point to the same virtual service IP
- AV-99906: If the ResourceMonitor-worker process is restarted/killed, the main process does not handle it and the working model breaks
- AV-101214: When auto rebalance is enabled, the SE upgrade can fail due to SeScaleIn/SeScaleOut RPCs to the Resource Monitor timing out. Similarly, SE disable can fail or be stuck in the disabling state since the Resource Monitor is not picking up the request.
Issues Resolved in 18.2.10-2p2
- AV-93954: A Service Engine can fail when a virtual service has traffic consisting of file uploads, with large header files and when all the pool members are down
- AV-96827: The virtual service reports 503 Gateway error when server closes the connection before all the data is sent to client.
Issues Resolved in 18.2.10-2p1
- AV-89227: Requests resulting in a SAML authentication loop
- AV-93792: The rate limit configured for the virtual service connection rate limiter is not honoured.
What’s New in 18.2.10
Release date: 31 August 2020
Key Changes in 18.2.10
- GSLB: Config messages are no longer prioritized over health status messages while sending APIs to the follower site.
- If the payload of an event (event details) is more than 128KB, the event details are discarded. ControlScripts will be executed but will not have access to the event details.
- DataScript rate limiters with no name will be rejected
- HTTP/2 can now be enabled under virtual service and pool/ pool group configuration. The option Enable HTTP2 is no longer available in the Application Profile configuration.
Issues Resolved in 18.2.10
- AV-73155: OpenStack: Scale in does not happen for SE during migration
- AV-78741: Content-Type header cannot be removed or replaced through the HTTP response policy
- AV-79847: The health score under the Health tab is marked as NA
- AV-79912: When specifying a port range, the DataScript function avi.vs.port returns the first port in the range specified (see the DataScript sketch after this list)
- AV-80184: ControlScripts fail to run as an event action when the event payload is greater than 128 KB
- AV-83223: Service Engine with caching enabled and high memory utilization can fail while parsing server response
- AV-85395: When the client sends RST before a three-way handshake, dropped connections are high due to reporting issue
- AV-85680: Service Engine failure due to high memory utilization and inability to free memory
- AV-85799: Service Engine failure when child SNI Virtual Service is deleted while the virtual service is processing connections.
- AV-85800: Service Engine failure when the avi.http.remove_cookie() or avi.http.replace_cookie() DataScript functions are used with large cookies, or cookies without spaces
- AV-86466: Service Engine failure due to missed heartbeat when WAF is enabled on the virtual service, the maximum client request size is set to 32 MB, and the client uploads big file
- AV-86540: Linux Server Cloud: SE initialisation fails if the datapath interfaces are not released back to Linux successfully when SE is restarted
- AV-86859: In OpenShift/Kubernetes based deployments, if the route to bravi on the host gets removed inadvertently, the subsequent creation of service engine does not create the route entry
- AV-86871: Upgrade from Avi Vantage version 17.2.x to 18.2.x or higher can result in the metrics manager using a lot of memory after upgrade (more than 50,000 backend servers). This can happen at a lower scale if the pools are shared across many virtual services.
- AV-86953: IPv6 GeoDB may contain duplicate entries depending on the order of the DB entry creation
- AV-86955: The following DNS policies do not work:
- Match client location (use_edns_client_subnet_ip enabled) does not work for DNS requests with no ECS
- Match client location (use_edns_client_subnet_ip not enabled) does not work
- Match client IP (use_edns_client_subnet_ip enabled) does not work for DNS requests with ECS
- AV-87502: Service Engine failure when the Auth Profile is disabled in the virtual service configuration while the virtual service is still processing HTTP traffic.
- AV-87593: Change in MTU of bond interface could trigger a race condition where the interface is marked faulty
- AV-87605: Intermittent Service Engine failure while removing pool configuration from Virtual Service
- AV-87886: If the Avi cloud managing a Kubernetes cluster has a cluster tag, then changes in pod, endpoint or service for an ingress backend does not update the corresponding Avi objects
- AV-88094: Azure: Service Engine failure when NIC flaps
- AV-88149: OpenShift on Azure: Cloud connector fails to allocate IP for egress on Azure due to repeated allocation and de-allocation of egress IPs
- AV-88267: Requests sent to virtual services with incorrect DataScripts in the LB Done event get a 200 OK response instead of a server error
- AV-88795: SE Group or SE upgrade initiated when the Controller is upgraded at the system level in case of software or patch update
- AV-89578: Service Engine may fail during upgrade when a rate limiter is configured
- AV-89581: The message Unhandled error in Deferred is displayed on the terminal after upgrade
- AV-89946: HTTP Policy port match always matches to the first port in port range instead of the service port the request arrived on
- AV-90045: GSLB service replication fails on the follower site if it has a local virtual service with the same FQDN
- AV-90340: Service engine upgrade fails in Nutanix AHV environment
- AV-91369: SE failure when the cookie being encrypted is larger than 4 KB
- AV-91399: Upgrade failed when migrating large metrics DB
- AV-91550: DataScript rate limiters with no name cause the virtual service to fail
- AV-91907: Under low cache memory conditions, cache allocations might fail resulting in SE failure and/or memory leak of cache memory.
- AV-92575: A valid Avi user with write access to the Avi DataScript role may be able to gain read/write access to the Controller file system
- AV-93265: A valid Avi user with write access to the Avi DataScript role will be able to execute system commands via the Lua system functions.
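For AV-79912 above, the sketch below shows the usual way avi.vs.port is consumed in a DataScript; before the fix, a virtual service configured with a port range would see the first port of the range instead of the actual listening port. The port value and log message are illustrative assumptions.

```lua
-- Illustrative HTTP_REQ event DataScript: branch on the service port the request arrived on.
local port = avi.vs.port()
if port == 8080 then                        -- placeholder port value
   avi.vs.log("request received on port " .. port)
end
```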
Known Issues in 18.2.10
- AV-92284: AWS: On rolling back from Avi Vantage version 20.x.x to any 18.2.x release, new SE creation may fail with the error "Volume of size 10GB is smaller than snapshot 'snap-0cf806e71417760f0', expect size >= 15GB". Remove the vmdk on the Controller and the AMI in the cloud. Discovery will trigger a new AMI registration which will be used for subsequent SE creations.
- AV-94045: Upgrade from Avi Vantage versions 18.2.6 - 18.2.10 to version 18.2.10+ via the GUI is not available. Workaround: Use the Avi CLI to upgrade.
- AV-99366: During upgrade from Avi Vantage version 18.2.10, the SYSERR_MC_SYSTEM_CONFIGURATION_ERR message may be incorrectly displayed if the available system resources (disk, memory, and cores) are near the minimum expected threshold. For example:
Node 10.xx.xxx.xxx has Disk: 126GB Memory: 24GB Cores: 8 Expected are Disk: 128GB Memory: 24GB Cores: 8
Node 10.xx.xxx.xxx in Default-Group in Default-Cloud under tenant admin has Disk: 15GB Memory: 2GB Cores: 1 Expected are Disk: 16GB Memory: 2GB Cores: 1
Workaround: This is a warning message to alert you about insufficient system resources. It will not cause the upgrade process to fail.
Issues Resolved in 18.2.9 Patch Releases
Issue Resolved in 18.2.9-4p1
- AV-90045: GSLB service replication fails on the follower site if it has a local virtual service with the same FQDN
Issue Resolved in 18.2.9-2p20
- AV-132924: GeoDB IP-to-country-code mapping is stale. Workaround: Fix the needed subnets in L7 DataScripts.
Issue Resolved in 18.2.9-2p19
- AV-125824: If a bond exists on the management interface NICs (>=10G), it can be broken while stopping / restarting / upgrading the Service Engines in LSC deployments.
Issue Resolved in 18.2.9-2p18
- AV-120542: Virtual service traffic capture on a system with GRO or TSO enabled might lead to SE failure.
Issue Resolved in 18.2.9-2p17
- AV-116738: Due to miscalculation of Docker's memory usage, the memory balancer does not get triggered.
Issues Resolved in 18.2.9-2p16
- AV-119496: All virtual services in an OpenShift/Kubernetes cloud are deleted even when all objects are still present in the Kubernetes cluster.
- AV-116974: SE may fail due to invalid memory access in local port processing.
Issue Resolved in 18.2.9-2p15
- AV-103251: If the Avi Controller loses connectivity to an OpenShift/Kubernetes cluster, it may delete the virtual services created for the OpenShift Routes or Kubernetes Ingresses.
Issues Resolved in 18.2.9-2p14
- AV-106362: Updating a DNS policy with site selection having a fall back site, may result in SE failure
- AV-103456: When the Controller is running as a docker container, the memory balancer uses the total host memory for memory balancing instead of the memory allocated to the container
Issue Resolved in 18.2.9-2p13
- AV-102571: Port allocation overlap between the data connection and the HM connection can cause connection errors.
Issue Resolved in 18.2.9-2p12
- AV-99106: The stream read() exits even though there are no failures in the underlying channel, and the corresponding close blocks forever. This results in an SE failure from a configuration point of view, as the SE does not receive any notifications. The Controller is unaware of the failure because keepalives in the gRPC library keep the stream alive.
Issues Resolved in 18.2.9-2p11
- AV-87886: If an Avi cloud managing a Kubernetes cluster has a cluster tag, then changes in pod, endpoint or service for an ingress backend does not update the corresponding Avi Objects
- AV-100892: When a VIP is used as SNAT for a virtual service in a legacy active standby SE group, after a primary switchover, health monitor stops working
- AV-102886: A new ingress/route created with a valid vsvip_ref avi_proxy annotation creates a new dedicated virtual service VIP, even though the newly created virtual service and ingress/route refer to the existing virtual service VIP referenced in the vsvip_ref.
- AV-101950: Service Engines do not upgrade in parallel with Avi Vantage version 18.2.9 even if the Disruptive option is used.
Issues Resolved in 18.2.9-2p10
- AV-99140: Static routes are removed from the SEs after reboot
- AV-99353: High rate of logging can cause contention on debug rings that may delay packet processing
Key Changes in 18.2.9-2p10
- AV-97104: RSS support in DPDK mode for GCE environment
Issues Resolved in 18.2.9-2p9
- AV-94053: Service Engines are not getting deleted from Avi Vantage if they are deleted out-of-band from GCP
- AV-94214: Buckets are not deleted from GCS if the bucket cleanup fails after image creation
- AV-98667: GCP cloud reconcile deletes routes for all virtual services, if a virtual service is disabled in route-aggregation mode
Issue Resolved in 18.2.9-2p8
- AV-96887: Static routes on the dedicated management interface are lost when SE restarts
Issues Resolved in 18.2.9-2p7
- AV-90603: Infoblox: The Usable Subnet field on the Avi UI may not get populated when large number of subnets are configured in Infoblox
- AV-95233: Resume will trigger back to back SE Group Upgrade RPC
Issues Resolved in 18.2.9-2p6
- AV-89653: In a large scale GSLB configuration, the DNS virtual service is not pushed to the SE on the follower site after a cluster restart on the leader site.
- AV-89906: SE failure can happen when accessing an invalid connection entry in UDP fast path packet processing
- AV-93792: The rate limit configured for the virtual service connection rate limiter is not honoured.
- AV-93954: se_dp failed if the virtual service received a POST request with a large header during a server down event.
Issues Resolved in 18.2.9-2p5
- AV-85198: Rapid PATCH calls to the same pool object results in error 504
- AV-85558: Sessions that have unauthenticated requests are not cleaned up, and hence causing session buildup
- AV-87593: Change in MTU of bond interface could trigger a race condition where the interface is marked faulty
- AV-91384: SE Packet processing can be delayed due to rx/tx queue processing delays
- AV-91907: Under low cache memory conditions, cache allocations might fail and result in SE failure and/or memory leak of cache memory
- AV-92165: The DataScript API avi.http.close_conn() cannot be used in the RESP_FAILED event, since it is supported only for REQ and RESP events (see the DataScript sketch after this list)
- AV-92575: A valid Avi user with write access to the Avi DataScript role may be able to gain read/write access to the Controller filesystem
- AV-93265: A valid Avi user with write access to the Avi DataScript role will be able to execute system commands via the Lua system functions.
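A minimal sketch of the constraint described in AV-92165 above: avi.http.close_conn is usable in request/response events but not in the RESP_FAILED event. The header name used to decide when to close the connection is a placeholder assumption.

```lua
-- Illustrative HTTP_RESP event DataScript: close the connection based on a response header.
-- Do not attach this logic to the RESP_FAILED event (see AV-92165).
if avi.http.get_header("X-Drop-Connection") == "true" then   -- placeholder header name
   avi.http.close_conn()
end
```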
Issue Resolved in 18.2.9-2p4
- AV-92270: Incorrect entries in GeoDB
Issues Resolved in 18.2.9-3p1
- AV-85365: SYN flood mitigation changes
Issues Resolved in 18.2.9-2p3
- AV-91399: Upgrade prematurely stopped when migrating large metrics DB
- AV-91557: SE running in PCAP mode on Linux Server Cloud does not connect to the controller when there are bond interfaces without member links in the host
Issues Resolved in 18.2.9-2p2
- AV-89578: SE may fail during upgrade due to attached rate limiter.
- AV-89581: The message Unhandled error in Deferred is displayed on the terminal after upgrade.
- AV-90340: Service engine upgrade fails in Nutanix AHV environment.
Issues Resolved in 18.2.9-2p1
- AV-83223: Under severe memory pressure, cache processing can fail while parsing response from backend server
- AV-85800: Service Engine can fail when requests with cookies with no spaces in between, or with large cookies, use the avi.http.remove_cookie or avi.http.replace_cookie API (see the DataScript sketch after this list)
- AV-85993: Enable Gateway Monitoring in OCI environment
- AV-86953: IPv6 GeoDB may contain duplicate entries depending on the order of the DB entry creation
- AV-86955: DNS policy using client IP match / geo location match does not behave as expected
- AV-87502: Service Engine failure when the Auth Profile is disabled while HTTP traffic is still being processed on old connections
- AV-87505: Service Engine failure due to a double close of LDAP connection
- AV-88692: Service Engine can fail due to incorrect rate limiter configuration in a network security policy
- AV-88795: SE Group or SE upgrade initiated when the Controller is upgraded at the system level in case of software or patch update
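A minimal sketch of the cookie-manipulation pattern involved in AV-85800 above, using avi.http.remove_cookie from a request event. The cookie name is a placeholder; on unfixed builds, very large cookies or cookie headers without separating spaces could trigger the failure.

```lua
-- Illustrative HTTP_REQ event DataScript: strip a cookie from the client request.
if avi.http.get_cookie("tracking_id") ~= nil then   -- placeholder cookie name
   avi.http.remove_cookie("tracking_id")
end
```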
What’s New in 18.2.9
Release date: 8 June 2020
- OCI: DPDK and Broadcom VF support
- Support for GCP full access
- Support for IPv6 in GSLB
- Layer 7 rate limiting enhancements
- Support for vSphere 7.0 (CVDS)
Key Changes in 18.2.9
- Default pool group determination consistent with the route object configuration.
- The image uploaded event is generated only when the image is uploaded to OpenStack: either when the image is uploaded for the first time, or when the image on OpenStack is detected to be corrupt or deleted and is uploaded again.
- For a virtual service with “traffic enabled” set to false and “use VIP as SNAT” set, SE responds to ARP for the VIP which negates the effect of “traffic enable” being set to false.
- If use_vip_as_snat is configured as False and snat_ip is manually configured to be the same as the VIP, the configuration will be ignored.
- Pool metrics on virtual service entities are not supported. They are supported only for pool entities.
- Users with the permission_traffic_capture permission can do a packet capture for the virtual service and view the packet capture files.
- Upgrade operations (System and SE groups) with the Rollback-on-error option are supported from Avi Vantage version 18.2.9 onwards.
- Kubernetes versions 1.17 and 1.18 are supported for the Kubernetes cloud connector, and are available only for the existing users of Kubernetes cloud connectors.
Issues Resolved in 18.2.9
- AV-74434: K8s: DNS resolution not working from one of the egress pods because of wrong route entry for source IP egress pod
- AV-76098: UI: Non federated persistence profiles are shown for GSLB services
- AV-77071: For OpenShift route objects with common hostname the order of default pool group determination was random on restart or during full sync, resulting in pool group selection inconsistencies on virtual service recreation
- AV-77407: OpenShift Secure Dedicated Route with multiple ports with GSLB annotation updates only host header for the first rule
- AV-79638: Application log streaming fails when specified destination server FQDN has hyphen in hostname
- AV-81198: For OpenShift routes, that have TLS termination as passthrough and Edge insecure termination set as ‘Redirect’ - Avi Vantage creates a Layer 7 virtual service blocking passthrough traffic on 443
- AV-81373: AWS: Extra VIPs on SE data NICs belonging to disabled virtual service are not getting moved to parking NIC during reconcile
- AV-81374: GSLB health monitor failing due to incorrect namespace
- AV-81456: Service Engine issues if a chunked transfer encoding cache entry is hit when enable_chunk_merge is configured as false with response buffer mode on
- AV-81953: BGP peering is not established on using a VLAN interface that is in a different VRF than the parent interface. External health monitors that use that VLAN interface also do not work
- AV-81961: NSX-V: Avi deletes unrelated ipsets which start with the same prefix as specified in the Avi NSX-V cloud configuration field avi_nsx_prefix
- AV-82284: External AWS DNS profile with AWS cloud does not work if the cloud is using cross account based authentication
- AV-82288: Custom IPAM fails on upgrade to 18.2.8
- AV-82432: Virtual service unreachable when placed on Service Engines running in PCAP mode and with BGP Layer 3 scale-out configured
- AV-82459: The metrics-mgr process fails repeatedly if an IP Group covering the range 128.0.0.0 to 255.255.255.255, or a subset, is configured on the Controller
- AV-82863: LSC: On upgrade to 18.2.8, VLAN interfaces may get deleted if the interface flaps
- AV-82965: WAF admin not able to edit WAF Policy from UI
- AV-83301: When an interface or its corresponding IP is removed the associated gateway monitor is not disabled. This will cause the gateway monitor to report a GW_DOWN to the Controller
- AV-83367: Controller users logged in via LDAP authentication may be logged out intermittently
- AV-83462: vCenter: The default “virtual machine hardware version” for the Controller is 8 and for the Service Engine is 10. If it has been manually upgraded to version 11 or above, while upgrading to 18.2.8, the VM may fail to bootup with an error “random: non blocking pool is initialized”. This is because the default serial port is not visible to the VM
- AV-83643: Service Engine fails when connection multiplexing is disabled, pool group is configured, and pool member goes down between requests on the same connection
- AV-83804: Possible Controller configuration loss due to multiple Controller node failover events involving the same leader node
- AV-83835: OpenStack: Cannot create/deploy virtual services, if Keystone v2 endpoint is used for integration and admin endpoints of nova, neutron, and glance services are not reachable or if Keystone v3 endpoint is used for integration and public endpoints of nova, neutron, and glance services are not reachable
- AV-83953: Connection reset in TCP fast path after idle timeout may send the reset with incorrect sequence number
- AV-84035: Postgres database on the follower node does not fully sync with the leader node causing it to leave the cluster and restart the full sync again
- AV-84099: Job manager updates to GSLB service fails if there are more than one unresolvable pool members
- AV-84103: While deleting GSLB pool members, wrong member is getting deleted from GUI
- AV-84396: For a virtual service with “traffic enabled” set to false and “use VIP as SNAT” set, Service Engine responds to ARP for the VIP which negates the effect of “traffic enable” being set to false
- AV-84432: On configuring use_vip_as_snat as false and manually configuring snat_ip the same as the VIP, the SNAT/IP configuration will be ignored
- AV-84678: Virtual services down due to an SSL certificate PEM encoding read error when the length of a line in the certificate is a multiple of 254
- AV-84679: Service Engine can fail while deleting a virtual service after it has been in fault state
- AV-85207: Clients proxying through Avi virtual service of Layer 4 SSL application type might experience intermittent TCP connection errors
- AV-86092: TCP DNS queries over IPv6 network incorrectly load balanced
- AV-86518: Service Engine becomes unresponsive when time is set backwards on the SE by a large range of hours
Known Issues in 18.2.9
- AV-89581: The message Unhandled error in Deferred is displayed on the terminal after upgrade. The workaround is to set all cluster nodes to active.
- AV-90340: Nutanix: Service engine upgrade fails in Nutanix AHV environment
Issues Resolved in 18.2.8 Patch Releases
Issue Resolved in 18.2.8-2p13
- AV-118475: In a Kubernetes/OpenShift cloud, the SE may go into disabled state due to stale SE entry processing done by the cloud connector
Issues Resolved in 18.2.8-2p10
- AV-98469: Service engine hits an assert if PCAP initialization fails
Issues Resolved in 18.2.8-2p9
- AV-97035: Large size downloads from slow clients with statically configured high receive window size might put the system in a prolonged, low available buffer state leading to system-wide connection throttling.
Issues Resolved in 18.2.8-2p8
- AV-90063: The SE gets blocked on a heavy logging volume when files are rotated out fast and the Controller has been consistently asking to re-sync the files. The error message File not found is printed to the debug file at a high frequency, which blocks the log agent thread that could schedule the queue debug message from the ring.
- AV-93714: In geo-DB files, consecutive creation or deletion operations cause inconsistencies like:
- The geo-DB files do not get downloaded to the SE
- The geo-DB files may not get replicated to the followers from the leader
- AV-95412: Added the following fields:
- Pool server description in the pool modal
- An optional description column to the list of servers page under pool details
- AV-95630: The SERVER_UP and SERVER_DOWN events now display a Description field, which is taken from the description field in the server entry of the pool configuration
- AV-96787: Support for DHCP on datapath interfaces in the Linux Server Cloud
- AV-97352: The Description field is included in the config part of the pool server inventory API response, since it is used by the server list UI page
Issues Resolved in 18.2.8-2p7
- AV-87605: When deleting pools, an incorrect reference count may lead to SE failure
- AV-92575: A valid Avi user with write access to the Avi DataScript role may be able to gain read/write access to the Controller file system
- AV-93265: A valid Avi user with write access to the Avi DataScript role will be able to execute system commands via the Lua system functions
Issue Resolved in 18.2.8-5p1
- AV-85993: Enable Gateway Monitoring in OCI environment
Issues Resolved in 18.2.8-2p5
- AV-61819: Service Engine failure when a request with cookie header size greater than 4K is sent in a SAML authenticated session
- AV-84284: L4 DataScript stalls with TCP request event. The virtual service having a TCP request DataScript event rejects requests after 57,000 connections. This is specific to TCP request events only.
- AV-85680: Service Engine processes may hold up freed memory that may cause memory being unavailable for other system process leading to Service Engine failure
- AV-85800: Service Engine can fail when requests with cookies with no spaces in between, or with large cookies, use the avi.http.remove_cookie or avi.http.replace_cookie API
- AV-86782: Control plane health status not marking GSLB member down
- AV-87502: Service Engine failure at se_hash_lookup
- AV-87505: SE_DP failure at ngx_http_auth_ldap_close_connection
- AV-87886: If the Avi cloud managing a Kubernetes cluster has a cluster tag, then changes in pod, endpoint or service for an ingress backend does not update the corresponding Avi objects
- AV-88692: Service Engine may fail due to incorrect rate limiter configuration in a network security policy
Issues Resolved in 18.2.8-2p4
- AV-82284: External AWS DNS profile with AWS cloud does not work if cloud is using cross account based authentication
- AV-82965: WAF admin not able to edit WAF policy from UI
- AV-83804: Some configuration changes may be missing after a leader failover, due to incorrect database replication after a previous cluster failover event
- AV-84092: Traffic to GSLB FQDN does not work when GSLB is enabled for OpenShift routes
- AV-84099: JobManager updates to GSLB service fails if there are more than one unresolvable pool members
- AV-84103: While deleting GSLB pool members, wrong member is getting deleted from the UI
- AV-84678: Virtual services down due to SSL certificate PEM encoding read error
- AV-85036: LUA VM configuration failure might result in invalid memory access in Datapath
- AV-85283: ‘show upgrade status’ continues to show blank entries of previously deleted SE groups causing Tech support collection to fail with an “Invalid value ‘0’!”
Issues Resolved in 18.2.8-2p3
- AV-76098: UI: Non federated persistence profiles are shown for GSLB services
- AV-81908: Some of the GSLB pool members’ FQDNs are not resolvable (as they are in a DR site). When DNS refresh interval is set to 5 minutes, this will create excessive CRUD on the system resulting in leader site not being able to send health status probes to the follower sites
- AV-83301: When an interface or its corresponding IP is removed, the associated gateway monitor is not disabled causing the gateway monitor to report a GW_DOWN to the Controller
- AV-83367: Misconfiguration in authentication backends could cause remote users to logout intermittently
- AV-83643: Service Engine fails when connection multiplexing is disabled on using pool groups and server goes down between requests on the same connection
Issues Resolved in 18.2.8-2p2
- AV-77407: OpenShift secure dedicated route with multiple ports with GSLB annotation updates only host header for the first rule
- AV-78942: Performance degradation because of retransmissions when TSO is enabled
- AV-82432: BGP virtual service not reachable via PCAP Service Engine
- AV-82753: Virtual service traffic does not work on Linux server cloud when inband management and DPDK is off
- AV-82863: When a link flaps, VLAN interfaces get removed
Issues Resolved in 18.2.8-2p1
- AV-81953: BGP peering is not established on using a VLAN interface that is in a different VRF than the parent interface. External health monitors that use that VLAN interface also do not work. The workaround is to manually use ifconfig <ifname> up to bring up the KNI interface of the VLAN in the corresponding namespace
What’s New in 18.2.8
- Support for jumbo frames in Avi Datapath
- DPDK support for Broadcom 574xx 10G NIC
- Infoblox: Support for IPv6 allocation
- Infoblox: Support for external attribute
- (Tech Preview) DPDK support for OpenStack
- (Tech Preview) DPDK support for AWS
Key Changes in 18.2.8
Containers
- K8s: Hostnames are not part of the HTTP Policy rules' match criteria if the enable_route_ingress_hardening flag is set to False.
- K8s: Kubernetes versions 1.17 and 1.18 are supported for the Kubernetes cloud connector, and are available only for the existing users of Kubernetes cloud connectors.
HTTP Applications
- SPDY is not supported. The spdy_enabled field has been deprecated, and a GET for this field will not return any value.
WAF
- Entries in the fields for ‘restricted_extensions’, ‘static_extensions’, and ‘restricted_headers’ in a WAF profile are now case insensitive.
vCenter Cloud
- If the network has any user-configured subnets, it will not be automatically deleted when the VIMgrNwRuntime object is deleted. It must be manually deleted from the network.
SE Dataplane
- The SE group config enable_pcap_tx_ring is deprecated. Instead, use the pcap_tx_mode config to switch between the PCAP packet transmission methods of memory-mapped TX ring and PCAP socket.
- The distribute_queues config in the SE group is deprecated. To enable RSS, use the max_queues_per_vnic config instead. This new config is used both to enable RSS and to configure the number of queue pairs on the interface. The default value for max_queues_per_vnic is '1'. It can also be configured to '0', which enables Auto mode and deduces the optimal number of queue pairs per dispatcher based on the NIC and operating environment. Optionally, an integer value can be set to configure a specific number of queue pairs. Multiple queues per dispatcher is currently supported for OpenStack, KVM, and AWS.
Controller Security
- On importing a partial system secure channel configuration, the currently configured and the default key and certificates on the target system will not be overwritten.
- The secure channel key and certificate in the system configuration can only be updated if the following conditions are met:
- No Service Engines in the system
- Only no access clouds in the system
- Only a single node cluster
In addition, the system default object System-Default-Secure-Channel-Cert cannot be updated. To change the certificate, create a new one and update the SystemConfiguration to refer to it.
Infoblox IPAM
- Currently, usable_subnet in the Infoblox profile is used to hold all the subnets discovered in Infoblox to be used for VIP IP allocation. This field has been deprecated. The new field to be used is usable_alloc_subnets, which can hold a V4 subnet, a V6 subnet, or both V4 and V6 together to support dual-stack IP allocation for VIPs.
Known Issues and Workarounds in 18.2.8
- AV-81946: CSP: Packet transmission stalls on i40e interfaces
- AV-81953: BGP peering is not established on using a VLAN interface that is in a different VRF than the parent interface. External health monitors that use that VLAN interface also do not work. The workaround is to manually use ifconfig <ifname> up to bring up the KNI interface of the VLAN in the corresponding namespace
- AV-83462: vCenter: The default "virtual machine hardware version" for the Controller is 8 and for the Service Engine is 10. If it has been manually upgraded to version 11 or above, while upgrading to 18.2.8 the VM may fail to boot up with the error "random: non blocking pool is initialized". This is because the default serial port is not visible to the VM. See https://kb.vmware.com/s/article/52683 for more details. The suggested workaround is to add a serial port to the VM as follows:
- Power off the virtual machine.
- Edit the virtual machine setting.
- Add the serial port.
- Ensure the serial port is in disconnected mode.
- Power on the virtual machine.
Issues Resolved in 18.2.8
- AV-63425: Cipher-suites in IANA format are translated to OpenSSL format
- AV-70085: The disruptive and suspend_on_failure options are not visible in the CLI
- AV-71059: Upgrade from Avi Vantage version 17.2.7 fails in the migrate_config step if a separate partition is used for metrics
- AV-74430: Enable the support of multiple dispatchers in DPDK mode on specific environments
- AV-74439: K8s / OpenShift : Certain scenarios lead to egress source IPs not being freed resulting in re-use of these IPs for other egress services
- AV-75325: Avi Controller polls the vCenter for changes in ESX, VM, and network objects. Error in version number used for depicting this change
- AV-75610: Connection closed when a reset frame is received on a half-closed HTTP/2 stream
- AV-75685: When Avi Vantage adds the “Secure” flag cookie on a server response back to the client, two semi-colons get appended
- AV-75832: Collecting Tech support via API might stall
- AV-75965: Upgrading the system with se_patch does not work
- AV-76003: K8s/OpenShift: On using multiple destinations for an egress service, the egress pod may not get created due to the destination information size
- AV-76031: OpenShift: Two Service Engines are responding to ARP after SEs were vmotioned
- AV-76037: HTTP cookie persistence does not work when connection multiplexing is disabled
- AV-76038: In case of multiple networks associated with north-south Avi IPAM profile, egress service creation can lead to IPs from multiple networks getting allocated, thus depleting the static pool of IPs faster than usual
- AV-76140: AWS: Virtual service placement on SEs may fail due to cloud connector timeout if there are more than 500 virtual services
- AV-76301: AWS/OpenStack: Connections are dropped due to lack of memory in non-DPDK environments supporting hot plug
- AV-76836: OpenShift: Issue with OAPI used for accessing the OpenShift routes. The OAPI routes were deprecated since 4.0. The fix is to use a uniform API across old and new OpenShift versions, which will require changes in the cluster role of current OpenShift 3.x clusters as well on an upgrade to version 18.2.8
- AV-76924: AWS: If the cloud health check fails, the cloud fails to come back to placement ready state again
- AV-77027: Fix for No 3DES ciphers supported with the new OpenSSL stack in Avi Vantage version 18.2.6 and 18.2.7
- AV-77039: se_dp and se_agent Service Engine failure while bringing up the SE with DPDK, multiQ, distributed dispatcher, and GRO disabled
- AV-77076: Service Engine failure on adding or deleting a session cache entry with session-id length 0
- AV-77077: K8s: Starting Service Engine modifies Linux kernel memory over-commit configuration on the K8s node
- AV-77138: When a client requests host header that does not match the FQDN configured on the child virtual service, the request fails with an application log on the child instead of being proxied using the parent virtual service’s default pool
- AV-77141: When WAF debug logs are enabled (which is generally not recommended in production) and se_dp runs out of memory while processing a JSON POST request for WAF, the Service Engine could fail
- AV-77154: K8s/OpenShift: Routes sharing the same hostname can cause unintended updates to the common virtual service and HTTP policy object on changing either the route or its dependent objects like service and pods
- AV-77280: DataScript payload may be invalid while using avi.l4.modify
- AV-77400: When DPDK is enabled on OpenStack, nova attach NIC failure on SE can lead to SE fail or hang
- AV-77480: Service Engine may fail if the file /etc/vm_uuid cannot be opened
- AV-77715: Service Engine failure due to uninitialized variable for SAML response
- AV-77729: Remove SSH access for non-superuser users
- AV-77740: Graceful disable timeout update not working if any server attribute is changed
- AV-77821: avi.http.get_req_body() does not work without an explicit buffer size parameter (see the DataScript sketch after this list)
- AV-78382: OpenShift: Service Engine may fail in host IP discovery in OpenShift environments
- AV-78399: LDAP auth profile allowed configuring FQDN addresses even when attached to a virtual service
- AV-78587: Service Engine fails when absoluteURI is used in a request to an SNI virtual service
- AV-78886: VRF context column does not enumerate in Avi UI under Infrastructure/Networks tab
- AV-79051: VRF context collection dropdown is not displayed in network create modal
- AV-79130: Using HTTP policy set rules to match an HTTP request header could cause a Service Engine to fail if the length of the matched header is equal to the length of certain Avi-added request headers
- AV-79138: Failover of scaled-out virtual service from its primary place on a bonded NIC may turn it non-responsive on some secondary Service Engine. Back-to-back failovers may turn the virtual service entirely unresponsive
- AV-79190: AWS autoscale groups (ASG): Server does not get removed from the pool, even if the ASG is removed from the pool
- AV-79230: AWS: Calls to AWS cloud may hang with proxy in place, causing other virtual services to not get placed
- AV-79246: In certain environments where the Controller nodes are deployed as containers, the cluster quorum can fail due to heartbeat loss between the cluster nodes
- AV-79477: Rate limit settings in Application profile are not set on using the UI
- AV-79513: When enable_chunk_merge is disabled in the application profile, if a server sends a response without the Content-Length header, it can cause the client to time out
- AV-79654: WAF configuration with learning enabled is rejected when the "Name" is too long
- AV-79680: UI: dns_error_response_error is not present as an option for "Respond to Unhandled DNS requests"
- AV-79752: Fix for fastpath timeout dependency on the time of the last received packet
- AV-79770: API may timeout due to delays with virtual service having many service pool selector objects
- AV-80052: If a Service Engine has been running for more than 397 days, a spurious debug message causes high CPU utilization for the se_log_agent process, causing potential heartbeat failures
- AV-80740: Fix for client connections failing due to CRL expiration. The workaround is to remove the CRL in the PKI profile
- AV-81337: Fix for Service Engine disk filling up. The workaround is to add a hard limit for log system. When disk usage is greater or equal to 90%, cleanup script will be invoked even if the size limit for log system has not been reached
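For AV-77821 above, the sketch below shows avi.http.get_req_body called with an explicit buffer size in kilobytes, which was required before this fix. The 4 KB size, token string, and 403 response are illustrative assumptions.

```lua
-- Illustrative HTTP_REQ event DataScript: inspect up to 4 KB of the request body.
local body = avi.http.get_req_body(4)              -- size argument is in KB
if body ~= nil and string.find(body, "forbidden_token", 1, true) then   -- placeholder token
   avi.http.response(403)
end
```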
Issues Resolved in 18.2.7 Patch Releases
Issues Resolved in 18.2.7-4p1
- AV-75610: Connection closed when a reset frame is received on a half-closed HTTP/2 stream
- AV-75832: Collecting Tech support via API might stall
- AV-75965: Upgrading the system with se_patch does not work
- AV-76037: HTTP cookie persistence does not work when connection multiplexing is disabled
- AV-76270: Support to specify header fields to be logged in JSON-format application logs while streaming
- AV-76301: In non-DPDK environments supporting hot plug, adding and removing multiple interfaces causes connection memory depletion owing to incorrect interface memory accounting. This leads to errors when processing new connection requests
- AV-77729: Remove SSH access for non-superuser users
- AV-77842: Using DataScripts to add connection header results in multiple connection headers
- AV-78260: Adding a connection header in an HTTP response policy results in sending multiple connection headers; provides the ability to use Add Header/Replace Header to add connection headers with keep-alive or close values
- AV-78307: GeoDB update for latest MaxMind
Issues Resolved in 18.2.7-3p1
- AV-67634: Add multi queue support for OpenStack SE image
- AV-73155: OpenStack: Scale in did not happen for SE during migration
- AV-74430: Enable support for multiple dispatchers in DPDK mode on specific environments
- AV-75832: Collecting Tech support via API might stall
- AV-75965: Upgrading the system with se_patch does not work
- AV-76301: In non-DPDK environments supporting hot plug, adding and removing multiple interfaces causes connection memory depletion owing to incorrect interface memory accounting. This leads to errors when processing new connection requests
- AV-76632: Use multiple queue pairs per dispatcher in the datapath
- AV-77400: When DPDK is enabled on OpenStack, a nova NIC attach failure on the SE can cause the SE to fail or hang
- AV-77729: Remove SSH access for non-superuser users
Issue Resolved in 18.2.7-2p16
- AV-98655: In port channel cases, owing to a race condition, TSO does not work due to improper initialization of the interface flags. This happens when VLANs are created from the Controller and there is a reboot/upgrade.
Issues Resolved in 18.2.7-2p15
- AV-76924: AWS: If the cloud health check fails, the cloud fails to come back to the status Cloud ready for Virtual Service placement.
Issues Resolved in 18.2.7-2p14
- AV-96347: The metric fields reqs_finished_sessions, finished_sessions, and concurrent_sessions return the value 0 in the SE metric stats message
- AV-96827: The virtual service reports a 503 Gateway error when the server closes the connection before all the data is sent to the client.
Issues Resolved in 18.2.7-2p13
- AV-91384: SE packet processing is delayed due to rx/tx queue processing delays
- AV-92137: Trailing backend packets get delivered to KNI
- AV-92139: Trailing client packets lead to unnecessary flow-probes
- AV-92140: Occasionally, inter-core traffic leads to aggressive backoff from hw queue polling
- AV-92141: IOCTLs meant for KNI thread wake-ups are performance intensive
- AV-92142: Proxies on SEs with a high number of cores end up yielding the CPU upon contention on the inter-core dispatcher queue
Issue Resolved in 18.2.7-2p12
- AV-85198: Rapid PATCH calls to the same pool object result in 504s
Issue Resolved in 18.2.7-2p11
- AV-72536: Unauthenticated requests create sessions on the database
- AV-85198: Rapid (>40 reqs / min) PATCH API calls to modify pool objects could result in 504 errors.
- AV-85558: Sessions that have unauthenticated requests are not cleaned up causing session buildup.
Issues Resolved in 18.2.7-2p10
- AV-85198: Rapid PATCH calls to the same pool object resulting in error 504
Issues Resolved in 18.2.7-2p9
- AV-85207: Clients proxying through Avi virtual service of Layer 4 SSL application type might experience intermittent TCP connection errors
- AV-86518: Service Engine becomes unresponsive when time is set backwards on the SE by a large range of hours
Issues Resolved in 18.2.7-2p8
- AV-74599: Objects having non-ASCII characters in names fail during full_system export or upgrade. The script export_unicode_issue_script.py can be used to identify such objects
- AV-83953: Connection reset in TCP fast path after idle timeout may send the reset with an incorrect sequence number
- AV-84396: For a virtual service with “traffic enabled” set to false and “use VIP as SNAT” set, the Service Engine responds to ARP for the VIP. This negates the effect of “traffic enabled” being set to false
Issues Resolved in 18.2.7-2p6
- AV-74314: IMAP client times out on using System-SSL-Application Application profile
- AV-79274: UI: Show VLAN network interfaces in SE settings modal
Issues Resolved in 18.2.7-2p5
- AV-63425: Cipher-suites in IANA format are translated to OpenSSL format to be consumed by NGINX service.
- AV-74599: Objects having non-ASCII characters in names fail during full_system export or upgrade
- AV-75685: When Avi Vantage adds the “Secure” flag cookie on a server response back to the client, two semi-colons get appended
- AV-77076: Service Engine failure on adding or deleting session cache entry with session-id length 0
- AV-77280: DataScript payload may be invalid while using avi.l4.modify
- AV-77480: Service Engine may fail if the file /etc/vm_uuid cannot be opened
- AV-77740: Graceful disable timeout update not working if any server attribute is changed
- AV-77821: avi.http.get_req_body() does not work without an explicit buffer size parameter
- AV-78960: Increased the se_agent default start timeout to avoid failure in the start section due to the process_cleanup retries period
- AV-79051: VRF context collection dropdown is not displayed in the network create modal
- AV-79477: Rate limit settings in the Application profile are not set when configured using the UI
- AV-79680: UI: dns_error_response_error is not present as an option for “Respond to Unhandled DNS requests”
- AV-79752: Fix for fastpath timeout dependency on the time of the last received packet
- AV-80052: If a Service Engine has been running for more than 397 days, a spurious debug message causes high CPU utilization for se_log_agent process causing potential heartbeat failures
- AV-80843: Virtual service disable/creation failing with message “VirtualServiceCheck instance has no attribute ‘poolgroup_uuids_set’”
Issues Resolved in 18.2.7-2p4
- AV-79770: API may timeout due to delays with virtual service having many service pool selector objects
Issues Resolved in 18.2.7-2p3
- AV-75325: The Avi Controller polls vCenter for changes in ESX, VM, and network objects. Error in the version number used to track these changes
- AV-78587: Service Engine fails when absoluteURI is used in the request to an SNI virtual service
Issues Resolved in 18.2.7-2p2
- AV-77027: No 3DES ciphers supported with the new OpenSSL stack
- AV-78886: VRF context column does not enumerate in Avi UI under the Infrastructure / Networks tab
- AV-78942: Performance degradation because of retransmissions when TSO is enabled
- AV-79271: K8s: Support to skip host header and match the path to select pool group
Issues Resolved in 18.2.7-2p1
- AV-75610: Connection closed when a reset frame is received on a half-closed HTTP/2 stream
- AV-75832: Collecting Tech support via API might stall
- AV-75965: Upgrading the system with se_patch does not work
- AV-76037: HTTP cookie persistence does not work when connection multiplexing is disabled
- AV-76301: In non-DPDK environments supporting hot plug, adding and removing multiple interfaces causes connection memory depletion owing to incorrect interface memory accounting. This leads to errors when processing new connection requests
What’s New in 18.2.7
- Support for VMware Horizon VDI
- Support for BGP graceful restart
- Support for GCP full access with customer managed encryption keys
- Licensing: Support for VMware DLF based license
- (Tech Preview) Support for OpenShift version 4.x
Issues Resolved in 18.2.7
- AV-66745: Packet processing may be delayed due to periodic host monitoring
- AV-67995: Scheduled backups stop running if maximum number of backups are already present and the oldest backup cannot be deleted
- AV-68168: CSP: vNIC addition may fail after a reboot of the SE due to a change in the mapping between interface name and MAC address
- AV-70131: Avi Controller not able to stream logs to Kafka
- AV-70181: Underscore is not allowed in GSLB service application name
- AV-71143: UI: HTTP/2 header names in the application log overlay do not match the HTTP/2 specification
- AV-71214: OpenStack: “Use single role for all tenants” missing in UI for OpenStack cloud
- AV-71231: Large TX packets are not segmented to clone servers, which may cause delays in the packet processing logic
- AV-71349: Service Engine process can get into infinite loop when corrupted SSL data is received from backend
- AV-71557: Virtual service in a non-admin tenant cannot be deleted if it refers to a health monitor from “admin” tenant
- AV-71880: Restrict ElasticSearch memory allocation on Avi Controller to 32 GB, in line with ElasticSearch guidelines
- AV-71935: Service Engine may fail due to a race condition when a virtual service with connection multiplex disabled has client IP persistence enabled
- AV-71988: AWS: Virtual services sharing the same VIP are placed on different vNICs on the Service Engine
- AV-72113: Health monitor does not use the correct hostname if a pool member with same IP:port has different hostname
- AV-72194: NSX-V: Incorrect distributed firewall rule populated and incorrect port for health monitor used, on disabling and re-enabling a virtual service
- AV-72196: GeoDB files are not processed correctly when the DNS virtual service has a large GSLB configuration downloaded to the SE at the same time and the Geo files are huge. In some of these scenarios, the SE is not able to sequence the Geo configuration, causing a Geo discrepancy in the DNS flow
- AV-72565: FQDN resolution at a very low dns_refresh_interval starves the GSLB leader from issuing health status queries to followers
- AV-72594: vCenter: UI does not allow configuring IP subnet and IP pool for discovered networks
- AV-72685: CLI: Command timeouts and CLI session disconnects due to shell.py running at high CPU intermittently
- AV-72886: Virtual services created via Contrail LBaaS driver may become unreachable because of incorrect security group attachment on Service Engine
- AV-72888: Service Engine runs out of memory due to large number of external health monitors being scheduled
- AV-72951: ‘all-tenant’ queries for non-admin user fail if the user does not have access to ‘admin’ tenant
- AV-73051: GSLB: Suppress alerts for event “GSLB Site Exception Status”
- AV-73189: Service Engine fails when a HTTP policy redirect action has tokens specified in path or host, which are not found in the actual request
- AV-73191: Service Engine failure when logging of all headers is enabled, along with HTTP 1.0 responses
- AV-73209: SSL clients that do not specify the elliptical curve in the handshake do not work with any PFS ciphers, resulting in ‘no shared ciphers’ error
- AV-73211: SSL handshake may fail if the client does not send the curve list in client hello while negotiating PFS ciphers, as Avi assumes “Secp256r1” curve
- AV-73323: OpenShift: Tenant deletion retries keep the WebApp busy making the portal inaccessible
- AV-73509: When “Use VIP as SNAT” is enabled for a virtual service, if a pool goes down, it does not come back up as the IP address is withdrawn from BGP
- AV-73724: Service Engine failure when “server reselect” is enabled for a pool
- AV-73764: Service Engine failure when “server reselect” is configured on a pool, which is used by a Virtual Service that is configured to process HTTP/2 requests
- AV-74217: The enable_route_ingress_hardening flag will disable the following in the Avi OpenShift cloud:
- No HTTP drop rules will be added for paths that do not match the host/path combination specified in the ingress/route object
- No HTTP headers will be added for any host/path combination. Only the path will be added as a HTTP policy set object
- A default pool group will be added that would mimic the behaviour seen in Avi Vantage version 18.2.5 or earlier
Known Issues and Workarounds in 18.2.7
- AV-74599: Objects having non-ASCII characters in names fail during full_system export or upgrade. Run the script export_unicode_issue_script.py at /opt/avi/scripts on the Controller to identify such objects.
- AV-75832: Collecting Tech support via API might stall. Follow the steps below for the workaround (a condensed command sketch follows the steps):
- Log in to the bash shell of the Controller with sudo su
- Execute the command ps -eaf | grep shell.py
- You will notice multiple entries for shell.py
- Kill the PID that does not have --server, as shown below:
root@admin# ps -aef | grep shell.py
root 1690 1 0 2019 ? 00:00:43 avi-cliserver /opt/avi/python/bin/cli/bin/shell.py --server
root 8191 8187 0 00:55 ? 00:00:59 python /opt/avi/python/bin/cli/bin/shell.py --file /var/lib/avi/tech_support/serviceengine_kh-se-rvhte.20200109-005550/serviceengine.txt_2wVJXS_temp.cli --user admin --token ff0117a1fe3d170d88483b1b06ddea60859b53bc
root 22573 12725 0 22:03 pts/0 00:00:00 grep --color=auto shell.py
root@admin# kill -9 8191
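For convenience, the steps above can be condensed into a single command sequence. This is a minimal sketch based on the documented workaround and standard Linux tooling, not an Avi-provided script; the first command only lists the matching processes, so review its output before running the second one if other CLI sessions may be active.

    # Run as root on the Controller (after sudo su).
    # List shell.py processes that were started WITHOUT the --server option.
    ps -eaf | grep '[s]hell.py' | grep -v -- '--server'
    # Kill those processes (the PID is column 2 of the ps -eaf output).
    ps -eaf | grep '[s]hell.py' | grep -v -- '--server' | awk '{print $2}' | xargs -r kill -9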
- AV-75965: Upgrading the system with se_patch does not work. Follow the steps below for the workaround (a condensed CLI sketch appears at the end of this section):
For a system on 18.2.6:
1. Upload controller_patch.pkg_in_18.2.6
2. Apply the controller_patch via patch controller controller_patch.pkg_in_18.2.6
3. For an upgrade to 18.2.8, do a normal upgrade with se_patch in 18.2.8
4. For an upgrade to 18.2.7, upgrade the system using <controller_patch_in_18.2.7> se_patch <se_patch>
For a system on 18.2.7:
1. Upload controller_patch.pkg_in_18.2.7
2. Apply the controller_patch via patch controller controller_patch.pkg_in_18.2.7
3. For an upgrade to 18.2.8, do a normal upgrade with se_patch in 18.2.8
- AV-76037: HTTP cookie persistence does not work when connection multiplexing is disabled
- AV-77027: No 3DES ciphers supported with the new OpenSSL stack
- AV-77403: The ssl_everywhere_enabled field is deprecated. This field will not be available in a GET request on the older API
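As a follow-up to the AV-75965 workaround above, here is a condensed CLI sketch for a system on 18.2.6, reusing the package names from those steps; step 1 (uploading the package to the Controller) is done through your usual image upload mechanism, and the exact upgrade commands for step 3 depend on the target release.

    # Apply the Controller patch from the Avi CLI (step 2 of the workaround).
    patch controller controller_patch.pkg_in_18.2.6
    # Then upgrade: to 18.2.8, perform a normal upgrade with the 18.2.8 se_patch;
    # to 18.2.7, upgrade using <controller_patch_in_18.2.7> and its se_patch.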
Issues Resolved in 18.2.6 Patch Releases
Issues Resolved in 18.2.6-9p1
- AV-102036: Connection closed abnormally error in the VS logs
Issue Resolved in 18.2.6-6p4
- AV-106423: Add IP on VNICs may fail when SE re-registers with the controller after a break in controller-SE connectivity
Issues Resolved in 18.2.6-6p3
- AV-106422: OpenStack: SE failure when 25th vNIC is removed
- AV-88094: Service Engine may fail if the vNIC’s link flaps
Issues Resolved in 18.2.6-6p2
- AV-72951: For a non-admin user, all-tenant queries fail if the user does not have access to ‘admin’ tenant
- AV-76255: GSLB Services FQDN insights not displaying UDP queries
- AV-79847: Health score is marked as NA under the Health tab although the virtual service shows 100
- AV-80115: Unable to clean up stale tenants using /api/openstack-cleanup when the use_admin_url config is set to False in the OpenStack cloud configuration.
- AV-84287: Gracefully handle vNIC additions beyond the maximum supported
- AV-84400: VIP address audit on SE ports results in VIP port relinquishing the VIP
- AV-85646: PCAP ring does not release all its memory when the interface is deleted
Issue Resolved in 18.2.6-4p14
- AV-108222: Linux Server Cloud: Upgrade fails on SE when docker run variables are located in /etc/sysconfig/avise instead of /usr/sbin/avise
Issues Resolved in 18.2.6-4p13
- AV-99143: A failure in deletion of a tenant in the Controller can cause the entry for this tenant in the OpenShift agent tenant cache to become stale. Due to this, any future reference to this tenant name uses the stale entry, which returns the stale UUID.
- AV-100005: Some virtual services on OpenShift cloud may get deleted if the connectivity to OpenShift cluster is lost
Issue Resolved in 18.2.6-4p12
- AV-79137: Higher CPU utilization when an IP rule is used per VIP
Issue Resolved in 18.2.6-4p11
- AV-79137: Higher CPU utilization when an IP rule is used per VIP
Issues Resolved in 18.2.6-4p10
- AV-88149: OpenShift cloud connector fails to allocate IP for egress on Azure due to repeated allocation and de-allocation of egress IPs
- AV-89589: When a Service Engine is reporting large number of metrics, some updates may be dropped, that can result in gaps in metrics reporting
Issues Resolved in 18.2.6-4p9
- AV-79137: Higher CPU utilization is noted when one IP rule per VIP is used.
- AV-79264: Application profile with client cert validation fails to write headers in other tenants.
- AV-80052: If a Service Engine has been running for more than 397 days, a spurious debug message causes high CPU utilization for the se_log_agent process, causing potential heartbeat failures.
- AV-85134: SE SSH user sudoers file permissions are not correct.
- AV-86859: In OpenShift/Kubernetes based deployments, if the route to bravi on the host gets removed inadvertently, the subsequent creation of a Service Engine does not create the route entry.
Issues Resolved in 18.2.6-7p1
- AV-73191: Service Engine failure when logging of all headers is enabled, along with HTTP 1.0 responses
- AV-73209: SSL clients that do not specify the elliptical curve in the handshake do not work with any PFS ciphers, resulting in ‘no shared ciphers’ error
- AV-75603: Destination persistence support using client mask
Issues Resolved in 18.2.6-6p1
- AV-72886: Virtual services created via Contrail LBaas driver may become unreachable because of incorrect security group attachment on Service Engine
Issues Resolved in 18.2.6-4p8
- AV-74434: DNS resolution not working from one of the egress pods because of wrong route entry for source IP egress pod
- AV-77071: For OpenShift route objects with common hostname the order of default pool group determination was random on restart or during full sync, resulting in pool group selection inconsistencies on virtual service recreation
- AV-79756: Intermittent “400 bad request” response when Avi Service Engine and client/server pod are on the same OpenShift node
- AV-80299: Long destination names or too many destinations in OpenShift egress service causes Avi Controller to deallocate the egress source IP address
- AV-81198: For OpenShift routes that have TLS termination set to Passthrough and EdgeInsecureTermination set to ‘Redirect’, Avi Vantage creates a Layer 7 virtual service, blocking passthrough traffic on port 443
Issues Resolved in 18.2.6-4p7
- AV-76031: OpenShift: Two Service Engines are responding to ARP after SEs were vmotioned
Issues Resolved in 18.2.6-4p6
- AV-71059: Upgrade from Avi Vantage version 17.2.7 fails at the migrate_config step if a separate partition is used for metrics
Issues Resolved in 18.2.6-4p5
- AV-72951: ‘all-tenant’ queries for non-admin user fail if the user does not have access to ‘admin’ tenant
- AV-74016: Standard ports, 80 and 443, are not included in the host-based routing policy match criterion, leading requests with a host header of the form :<80 or 443> to fail
- AV-76031: OpenShift: Two Service Engines are responding to ARP after SEs were vMotioned
- AV-77154: Routes (sharing the same hostname) can cause unintended updates to the common virtual service and HTTP policy object upon changing either route or its dependent objects like service and pods
- AV-78382: Service Engine may fail during host IP discovery in OpenShift environments
Issues Resolved in 18.2.6-4p4
- AV-68893: OpenShift: Routes unable to sync with Avi Vantage due to illegal cross-cloud reference for network
Issues Resolved in 18.2.6-4p3
- AV-71582: Status field for Service type load balancer can flip between valid and null values periodically
- AV-74439: In the Azure environment, certain scenarios lead to egress source IPs not being freed, resulting in reuse of these IPs for other egress services
- AV-76003: On using multiple destinations for egress service, egress pod may not get created due to the size of the destinations’ information
- AV-76038: If multiple networks are associated with north-south Avi IPAM profile, egress service creation can lead to IPs from multiple networks getting allocated, thus depleting the static pool of IPs faster than needed
Issues Resolved in 18.2.6-4p2
- AV-73323: OpenShift: Tenant deletion retries keep the WebApp busy making the portal inaccessible
- AV-74315: In certain OpenShift deployments, upgrade of Avi SE can fail on using hostname for the OpenShift master API server in Avi OpenShift cloud configuration
Issues Resolved in 18.2.6-4p1
- AV-66745: Packet processing may be delayed due to periodic host monitoring
- AV-73191: se_dp fails when all_headers is enabled with HTTP 1.0 responses
- AV-73209: SSL clients that do not specify the elliptical curve in the handshake do not work with any PFS ciphers, resulting in “no shared ciphers” error
- AV-73323: OpenShift: Tenant deletion retries keep the WebApp busy making the portal inaccessible
- AV-74217: The enable_route_ingress_hardening flag will disable the following in the Avi OpenShift cloud:
- No HTTP drop rules will be added for paths that do not match the host/path combination specified in the ingress/route object
- No HTTP header will be added for any host/path combination. Only the path will be added as a HTTP policy set object
- A default pool group will be added that would mimic the behaviour seen in Avi Vantage version 18.2.5 or earlier
Issues Resolved in 18.2.6-3p1
- AV-67846: Support for disabling Avi created security groups on Service Engines in OpenStack cloud
Issue Resolved in 18.2.6-2p10
- AV-97081: Symptoms: Workarounds:
Issue Resolved in 18.2.6-2p9
- AV-122280: Deleting an ingress with a GSLB Service annotation from a Kubernetes namespace prompts the deletion of GSLB Services in other namespaces as well.
Issues Resolved in 18.2.6-2p7
- AV-121120: ARP resolution is failing on OpenShift deployments of the Avi SE with greater than 512 interfaces (via proxy ARP).
- AV-108222: Linux Server Cloud: Upgrade fails on SE when docker run variables are located in /etc/sysconfig/avise instead of /usr/sbin/avise
Issues Resolved in 18.2.6-2p6
- AV-79130: Using HTTP policy set rules to match an HTTP request header could cause a Service Engine to fail if the length of the matched header is equal to the length of certain Avi-added request headers
- AV-80052: If a Service Engine has been running for more than 397 days, a spurious debug message causes high CPU utilization for the se_log_agent process, causing potential heartbeat failures
- AV-76255: GSLB Services FQDN insights not displaying UDP queries
- AV-86540: SE initialisation fails if the datapath interfaces are not released back to Linux successfully when SE is restarted
- AV-89246: Python exception in pci_unbind.py during SE initialisation
- AV-89493: SE init failure due to interfaces not getting relinquished before SE startup
Issues Resolved in 18.2.6-2p5
- AV-72507: Restrict Normal Tenant Delete if any system default objects are referred to by other objects
- AV-72951: “all-tenant” queries for a non-admin user fail if the user does not have access to the ‘admin’ tenant
- AV-77027: No 3DES ciphers supported with the new OpenSSL stack
- AV-77729: Remove SSH access for non-superuser users
Issues Resolved in 18.2.6-2p4
- AV-72677: Not able to access Controller UI using IE/Edge Browser
- AV-74599: Objects having non-ASCII characters in names fail during full_system export or upgrade
- AV-75965: Upgrading the system with se_patch does not work
Issues Resolved in 18.2.6-2p3
- AV-73209: SSL clients that do not specify the elliptical curve in the handshake do not work with any PFS ciphers, resulting in ‘no shared ciphers’ error
- AV-73580: WAF whitelisting might not match for mix of IP ranges and individual IP addresses
- AV-73874: WAF PSM rule might not match for case insensitive locations
Issues Resolved in 18.2.6-2p2
- AV-72888: Service Engine runs out of memory due to many scheduled external health monitors
Issues Resolved in 18.2.6-2p1
- AV-73191: se_dp failure when all_headers is enabled for HTTP 1.0 responses
- AV-73209: SSL clients that do not specify the elliptical curve in the handshake do not work with any PFS ciphers, resulting in ‘no shared ciphers’ error
What’s New in 18.2.6
ADC
Networking
Security
Avi Metrics
Issues Resolved in 18.2.6
- AV-53043: The Controller iptables are not updated when ipaddrgroup is modified
- AV-53097: Infoblox IPAM/DNS profile features downgraded in 17.2.14
- AV-59662: After upgrade, the older metrics are not visible
- AV-60084: If multiple FQDNs are added to a virtual service, only the first one gets registered to AWS Route 53
- AV-63972: The changes in ipaddrgroup are not reflected in the ipset list for specific ranges
- AV-65713: GSLB: Re-ordering the fallback site list in the DNS policy or topology policy rule may have no effect
- AV-65826: Automatic certificate renewal script is timing out in a specific tenant and then renewing the certificate in the admin tenant
- AV-65920: OpenShift: IP allocation from OpenStack IPAM fails in an OpenShift environment, if the network for IPAM and virtual machine for the OpenShift node are in different tenants in OpenStack
- AV-66302: Azure: Listing of Azure virtual machine scale sets fails with RPC timed out error during pool creation, if there are many virtual machine scale sets present in the resource group
- AV-66905: Handled north-south traffic originating from within the node when a default gateway for the virtual service's outgoing traffic is configured, and handled container or pod traffic by adding routes in the container or pod
- AV-66909: Connectivity issues with the API server can cause API calls to take significant amount of time, stalling syncing of Ingresses/Apps
- AV-67000: UI: Infoblox IPAM: Creating virtual services with placement_networks selected clears the subnet field in the ipam_network_subnet API request
- AV-67064: Azure: In a combination of virtual services with and without public IP addresses placed on the same SE, a virtual service scale-in causes downtime
- AV-67113: BGP route advertisement fails if an SE BGP peer is a part of /31 network
- AV-67143: Log manager is not ready when messages from the SE are received
- AV-67316: OpenShift: On Controller upgrade from Avi version below 17.2.14 (or upgraded from < 17.2.14 to a newer release) to 18.2.5, some old, inactive routes may not be updated
- AV-67377: AWS cloud configured with non-existent management network can result in reachability issues for virtual services in all clouds
- AV-67550: WAF: Intermittent corruption in response data when WAF response rules are enabled
- AV-67644: SE failure due to memory exhaustion in the se_log_agent process
- AV-67647: Child SNI virtual services do not get placed in VMware / ACI cloud
- AV-67660: Upgrade might fail from 18.2.3 to 18.2.5 during configuration import
- AV-67724: BGP profile level keepalive or hold timer fails to take effect due to per-peer default timers
- AV-67895: Malformed packet causes policy engine to misbehave, causing SE failure
- AV-68183: Controller-based events are not generated as alerts and not sent as trap/syslog
- AV-68190: The SNI hostname is not sent to the back end when HTTPS monitor is bound to the pool and the SSL attributes are not enabled in the HTTPS health monitor
- AV-68191: OpenShift: With certain OpenShift 3.11 versions, the securitycontextconstraints API is not backward compatible, causing route sync to fail
- AV-68319: Back-end services hosted on the Kubernetes nodes can become unreachable from the SEs hosted on the same node(s) when using RancherOS with Calico CNI
- AV-68385: Azure: VM goes into inconsistent state with the error NIC not found when the NIC is deleted during VM creation
- AV-68512: OpenShift: Service Engine running on OpenShift on RHEL 7.7 stops processing packets a few minutes after initialization
- AV-68519: Added option to close connection if plain-text HTTP request received on SSL service port
- AV-68565: Error in downloading configuration backup from Avi Controller
- AV-68971: OpenShift: Unable to create a virtual service because the application profile was referenced from the wrong tenant
- AV-68995: SE may fail with PingID policy when a user identity is set
- AV-69183: gRPC auth keys copied to wrong directory on follower nodes
- AV-69186: Application learning is not working when PSM groups are created in a different tenant
- AV-69223: No logs are displayed in the UI when the search service is down
- AV-69265: Traffic capture does not get terminated even after reaching the configured duration
- AV-69266: Azure: Creating se_dp processors based on the number of cores
- AV-69301: When a clone server is deleted, there is a possibility of an SE failure
- AV-69317: GSLB FQDN uniqueness check fails, leading to sites being out of sync
- AV-69318: A vCenter password with non-ASCII characters is not accepted due to encoding issues
- AV-69351: With the connection multiplexing feature enabled for a Layer 7 virtual service, traffic cloning with preserve_client_ip does not work as expected
- AV-69577: In the AWS configuration dialog, the cross account roles may not be listed when the Use cross account assume role option is selected
- AV-69630: Azure VIP handling in Avi can cause the IP address pool to be shared by both regular virtual services and egress source IPs, resulting in conflicts
- AV-69715: High memory usage reported on Service Engines after upgrade to 18.2.5
- AV-70130: If the system has shared VIP virtual services, the Service Engines of these virtual services can get stuck in the admin_down_requested state, resulting in a cascading effect of errors in the upgrade process and scaling in / migration operations on the virtual service
- AV-70164: Creating a GSLB service for a TLS enabled ingress object fails in a Kubernetes environment
- AV-70442: GSLB Health Monitor not functioning as expected due to incorrect namespace
- AV-70447: When a Keystone token is used for authentication, tenant check validation was not performed for that user, resulting in access being allowed to resources in other tenants
- AV-70456: When a client sends a DNS request to an Avi DNS virtual service, and the client request gets directed to a site based on a DNS topology policy, the client location in the client logs is reported incorrectly as the IP address group used in the DNS policy
- AV-71043: Virtual services go to fault state due to SSLCert update
- AV-71117: While editing an LDAP profile, the SE fails if the information in the field Required User Group Membership (require_user_groups) is removed
- AV-71303: If virtual service IP addresses get deleted from the Oracle cloud, virtual service placement fails
- AV-71331: When the System-DNS application profile is used for the DNS virtual service, DNS resolution via TCP leaves TCP client connections open
- AV-71471: Inbound rules are missing for the VIPs created after configuring vip_default_gateway, and when the OpenShift or Kubernetes cloud is updated multiple times before this configuration
- AV-71490: Infoblox IPAM-only configuration fails if the DNS view default is renamed or a non-default network view is used
- AV-71672: Backup of a large configuration fails if the total size of objects of a given type exceeds a specific size limit
- AV-71743: GSLB: When a GSLB group name is longer than 75 characters, it may result in an SE fatal error
- AV-72190: GSLB: Updates to GSLB objects do not percolate to the follower sites if the original GSLB object had errors in the past
Key Changes in 18.2.6
- Starting with Avi Vantage release 18.2.6, the upgrade is a two-step process. It includes the following:
- Uploading an image or a patch using the image REST API.
- Initiating upgrade operations using the new REST API or Avi CLI.
- APIs for upgrade and upgrade status in Avi Vantage release 18.2.6 are different from the APIs used before the 18.2.6 release (an illustrative sketch of the two-step flow follows this list).
- Avi Controller: The default Controller OVA template should be increased to 128 GB.
- Licensing: License enforcement enabled: Service Engine capacity is restricted to the licenses available on the Controller
- UI: Tenant switching moved to a drop-down for easier operation
- UI: Application dashboard displayed automatically on switching tenants
- UI: New interface for monitoring upgrades and triggering emergency rollback
- (Tech Preview) ProjectX : Controller - Avi customer portal communication for automated case creation and tech-support upload
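To illustrate the two-step upgrade flow called out in the first key change above, here is a purely hypothetical sketch using curl. The endpoint paths, payload fields, and placeholders are illustrative assumptions and are not taken from the Avi API reference; consult the upgrade documentation for the actual REST calls and the equivalent Avi CLI commands.

    # Hypothetical sketch only; paths, fields, and placeholders are assumptions.
    # Step 1: upload the image or patch package through the image API.
    curl -k -u admin:<password> -F "file=@controller-18.2.6.pkg" https://<controller-ip>/api/image
    # Step 2: initiate the upgrade through the upgrade API, referencing the uploaded image.
    curl -k -u admin:<password> -H "Content-Type: application/json" \
         -d '{"image_ref": "/api/image/<image-uuid>"}' https://<controller-ip>/api/upgrade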
Known Issues in 18.2.6
- AV-72774: OpenStack: Virtual service stops working intermittently after upgrading to 18.2.6. To avoid this, ensure that the TX ring size is modified to 128, and reboot the Service Engine to apply the configuration.
- AV-74599: Objects having non-ASCII characters in names fail during full_system export or upgrade. Run the script at https://github.com/avinetworks/devops/blob/master/python/export_unicode_issue_script.py on the Controller to identify such objects.
- AV-77027: No 3DES ciphers supported with the new OpenSSL stack
- AV-101464: A version incompatibility is causing Thales Luna HSM (formerly SafeNet Luna HSM) and AWS CloudHSMv2 interoperability to fail for Avi versions 18.2.6 and onwards.
Workaround: None
Issues Resolved in 18.2.5 Patch Releases
Issues Resolved in 18.2.5-4p2
- AV-71043: Virtual services go to Fault state due to SSLCert update
Issues Resolved in 18.2.5-4p1
- AV-67064: Azure: With a combination of virtual services with and without public IP addresses placed on the same Service Engine, a virtual service scale in causes downtime
- AV-67644: SE failure due to memory exhaustion in the Service Engine logging event process
Issues Resolved in 18.2.5-3p7
- AV-83643: Service Engine fails when connection multiplexing is disabled while using poolgroups and server goes down between requests on the same connection
Issues Resolved in 18.2.5-3p6
- AV-30408: Pool groups are not supported if connection multiplexing is disabled in the application profile
Issues Resolved in 18.2.5-3p5
- AV-70131: Controller not able to stream logs
- AV-72190: Updates to GSLB objects do not percolate to the follower sites if the original GSLB object had errored in the past
- AV-72196: GeoDB is not processed correctly if the SE is under configuration pressure
- AV-72384: SE process can get to infinite loop when corrupted SSL data is received from backend
- AV-72565: FQDN resolution at very low dns_refresh_interval starves GSLB Leader from issuing health status queries to follower
- AV-72951: ‘all-tenant’ queries for non-admin user fails if the user does not have access to ‘admin’ tenant
Issues Resolved in 18.2.5-3p4
- AV-70456: When a client sends a DNS request to an Avi DNS virtual service, and the client request gets directed to a site based on a DNS topology policy, the client location in the client logs is reported incorrectly as the IP address group used in the DNS policy
- AV-71331: When the System-DNS application profile is used for the DNS virtual service, DNS resolution via TCP leaves TCP client connections open
- AV-71606: A GSLB group name longer than 75 characters may result in an SE fatal error
- AV-71672: Backup of large configuration fails if the total size of objects of a given type exceeds a specific size limit
- AV-72113: Health monitor does not use the correct hostname, if a pool member with same IP:port has a different hostname
Issues Resolved in 18.2.5-3p3
- AV-65216: When DNS resolution is used for a pool, the port number resets to inherit the default port in the pool
- AV-68565: Not able to download backup file from the Controller
- AV-70130: If the system has shared VIP virtual services, the Service Engines of these virtual services can get stuck in the admin_down_requested state, resulting in a cascading effect of errors in the upgrade process and scaling in / migration operations on the virtual service
Issues Resolved in 18.2.5-3p2
- AV-69317: GSLB FQDN uniqueness check fails leading to SITE_OUT_OF_SYNC
Issues Resolved in 18.2.5-3p1
- AV-59662: After upgrade, older metrics are not visible.
- AV-67414: Time to Live (TTL) value is zero for DNS responses for static DNS records and GSLB service. Avi Vantage does not use TTL configured in the DNS application profile.
- AV-67644: SE failure due to memory exhaustion in the se_log_agent process.
- AV-67798: Support more than 16 fallback sites for DNS policy.
- AV-67981: Connection Multiplexing is not allowed on a virtual service referencing pool groups.
Issue Resolved in 18.2.5-2p35
- AV-132431: Mitigation for CVE-2021-44228.
Issue Resolved in 18.2.5-2p34
- AV-121761: LSC: On hosts with large memory (>= 256 GB), when the Controller is also running on the same host, Service Engine may fail due to memory fragmentation.
Issues Resolved in 18.2.5-2p33
- AV-112742: Increase the default disk size in the AWS image to 15 GB
- AV-111140: If the system has user names with “.” in them, the search functionality for those user names is broken.
Issue Resolved in 18.2.5-2p32
- AV-104285: While generating a self-signed RSA certificate on a FIPS-mode-enabled SafeNet appliance, the error Error when generating SafeNet HSM bound RSA key is displayed.
Issues Resolved in 18.2.5-2p31
- AV-103505: Changes to support SafeNet version 7.3.3
- AV-83367: Misconfiguration in authentication backend could cause intermittent logout of remote users
Issue Resolved in 18.2.5-2p30
- AV-94032: With more than 500 AWS auto scaling groups (ASGs) configured as pools, frequent updates to the ASGs can cause pool updates to fail with the error Timedout in executing CloudConnectorService.cc_lookup_nw request_pb
- AV-101200: The virtual service throughput may be rate limited when many scale-in/scale-out operations are done at a high traffic load
Issue Resolved in 18.2.5-2p29
- AV-66755: Controller certificate regeneration is not working under certain cluster triggers like removing nodes from a cluster. This can lead to Service Engines getting disconnected from the Controller on a cluster membership change.
Issue Resolved in 18.2.5-2p28
- AV-86782: Control plane health status is not marking GSLB member down
Issues Resolved in 18.2.5-2p27
- AV-69841: In the SE-Agent, a queue buildup can happen due to a high rate of metrics processing, leading to high memory consumption
- AV-88094: Service Engine on Azure could fail if the NIC’s link flaps
Issues Resolved in 18.2.5-2p25
- AV-81373: Extra VIPs on Service Engine data NICs belonging to disabled virtual service are not getting moved to parking NIC during reconcile
Issues Resolved in 18.2.5-2p23
- AV-79230: AWS: Calls to AWS cloud may hang with proxy in place, causing other virtual services to not get placed
- AV-80052: If a Service Engine has been running for more than 397 days, a spurious debug message causes high CPU utilization for the se_log_agent process, causing potential heartbeat failures
- AV-80740: Client connections failed due to CRL expiration
Issues Resolved in 18.2.5-2p22
- AV-66079: Preserve-client-IP support for TCP flows for routed backend
- AV-67665: DNS_PERMISSION added to the RoleService
- AV-77140: L4 DataScript support to modify/insert/discard UDP payloads
- AV-77280: DataScript payload may be invalid while using avi.l4.modify
- AV-77480: Service Engine may fail if the file /etc/vm_uuid cannot be opened
- AV-77729: Remove SSH access for non-superuser users
- AV-79191: Added Preserve-client-IP support for UDP flows for routed backend
Issues Resolved in 18.2.5-2p21
- AV-76140: AWS: Virtual service placement on Service Engines may fail due to cloud connector timeout if there are more than 500 virtual services
Issues Resolved in 18.2.5-2p20
- AV-71043: Virtual services go to fault state due to SSLCert update
Issues Resolved in 18.2.5-2p19
- AV-73983: BGP based virtual services fail to get placed in vCenter write access clouds when multiple BGP peers are configured
- AV-74134: Virtual service manager returns SYSERR_RM_NO_SE_IN_SE_GRP_VIP_ACC error for a BGP-enabled virtual service
- AV-76037: HTTP cookie persistence does not work when connection multiplexing is disabled
- AV-76301: In non-DPDK environments supporting hot plug, adding and removing multiple interfaces causes connection memory depletion owing to incorrect interface memory accounting. This leads to errors when processing new connection requests
Issues Resolved in 18.2.5-2p18
- AV-73846: With NSX-V integration, when a new virtual service is created, a DFW section is created but no firewall rules are added
Issues Resolved in 18.2.5-2p17
- AV-71935: Service Engine may fail due to a race condition when a virtual service with connection multiplex disabled has client IP persistence enabled
Issues Resolved in 18.2.5-2p16
- AV-74134: Virtual service manager returns SYSERR_RM_NO_SE_IN_SE_GRP_VIP_ACC error for a BGP-enabled virtual service
Issues Resolved in 18.2.5-2p15
- AV-69317: GSLB FQDN uniqueness check fails leading to SITE_OUT_OF_SYNC
- AV-72120: GSLB followers out of sync
- AV-72325: Oracle Cloud: Virtual service placement on SE may fail for short duration after restarting Avi Controller or cloud connector
- AV-72449: Avi Controller may fail to refresh pool servers associated with AWS autoscale group
- AV-72667: Unable to access the Controller UI using IE/Edge browser
- AV-73591: UI support for IE11
Issues Resolved in 18.2.5-2p14
- AV-64159: All traffic is allowed to server security group when the virtual service is disabled
- AV-70456: When a client sends a DNS request to an Avi DNS virtual service, and the client request gets directed to a site based on a DNS topology policy, the client location in the client logs is reported incorrectly as the IP address group used in the DNS policy
- AV-71231: Large transmission packets are not segmented to clone servers causing delays in packet processing logic
- AV-71672: Large configuration backup may fail if the total size of objects of a given type exceeds an internal limit
- AV-71988: AWS: Virtual services sharing the same VIP are placed on different vNICs on the Service Engine
- AV-72113: Health monitor does not use the correct hostname if a pool member with same IP:port has a different hostname
- AV-72194: NSX distributed firewall (DFW) populated with an incorrect rule allowing any-to-any access, and an incorrect port service used to run the health monitor, when creating or disabling a virtual service
- AV-72539: NSX-v DFW rule creation fails with NSX-v 6.4.5 and above due to API change
Issues Resolved in 18.2.5-2p13
- AV-71059: Upgrade from 17.2.7 fails in the migrate_config step if a separate partition is used for metrics
- AV-71349: Service Engine process can get into an infinite loop when corrupted SSL data is received from the backend
Issues Resolved in 18.2.5-2p12
- AV-67550: Intermittent corruption in response data when WAF response rules are enabled
- AV-67600: Azure: Connectivity issues to Azure APIs can cause some operations to fail with the error message unsupported operand type(s) for -=: 'Retry' and 'int'
- AV-70707: WAF learning: Flagged or erroneous requests are used for learning
- AV-71994: SE occasionally skips sending application learning data to the Controller
- AV-72042: WAF learning does not create PSM rules automatically
- AV-72360: WAF learning messages do not reach the correct Controller
Issues Resolved in 18.2.5-2p11
- AV-70442: When a DNS virtual service is placed on an SE that contains multiple name spaces, and the interface on which the DNS VS is placed is a port-channel, the VRF chosen by the DNS VS for health monitoring GSLB services may not be the right one resulting in health monitors staying down
- AV-71303: If virtual service IP addresses get deleted from the cloud, virtual service placement fails
- AV-71331: When the System-DNS application profile is used for the DNS virtual service, DNS resolution via TCP leaves TCP client connections open
- AV-71490: Infoblox IPAM-only configuration fails if the DNS view default is renamed or a non-default network view is used
- AV-71606: A GSLB group name longer than 75 characters may result in an SE fatal error
Issues Resolved in 18.2.5-2p10
- AV-67892: Upgrade taking longer than expected due to SeScaleOutReady time out
- AV-69186: Application learning is not working when PSM groups are created in a different tenant
- AV-69211: Event verification failed with percent_remaining is not 0.0 error
Issues Resolved in 18.2.5-2p9
- AV-68512: Service Engine running on OpenShift RHEL 7.7 stops processing packets in a few minutes after initialization
- AV-69577: In AWS configuration dialog, the cross account roles may not be listed when use cross account assume role option is selected
Issues Resolved in 18.2.5-2p8
- AV-65216: When DNS resolution is used for a pool, the port number resets to inherit the default port in the pool
- AV-69578: Update GeoDB to latest MaxMind GeoLite2
- AV-69715: High memory usage reported after upgrading to 18.2.5
- AV-70130: If the system has shared VIP virtual services, the Service Engines of these virtual services can get stuck in the admin_down_requested state, resulting in a cascading effect of errors in the upgrade process and scaling in / migration operations on the virtual service
Issues Resolved in 18.2.5-2p7
- AV-67918: TCP-Proxy idle timeout range needs to be enhanced
- AV-68183: Controller-based events are not generated as alerts and not sent as trap/syslog
- AV-68512: Service engine running on OpenShift on RHEL 7.7 stops processing packets in a few minutes after initialization
- AV-69223: No logs in the UI as search service is down
- AV-69301: When a clone server is deleted, the SE may fail due to invalid clone server indexing
Issues Resolved in 18.2.5-2p6
- AV-60084: If multiple FQDNs are added to a virtual service, only the first one gets registered to AWS Route 53
- AV-66909: Connectivity issues with the API server can cause API calls to take significant amount of time, stalling syncing of ingresses/apps
- AV-67000: UI: Infoblox IPAM: Virtual service create with placement_networks selected clears the subnet field in the ipam_network_subnet API request
- AV-68191: With certain OpenShift 3.11 versions, the securitycontextconstraints API is not backwards compatible, causing route sync to fail
- AV-68565: Not able to download backup file from the Controller
- AV-68949: UI: Subnet for VIP allocation is removed once allocation IP type is removed and then selected again
- AV-69360: Traffic to scaled out virtual service fails on RancherOS based K8s
Issues Resolved in 18.2.5-2p5
- AV-67723: DataScript API to get latitude and longitude co-ordinates for an IPv4 address
Issues Resolved in 18.2.5-2p4
- AV-67723: DataScript API to get latitude and longitude co-ordinates for an IPv4 address
Issues Resolved in 18.2.5-2p3
- AV-67113: BGP route advertisement fails if Service Engine and BGP peer are part of the /31 network
- AV-67644: SE failure due to memory exhaustion in the Service Engine logging event process
- AV-67895: Service Engine failure due to malformed packet causing policy engine to misbehave
Issues Resolved in 18.2.5-2p2
- AV-66551: Virtual service is not placed on a Service Engine in VMware write access cloud, if ID networks are configured for static IP allocation under race conditions
- AV-67647: Child SNI virtual services do not get placed on VMware / ACI cloud
- AV-67798: Support more than 16 fallback sites in DNS policy
Issues Resolved in 18.2.5-2p1
- AV-59662: Post upgrade, old metrics are not visible on Avi Vantage
- AV-67316: Upgrade of Avi OpenShift deployments from versions below 17.2.14 to 18.2.5 may cause certain old, inactive routes to not get updated. This also includes the upgrade paths 17.2.10 -> 17.2.x(14+) -> 18.2.5 and 17.2.10 -> 18.2.x(2+) -> 18.2.5
- AV-67414: Time to Live (TTL) value is zero for DNS responses for static DNS records and GSLB service. Avi Vantage does not use TTL configured in the DNS application profile. Workaround is to configure TTL in the GSLB service and for the static records
What’s New in 18.2.5
ADC
- Support for HTTP/2 Metrics
- Support for standalone Infoblox IPAM
- Support for L2/L3 Direct Server Return
- Support for RADIUS/DHCP/RDP attribute based load balancing
- Increased maximum configurable range of session idle timeout to 7200 in the TCP fastpath profile
Analytics
- Support for multiple external log analytics streaming endpoints
- Support for trigger events based on Controller node performance metrics
- Support for IPv6 external health monitor
Automation
- Support for Avi Terraform
- SAML support for Python SDK and Ansible
- Ansible: Support for Avi Objects lookup plugin
- Go SDK Enhancements: Support for advanced API options like cloud, tenant, and parameters
- Support for fileservice in Ansible and Terraform
- Support for Ansible credentials to obfuscate only sensitive data
DataScript
DNS
- Support for DNS SOA query
- Support for Avi DNS virtual service to resolve CNAMEs
GSLB
- Support to retrieve specific number of records from multiple GSLB pools
- Support for GSLB DNS virtual service on SE Group in active/standby HA mode
- Support for viewing GSLB configuration synchronization status
- SNI support for custom host header for HTTPS health monitor
Layer 7 Proxy
- Support for IP to ASID mapping
- Whitelisting support for SAML authentication
Logging
Networking
- Support for Outbound NAT
- Support for default gateway per VRF
- Support for SNI name based pool switching in Layer 4 proxy
- Support for routing per VRF: Remove deprecated fields from SE group
Public Cloud
- Azure: Multi-LB support for increased VS scale per SE-Group
- Azure: Support for static virtual service IP address
- Azure: Support for multiple VIPs in a virtual service
- Azure: Support for dedicated management NIC for VMs
- GCP: Support for full access
- GCP: Support for Cloud Router integration
Security
- WAF: Support for positive security model
- WAF: Support for whitelisting
- WAF: Support for auto-generating positive model rules based on the traffic patterns
- WAF: Update to CRS 3.1
- Support for certificates installed in admin tenant to be made available in non-admin tenant
- SafeNet HSM Version 7 is supported
System
- Support to access and manage multiple Controller clusters from the Avi UI
- Support for various tech support generation using Avi UI
Issues Resolved in 18.2.5
- AV-56238: Stale NIC offload flags in mbufs were stalling NIC transmit queues
- AV-58188: DNS health monitor does not allow querying AAAA record
- AV-59904: Support for using port-security option for Neutron OpFlex plugin
- AV-60072: OpenShift: If a pod goes into “not_ready_addresses” state temporarily, it may be removed from the pool in Avi causing traffic disruption to the route
- AV-60897: Update-pciids hangs when there is no internet connectivity
- AV-61057: AWS Autoscale groups with target groups attached in the environment causes polling of autoscale groups to fail
- AV-62259: Multiple dispatchers are not in effect even when enabled for Intel 25G NIC
- AV-63248: OpenStack: Virtual services may become unavailable during an upgrade for upto 10 minutes in OpenStack environment with Nuage SDN integration
- AV-63282: OpenStack: Virtual service with references to missing networks in OpenStack can cause other virtual services to go down
- AV-63405: Listing of AWS Autoscaling groups in the pool configuration UI can fail and cause AWS_ASG_FAIILURE event
- AV-63454: Support for Syslog over TLS
- AV-63632: Health monitor fails even on a successful response if the response has a header size that is > 2048 bytes
- AV-63829: OpenStack: Glance image upload fails
- AV-64025: Service Engine may fail during metrics reporting for a DNS virtual service
- AV-64167: OpenStack: Avi deletes OpenStack port that was created for IP reservation
- AV-64198: When GSLB site cookie persistence is enabled, the corresponding SP pool gets created in the default cloud instead of the actual cloud where the virtual service (GSLB pool member) is present
- AV-64256: Service Engine fails if a virtual service with connection multiplexing disabled in the application profile refers to a pool group
- AV-64306: With HTTP 1.0, a non-KeepAlive TCP connection can linger even after the request is served, causing clients to slow down
- AV-64643: Azure: Payload can be truncated if multiple smaller packets are coalesced to a single packet of size 64K because of GRO
- AV-64656: avi.http.redirect() in DataScript does not keep the virtual service in up state
- AV-64674: SACK related vulnerabilities identified by CVE-2019-11477, CVE-2019-11478, and CVE-2019-11479
- AV-64858: show serviceengine <se> bgp debug in a highly scaled out system causes the SE agent to stall, leading to SE disconnection
- AV-64896: Disabling the debug_vrf_all flag under debugvrfcontext fails to disable the debugs
- AV-65152: AWS: Clone server configuration causes VIPs to go down if preserve_client_ip is not used
- AV-65212: Using an IP instead of a DNS name in the CSR results in the SAN being populated with DNS:x.x.x.x instead of IP:x.x.x.x
Known Issues and Workarounds in 18.2.5
- AV-64852: Upgrade fails if object names contain URI reserved characters
- AV-67414: Time to Live (TTL) value is zero for DNS responses for static DNS records and GSLB service. Avi Vantage does not use TTL configured in the DNS application profile. Workaround is to configure TTL in the GSLB service and for the static records.
Key Changes in 18.2.5
- For container environment, the NTP and DNS settings need to be configured on the host. The existing system configuration on the Controller will not be applicable.
Issues Resolved in 18.2.4 Patch Releases
Issue Resolved in 18.2.4-13p2
- AV-91369: SE failure when the cookie being encrypted is larger than 4K
Issue Resolved in 18.2.4-13p1
- AV-90217: Requests resulting in a SAML authentication loop
Issues Resolved in 18.2.4-12p1
- AV-72685: If multiple CLI sessions are running simultaneously and a heavy object is loading in the memory, then the CLI usage increases leading to command timeouts
Issues Resolved in 18.2.4-11p1
- AV-71043: Virtual services go to Fault state due to SSLCert update
Issues Resolved in 18.2.4-10p1
- AV-68995: Service Engine might fail with PingID policy when user identity is set
Issues Resolved in 18.2.4-9p1
- AV-68505: Azure: SE creation with PAYG license may fail
Issues Resolved in 18.2.4-8p3
- AV-79170: Avi portal needs manual restart after renewing the Controller certificate
Issues Resolved in 18.2.4-8p2
- AV-70447: When a Keystone token is used for authentication, tenant check validation was not performed for the user, which allowed access to resources in other tenants
Issues Resolved in 18.2.4-8p1
- AV-65826: Automatic certificate renewal script is timing out in specific tenant and then renewing the certificate in admin tenant
Issues Resolved in 18.2.4-7p2
- AV-65216: When DNS resolution is used for a pool, the port number resets to inherit the default port in the pool
Issues Resolved in 18.2.4-7p1
- AV-59662: Post upgrade, old metrics are not visible on Avi Vantage.
- AV-65483: Under some race conditions, an Avi Controller node can regenerate the ssh keys that are used by other Avi Controllers or Service Engines to connect to this Avi Controller node, leading to loss of connectivity between them.
Issues Resolved in 18.2.4-5p2
- AV-65408: AWS cloud connector may fail to attach VIPs to Service Engines if the number of VIPs is more than 300
Issues Resolved in 18.2.4-5p1
- AV-65026: AWS: Security group rules allowing all traffic from 0.0.0.0/0 get added to the Service Engines even if SG_INGRESS_DATA option is set to None
Issues Resolved in 18.2.4-4p3
- AV-65216: When DNS resolution is used for a pool, the port number resets to inherit the default port in the pool
- AV-65483: Under some race conditions, a Controller node can regenerate its SSH keys that are used by other Controllers/Service Engines to connect to this Controller node, leading to loss of connectivity between them
- AV-65826: Automatic certificate renewal script is timing out in specific tenant and then renewing the certificate in admin tenant
- AV-67892: Upgrade taking longer than expected due to SeScaleOutReady time out
Issues Resolved in 18.2.4-4p2
- AV-64372: System patch does not get applied after a Controller reboot when the Controller is running as a docker container
- AV-65216: When DNS resolution is used for a pool, the port number resets to inherit the default port in the pool
Issues Resolved in 18.2.4-4p1
- AV-64092: Unable to bind the “Placement Network” to virtual service from the Controller UI
- AV-64351: Upgrade fails if there is an orphaned SNI child virtualservice in the configuration
- AV-64556: SNI child virtual service placement is not in sync after upgrade when “ignore-failure” option is used to resume the upgrade
- AV-64988: On a multi-VIP setup in AWS where virtual services are scaled out across AZs, each SE upgrade can take about 11-12 minutes (or more)
- AV-65026: AWS: Security group rules allowing all traffic from 0.0.0.0/0 get added to the Service Engines even if SG_INGRESS_DATA option is set to None
- AV-65408: AWS cloud connector may fail to attach VIPs to SEs if the number of VIPs is more than 300
Issues Resolved in 18.2.4-3p1
- AV-63777: Unable to list networks while creating a virtual service in UI for AWS cloud
Issues Resolved in 18.2.4-2p4
- AV-66026: In Avi Vantage version 18.2.4, depending on the SELinux status, Avi egress pods may not come up if they are not run in privileged mode
Issues Resolved in 18.2.4-2p3
- AV-64092: Unable to bind the “Placement Network” to virtual service from the Controller UI
- AV-66143: Support for SafeNet 7.x
Issues Resolved in 18.2.4-2p2
- AV-65219: Automatic deletion and recovery of GSLB service
Issues Resolved in 18.2.4-2p1
- AV-62309: Allow SSL key and certificate object to be shared from admin tenant
What’s New in 18.2.4
- DataScript: The new avi.http.saml_session_decrypt() function decrypts the SAML session cookie
Issues Resolved in 18.2.4
- AV-59538: Service Engine unable to connect back to the Controller after an upgrade from an Avi Vantage version prior to 17.2.8
- AV-60128: GSLB not marking pool member down
- AV-61294: Uploads to HTTP/2 VIPs can fail
- AV-61300: HTTP/2 POST requests with no “Content-Length” header gets a “400 Bad request” response
- AV-61769: Duplicate IPs obtained from Infoblox for VIPs with the same name/port
- AV-61819: Service Engine fails when a request with a cookie header size > 4k is sent in a SAML-authenticated session
- AV-61875: A few Service Engines remain in a partitioned state if both the leader Controller node and a follower Controller node are rebooted at the same time
- AV-61948: Service Engine fails during HTTP/2 upload, when connectivity to the back-end servers is down
- AV-62053: Configuring SSL profile selectors is not possible for SNI child virtual services when the child virtual service does not have a default SSL profile
- AV-62163: Health status syncing between GSLB sites fail after upgrading to 18.2.3 due to a deprecated field
- AV-62198: The session_id field is missing in the Avi REST API response, causing API failures
- AV-62203: UI: Connector lines were not rendering between the tree-view components on the virtual services dashboard
- AV-62256: Limit request and connection memory pool usage
- AV-62436: Service Engine fails while parsing decoded arguments in an HTTP URI, under memory pressure
- AV-62702: Virtual service creation or update fails in public clouds if enable_rhi flag is set to False
- AV-62744: Virtual service configured with PingAccess Agent integration does not support HTTP/2
- AV-62830: Service Engine fails when configuring PingAccess authentication profile
- AV-62836: Failure of HTTP/2 POST requests initiated via the Chrome browser
- AV-62852: API call to filter event logs gets stuck at percent_remaining:78 after upgrade
- AV-62916: GSLB health monitoring fails in AWS due to a mismatch of the VRF UUID between the Avi Controller and Service Engine
- AV-62960: HTTP POST requests from client without the Expect header can fail with a 400 response
- AV-62966: Licensing statistics might account for deleted Service Engines and prevent further Service Engines from getting created
- AV-62967: AWS: Moving from access-key or secret-key-based authentication to IAM role-based authentication retained the stale access key, causing permission failures tied to those keys and subsequent virtual service downtime
- AV-63025: GSLB may fail to consider geolocation configuration when DNS virtual service state is toggled
- AV-63213: Memory leak due to PingAccess-Agent-specific application logs
- AV-63226: Certificates are not being renewed with the intended SANs through the certificate management profile
- AV-63296: Some HTTP/2 POST requests get a 503 response
- AV-63407: Memory leak when PingAccess Agent is configured
- AV-63471: Failure in API calls to sslkeyandcertificate
- AV-63472: Updating a virtual service using the PATCH method on the /virtualservice endpoint results in {“error”: “Mandatory key not found: vip_id”} (see the API sketch after this list)
- AV-63480: Avi RUM (client insight) requests do not complete, hogging memory of data-path objects on Service Engine(s)
- AV-63588: Updating the VIP of a virtual service in OpenStack fails with an invalid subnet error
- AV-63802: Upgrade from 17.2.14 to 18.2.3 aborted due to an error in config_migrate
- AV-63928: After installing a Service Engine patch, newly created SEs are still instantiated without the patch
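Several items above concern the Controller REST API and its session handling (for example, AV-62198 and AV-63472). The following minimal Python sketch illustrates the usual login-then-PATCH pattern against /api/virtualservice. The controller hostname, credentials, and object UUID are placeholders, and the header and PATCH-body conventions shown here follow commonly documented Avi API usage; verify them against your Controller version before relying on them.

```python
# Minimal sketch of a typical Avi REST API update (context for AV-62198 / AV-63472).
# Hostname, credentials, and UUID are placeholders; adjust X-Avi-Version to your release.
import requests

CONTROLLER = "https://controller.example.com"   # placeholder
API_VERSION = "18.2.4"

session = requests.Session()
session.verify = False  # lab setup only; use proper CA verification in production

# Log in; the Controller returns session cookies, including a CSRF token.
resp = session.post(f"{CONTROLLER}/login",
                    json={"username": "admin", "password": "<password>"})
resp.raise_for_status()

headers = {
    "X-Avi-Version": API_VERSION,
    "X-CSRFToken": session.cookies.get("csrftoken", ""),
    "Referer": CONTROLLER,
}

# Read a virtual service, then PATCH a field. AV-63472 tracked a case where such a
# PATCH failed with "Mandatory key not found: vip_id".
vs_uuid = "virtualservice-00000000-0000-0000-0000-000000000000"  # placeholder
vs = session.get(f"{CONTROLLER}/api/virtualservice/{vs_uuid}", headers=headers).json()

patch_body = {"replace": {"enabled": True}}
resp = session.patch(f"{CONTROLLER}/api/virtualservice/{vs_uuid}",
                     json=patch_body, headers=headers)
print(resp.status_code, resp.json())
```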
Issues Resolved in 18.2.3 Patch Releases
Issues Resolved in 18.2.3-4p1
- AV-62198: Avi Controller will send both avi_session_id and session_id again in the REST API response
- AV-62702: Virtual service creation or update fails in public clouds if enable_rhi flag is set to False
Issues Resolved in 18.2.3-3p1
- AV-61720: vCenter discovery not proceeding when a VM’s vNIC was attached to a portgroup which did not have read permission for the user
- AV-61769: Infoblox issued duplicate IPs for VIPs with the same name/port
- AV-61875: Some of the Service Engines remain in partitioned state if both the leader and follower Controller nodes are rebooted at the same time
- AV-62309: Allow SSL key and certificate object to be shared from the admin tenant
Issues Resolved in 18.2.3-2p1
- AV-61875: Some of the Service Engines can remain in partitioned state if both the leader and a follower Controller node are rebooted at the same time
- AV-62163: Health status sync between GSLB Sites fails after upgrading to 18.2.3 as the upgrade site is unable to parse the response because of deprecated fields
- AV-62309: Allow SSL key and certificate object to be shared from admin tenant
Issues Resolved in 18.2.3-1p5
- AV-69266: Azure: Creation of se_dp processors based on number of cores
Issues Resolved in 18.2.3-1p4
- AV-63480: Client insight requests not completed on the Service Engine hogging data path objects memory
Issues Resolved in 18.2.3-1p3
- AV-63226: Certificates not renewed with the intended SANs through the certificate management profile
Issues Resolved in 18.2.3-1p2
- AV-61294: Uploads to HTTP/2 VIPs can fail
- AV-61948: Service Engine fails during a HTTP/2 upload, when connectivity to the back-end servers is down
- AV-62198: Avi Controller will send both avi_session_id and session_id again in the REST API response
- AV-62203: The connector lines not rendering between the tree-view components
- AV-62436: Service Engine failure while parsing decoded arguments in an HTTP URI, under memory pressure
- AV-62702: Virtual service creation or update fails in public clouds if enable_rhi flag is set to False
- AV-62744: Virtual service configured with a PingAccess auth profile does not support HTTP/2
- AV-62830: Service Engine failure while configuring a PingAccess profile
- AV-62916: GSLB health monitoring fails in an AWS environment due to a mismatch of the VRF UUID between the Controller and SE; health monitoring packets are sent from the incorrect VRF, causing route lookup failures and health monitor failures
- AV-62960: HTTP POST requests from a client without the Expect header can fail with a 400 error
- AV-62966: Licensing statistics might account for deleted Service Engines and prevent further Service Engines from getting created
- AV-62967: Virtual services on AWS in down state after an upgrade from version 17.2.2 to 18.2.3
Issues Resolved in 18.2.3-1p1
- AV-61787: DataScript API avi.http.saml_session_decrypt() to decrypt the SAML session cookie
- AV-61819: Service Engine failure when a request with a cookie header size greater than 4K is sent in a SAML-authenticated session
- AV-61875: Some of the Service Engines can remain in partitioned state if both the leader and a follower Controller node are rebooted at the same time
- AV-62053: Configuring SSL profile selectors is not possible for SNI child virtual services when the child virtual service does not have a default SSL profile
- AV-62163: Health status sync between GSLB Sites fails after upgrading to 18.2.3 as the upgrade site is unable to parse the response because of deprecated fields
- AV-62256: Limit request and connection memory pool usage
What’s New in 18.2.3
Release date: 2 May 2019
ADC
- Support to export session pre-master key for pcap decryption
- Support for HTTP policy reuse
- Support to include SNI extension as a part of HTTPS health monitor
- Support for JSON content in an Avi HTTP response policy
- Increase in the maximum configurable value for virtual service rate limiter
- Support for UDP health monitor with IPv6 server
- Support for configuring proxy protocol via Avi Vantage UI
- HTTP health monitor request size boosted to 2048 bytes
Analytics
DataScript
- Ability to reference the Avi request ID in a DataScript
- DataScript function datascript-avi-ssl-client-cert-validation checks for client cert validation status
- Support for DataScript event for LB Done state
GSLB
- Support for a different default LB algorithm, in case geolocation fails
- Support for topology-based load balancing (primary/fallback sites) as a GSLB algorithm, instead of a DNS policy
Security
- CRS: Support for updating the system default WAF policy with new rules
- WAF: Support for standard HTTP methods
- WAF: Support for excluding match elements via regex
- WAF: Support for all WAF exception options in UI
- Support to switch SSL profiles or control ciphers to be used based on client IP address
- PingAccess Agent support
Containers
- Kubernetes: Support for Egress Taints and Tolerations in Egress pod scheduling
- OpenShift: Option to assign FQDNs automatically to a VS in OpenShift cloud
- Support OpenShift/Kubernetes on top of OpenStack
- Support for allocating floating/elastic IP in AWS/OpenStack via annotation
- Containers on Azure: Support for static IP assignment to egress pod
Public Cloud
- AWS: Support for c5n instances for Service Engines
- AWS: Support Amazon S3 for Controller configuration backups
- AWS: Support in pool for an autoscaling group which has been created with Launch Template
- Azure: Support for multiple VIPs in a single virtual service
- Azure: Ability to override the Service Engine management network specified in cloud configuration, on a per-SE-group basis
- Azure: Option to select ALB type at SE group level
- Azure: Optimizations to VM scale set polling mechanism, to reduce API calls to Microsoft Azure
- Support for logging in using an SSH key installed while creating the Avi Controller in AWS clouds and setting the password
OpenStack
- Support for multiple networks with same CIDR
- Support for using port-security option for Neutron OpFlex plugin
Other Ecosystems
- Support for Cisco Cloud Services Platform (CSP) 5000 series appliances
- Support for Controller and Service Engine on KVM/QEMU
System
- Enhancement to limit the frequency of License Expiry emails
- Support for rotating log files in the /var/log/ directory on the Controller
Issues Resolved in 18.2.3
- AV-46453: Kubernetes: External IP is not updated when K8s service type is set to LoadBalancer
- AV-47046: End-to-End timing graphs not displayed
- AV-47080: Linux server cloud: Service Engine may fail on using multiple bond interfaces to advertise VIP via BGP
- AV-47181: On logging in as an administrator, the default tenant is not set to admin
- AV-51499: Avi Vantage not caching javascript query URI when */javascript is in the string group
- AV-51582: VIP connectivity is lost when a host key-value pair is configured in SE group settings
- AV-51693: In case of a failure, GSLB health checks are not performed on newly spawned Service Engines
- AV-52075: Reduction in Service Engine health score due to increased SE disk usage
- AV-52588: Server inventory response pages not paginated
- AV-52716: Service Engine failure on pool server reselect if the server is marked down at the same time
- AV-52722: NSX security groups are not populated in the UI
- AV-53119: Azure: Controller cluster goes down when the Controller VMs do not get scheduled for some time
- AV-53365: Incorrect handling of Nagios health monitor requests
- AV-53395: Azure: Service Engine CPU utilization reported by Avi Vantage is incorrect
- AV-53448: OpenStack: Neutron APIs timeout in a large deployment
- AV-53552: Unable to add an exclude_list to the rules for a crs_group in WAF Policy
- AV-53563: Intermittent requests to AWS pool members fail with “connection closed abnormally: conn deleted due to config update”
- AV-53816: Incorrect RBAC dependency causes error in Roles edited via the UI
- AV-53899: SE OVA download failure from the Controller if the Controller is running as a docker container
- AV-53914: SE failure when Response event DataScript runs in the context of HTTP Response generated by a request event DataScript
- AV-54003: Autorebalance configuration does not take effect for some service engine groups
- AV-54008: While using HTTP/2 with caching enabled, application page does not load properly
- AV-54081: Access to the Controller fails even after ACL preventing the access is removed
- AV-54109: Unable to update systemconfig with CLI scripting mode
- AV-54186: Service Engine failure when certificate expires
- AV-54752: Avi Vantage not acknowledging FIN packets, causing delays
- AV-54922: Linux server cloud: Failure when IPv6 is configured on the VIP and IPv4 on the pool
- AV-54931: Service Engine may fail when caching and WAF are enabled on a virtual service
- AV-55185: Kubernetes in AWS: Virtual service failed to start due to private IP address limit on the SE
- AV-55343: SE failure when a pool group is configured with redirect fail action with no destination
- AV-55410: Unexpected BGP flap due to BFD timing out
- AV-55454: SE Failure for VS with App Type System-SSL-Application when Network Profile type is set to TCP Fast
- AV-55686: SE_HM_EVENT_SHM_UP events in the logs not preceded by any corresponding DOWN events
- AV-55775: OpenShift: Multiple SE include/exclude attributes do not work
- AV-56113: OpenShift on Azure: One SE stuck in OPER_DISABLED mode even though the Kubernetes node is in Ready state
- AV-56197: Zone transfer through Avi DNS VS fails after a certain number of records are present
- AV-56236: Metrics: End-to-end timing graph in Virtual Service Analytics overlay not displayed
- AV-56495: Modifying the application’s domain name is not propagated to Infoblox DNS/IPAM
- AV-56528: Avi Vantage UI not showing all the pages in the ‘select servers from network’ view
- AV-56625: Fix for high Service Engine Persistence Table Usage
- AV-56660: Service Engine restarts when applying an Avi Controller patch
- AV-56674: AWS: Adding more than 200 servers to a pool fails
- AV-56697: SNMP trap for CONTROLLER_NODE_LEFT is generated as aviSystemAlert rather than aviControllerStatusChanged
- AV-56734: GSLB: Round robin behavior fails when num_dns_ip is set to 0 and multiple pools have the same priority
- AV-57344: VIP traffic from an external client fails when OpenShift/Kubernetes clusters have more than 1 NIC and the VIP NIC is not the default gateway interface
- AV-57616: Failure in metrics APIs for user-defined/custom metrics
- AV-58101: Service Engine failure due to BGP peer monitoring blocking data path for more than 60 seconds
- AV-58121: Kubernetes: Any non-error egress pod log also gets dumped to the screen
- AV-58181: Handle application of IPv6 routes with /48 mask properly
- AV-58426: Service Engines can fail to connect to the Controller due to a race condition that triggers the cluster services watcher process on the leader node to go into an inconsistent state
- AV-58446: When the link of physical function flaps, the virtual functions need to send a reset to recover network connectivity
- AV-58483: HTTP Response Policy is not displayed correctly in Avi Vantage UI
- AV-58530: External Health Monitor using ldapsearch fails
- AV-58537: Service Engine fails on GSLB follower site when the leader site pushes an incompatible TCP health monitor
- AV-58660: Polling for Azure VM scalesets stops if a scaleset is deleted from Azure, without removing it from the Avi Pool
- AV-58831: SNAT sharing between VSes does not work for legacy HA
- AV-58886: Service Engine thread gets stuck when momentary access fails in the check for a specific SE pod, causing the SE’s IP resolution to fail and potentially the extra SE object not getting cleaned up
- AV-58900: AZURE_ACCESS_FAILURE event is not generated if access to Azure APIs fails after the cloud is up
- AV-58901: Auth Profile cannot be configured using FQDN in System configuration
- AV-58954: DataScript transform fails when the name of a stringgroup object referred by the DataScript is changed after creation
- AV-58986: After a Service Engine failure due to a kernel panic, the SE fails to reconnect to the Controller
- AV-59039: Replication issues between GSLB sites
- AV-59049: Using underscore in Service Engine group name causes daemonset creation failure in K8s/OC cloud
- AV-59053: GCP: Malformed URL error when adding route
- AV-59159: OpenShift: Attribute list in K8s/OC cloud configuration with additional SE groups causes excessive SEs to be spawned
- AV-59202: Unable to set maintenance code to HTTP health monitor
- AV-59255: All Controller nodes marked as “initializing” with service temporarily unavailable
- AV-59279: Existing Routes/Ingresses can get deleted if there are K8s API server connectivity issues in rare scenarios
- AV-59388: avi_proxy gslb annotation to update the content switch httppolicyset rule under a child virtual service with the created GSLB FQDN
- AV-59497: After upgrading to 18.2.2, OpenShift Routes with no Host/Path will not work without explicitly sending a Host header in the HTTP request, as Avi programs a default 404 rule
- AV-59502: Service Engines stuck in disabled state upon changing SE group CPU/Memory/Disk Size
- AV-59530: Stale PCI ID-to-name mapping in Linux prevents release of NIC to kernel
- AV-59542: SE may fail with UDP per pkt virtual service preserving client IP and client port if client reuses the port
- AV-59639: AWS deployment fails if userdata is not provided
- AV-59642: VS Placement fails to follow legacy HA tags for VS with shared VIPs sometimes, when all such VSes were disabled and are enabled in any order
- AV-59647: AWS: When servers are moved to standby in autoscale groups and then terminated, it can cause polling of ASGs to stop
- AV-59658: While integrating with OpenStack Queens or higher releases, image upload might fail if interoperable image-import feature is enabled in glance service
- AV-59699: Cisco ACI: Secondary SE may directly send a RST packet instead of tunneling it to the primary causing wrong MAC learning for the VIP
- AV-59736: Process se_dp on Service Engine fails when a Virtual Service referencing a shared pool is deleted
- AV-59922: Updating an ingress annotation with invalid JSON causes the Virtual Service to be deleted
- AV-60068: Service Engine failure when a parent VS is disabled while there is an existing connection to the child VS and connection multiplexing is disabled
- AV-60201: Kubernetes ingress annotation does not respect the specified version field
- AV-60304: On config restore to new Controller, Service Engines unable to connect back to Controller
- AV-60460: When connection multiplexing is turned off, the requests coming on the client connection are sent on the back-end connection
- AV-60527: Controller with ipset rules configured does not bring up the eth0 as /etc/network/pre-up.d script is failing
- AV-60591: Egress pod replication controller requires additional rights and initContainers in 18.2.2
- AV-61073: Azure: Update of the pool fails when the same IP is being used by another server in a different scale set
Known Issues and Workarounds in 18.2.3
- AV-61294: Uploads to HTTP/2 VIPs can fail in some cases, especially with a combination of a fast client and a slow server. It is recommended to disable HTTP/2 on VIPs (see the sketch after this list). This does not affect any file uploads to HTTP/1 VIPs.
- AV-61380: When Avi Vantage is upgraded from 17.2.x to 18.2.3 on GCP in DPDK mode, the Service Engine loses its management interface when it comes up after the upgrade. The SE can be recovered by rebooting the SE VM after the upgrade.
- AV-61787: Unable to decrypt the SAML session cookie due to an error in the avi.crypto.decrypt API
- AV-61819: Service Engine fails when a request with a cookie header size > 4 KB is sent in a SAML-authenticated session
- AV-61875: Some Service Engines can remain in a partitioned state if both the leader and a follower Controller node are rebooted at the same time
- AV-62053: Configuring SSL profile selectors is not possible for SNI child virtual services when the child VS does not have a default SSL profile
- AV-62163: Health status syncing between GSLB sites fails as the upgrade site is unable to parse the response because of deprecated fields
- AV-62256: Disabled check for the request and connection memory pool usage causes SE failure
- AV-62702: Virtual service creation or update fails in public clouds if enable_rhi flag is set to False
- AV-62262: Traffic loss on a virtual service caused by an unsupported user-defined metric in the DataScript
- AV-62821: For geo load-balancing at GSLB service level, when the distance between the members is smaller compared to the number of members in the pool, then some of the pools are considered to be equi-distant from the client, and a different pool than the desired one could be picked
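For AV-61294 above, the stated workaround is to disable HTTP/2 on affected VIPs. The minimal Python sketch below shows one way this could be done through the REST API; it assumes HTTP/2 is toggled by an enable_http2 flag on each entry of the virtual service's services list (confirm the exact field name on your Controller version), and the hostname, credentials, and virtual service name are placeholders.

```python
# Sketch of the AV-61294 workaround: disable HTTP/2 on a virtual service's service ports.
# 'enable_http2' on each "services" entry is an assumed field name; verify before use.
import requests

CONTROLLER = "https://controller.example.com"   # placeholder
HEADERS = {"X-Avi-Version": "18.2.3", "Referer": CONTROLLER}

s = requests.Session()
s.verify = False  # lab setup only
s.post(f"{CONTROLLER}/login",
       json={"username": "admin", "password": "<password>"}).raise_for_status()
HEADERS["X-CSRFToken"] = s.cookies.get("csrftoken", "")

# Look up the virtual service by name, clear the assumed HTTP/2 flag, and write it back.
vs = s.get(f"{CONTROLLER}/api/virtualservice?name=my-vs", headers=HEADERS).json()["results"][0]
for svc in vs.get("services", []):
    svc["enable_http2"] = False        # assumed field name
resp = s.put(f"{CONTROLLER}/api/virtualservice/{vs['uuid']}", json=vs, headers=HEADERS)
print(resp.status_code)
```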
Issues Resolved in 18.2.2 Patch Releases
Issues Resolved in 18.2.2-9p1
- AV-61345: Add GRATARP support for BGP virtual service
Issues Resolved in 18.2.2-8p2
- AV-61355: SAML: Service Engine fails when request on an old connection comes in after SSO has been disabled
- AV-61787: DataScript API avi.http.saml_session_decrypt() to decrypt the SAML session cookie
- AV-61819: Service Engine failure when a request with a cookie header size greater than 4K is sent in a SAML-authenticated session
Issues Resolved in 18.2.2-8p1
- AV-60068: Service Engine failure when a parent virtual service is disabled while there is an existing connection to the child virtual service and the connection multiplexing is disabled
Issues Resolved in 18.2.2-7p1
- AV-55775: OpenShift: Multiple SE include/exclude attributes do not work
- AV-57344: VIP traffic from an external client fails when OpenShift/K8S clusters have more than 1 NIC and the VIP NIC is not the default gateway interface
- AV-58121: Any non-error egress pod log also gets dumped to the screen
- AV-58886: SE thread gets stuck when momentary access fails in the check for a specific SE pod, causing the SE’s IP resolution to fail and potentially the extra SE object not getting cleaned up
- AV-59279: Existing routes/ingresses can get deleted if there are K8S API server connectivity issues in rare scenarios
- AV-59378: Default drop rule for host matching results in 404 for traffic for a route with no host defined
- AV-59497: After upgrade to 18.2.2 OpenShift routes with no host/path will not work without explicitly sending a host header in the HTTP request as Avi programs a default 404 rule
- AV-59502: SEs can be stuck in disabled state upon changing SE group CPU/memory/disksize
Issues Resolved in 18.2.2-6p3
- AV-71043: Virtual services go to Fault state due to SSLCert update
Issues Resolved in 18.2.2-6p2
- AV-67064: Azure: With a combination of virtual services with and without public IP addresses placed on the same Service Engine, a virtual service scale-in can cause down time
Issues Resolved in 18.2.2-6p1
- AV-58900: AZURE_ACCESS_FAILURE event is not generated if access to Azure APIs fails after the cloud is up
Issues Resolved in 18.2.2-5p1
- AV-58426: Service Engine fails to connect to the Controller triggering issues with cluster service watcher process
Issues Resolved in 18.2.2-4p1
- AV-59394: Reset connection when client certificate validation fails
Issues Resolved in 18.2.2-3p2
- AV-61073: Azure: Update of the pool fails when same IP is used by another server in different scale set
Issues Resolved in 18.2.2-3p1
- AV-58660: Polling for Azure VM scalesets stops if a scaleset is deleted from Azure, without being removed from Avi pool
Issues Resolved in 18.2.2-2p1
- AV-57344: VIP traffic from an external client fails when OpenShift/K8S clusters have more than 1 NIC and the VIP NIC is not the default gateway interface
- AV-58886: SE thread stuck when momentary access fails for a specific SE pod check causing the SE’s IP resolution to fail and potentially the extra SE object is not cleaned up
Issues Resolved in 18.2.2-1p3
- AV-61051: Disable PCAP look-ahead logic to bring down CPU utilisation in dispatcher
- AV-58426: Service Engines can fail to connect to the Controller due to a race condition that triggers the cluster services watcher process on the leader node to go into an inconsistent state that responds to the Service Engine with no active members in the cluster
Issues Resolved in 18.2.2-1p1
- AV-56674: Adding more than 200 servers to a pool fails on AWS
What’s New in 18.2.2
Release date: 6 March 2019
ADC
- Support for configuring the number of SIP messages logged per SIP transaction and codes to ignore for better error classification
- Support for Mellanox ConnectX-5 with DPDK
Containers
- OpenShift: Configuration knob to assign FQDNs automatically to a virtual service in OpenShift clouds
- Kubernetes: Support for egress taints and tolerations in egress pod scheduling
OpenStack
Public Cloud
- Azure: Support for user-configured polling interval for Azure virtual machine scale sets
Security
- Support for selecting include subdomains under HSTS in SSL
- Support for client authentication with SAML 2.0, integration with a third-party IdP as a service provider initiating SSO
- WAF: Support for excluding match elements via regex
- Support added to the Avi SDK to use IdP credentials for SDK as well as REST API login
- Support for Microsoft Active Directory Federation Services as an IdP
System
- Configuration knob for enabling and disabling session key capture when debugging a virtual service
- Support for storing tech-support output on the Controller and a UI link for downloading the file
UI
- Support for displaying pool connection properties in the pool wizard
- Support for displaying bandwidth license entitlement and usage
- Support for scaleout ECMP flag under the virtual service page
Key Changes in 18.2.2
Issues Resolved in 18.2.2
- AV-46453: Kubernetes: External IP is not updated when k8s service type is set to LoadBalancer
- AV-51499: Avi Vantage not caching javascript query URI when ‘*/javascript’ is in the string group
- AV-52075: Post-upgrade Service Engine health score reduced due to increased disk usage
- AV-52588: Server inventory response pages not paginated
- AV-53119: Controller cluster HA: Fixes for better reconvergence
- AV-53301: Virtual Service -> Security overlay graphs missing data
- AV-53365: Incorrect handling of Nagios health monitor requests
- AV-53395: Azure: Rectify Service Engine CPU utilization values reported by Avi Vantage
- AV-53448: OpenStack: Fix timeout issues with cloud connector RPC requests
- AV-53547: Reduction of max SE per virtual service in the SE group does not take effect even after virtual service is disabled/enabled
- AV-53552: Allow addition of an exclude_list to the rules for a crs_group in WAF policy
- AV-53899: Service Engine OVA download failure from the Controller
- AV-53902: Configuring proxy protocol in UI does not work
- AV-53914: Service Engine failure when response event DataScript runs in the context of HTTP response generated by a request event DataScript
- AV-53966: Controller services may restart on Controller instances that have a large number of CPUs
- AV-53972: Metrics database usage increases on using client insights
- AV-54003: Autorebalance configuration did not take effect for some Service Engine groups
- AV-54008: On using HTTP/2 with caching enabled, application page does not load properly
- AV-54081: Access to the Controller fails even after ACL preventing the access is removed
- AV-54109: Unable to update system configuration with CLI scripting mode
- AV-54186: Virtual service goes into fault state when certificate expiry warning is generated
- AV-54302: Avi with Infoblox DNS profile: DNS PTR record created in forward lookup zone instead of reverse lookup zone
- AV-54379: Service Engine failure after a VLAN interface on a bond interface was deleted
- AV-54752: Increase in latency with Avi not acknowledging TCP FIN packets for a few flows
- AV-54922: Linux server cloud: IPv6 on the VIP and IPv4 on the pool fails
- AV-54931: Intermittent Service Engine failure when caching and WAF are enabled on a virtual service
- AV-54964: SQL injection possible while using some APIs
- AV-55142: Unable to configure a pool with autoscaling configuration if autoscale group is created with Launch Template
- AV-55185: K8s in AWS: Virtual service failed to start due to private IP address limit on the Service Engine
- AV-55343: Service Engine failure when a pool group is configured with redirect fail action with no destination
- AV-55454: Service Engine failure for virtual service with application type System-SSL-Application when network profile type is set to TCP Fast
- AV-55686: SE_HM_EVENT_SHM_UP events in the logs not preceded by any corresponding DOWN events
- AV-55850: License: Fix in workflow for creating a new cloud with Bandwidth license
- AV-55941: Azure: Pool members not deleted despite deleting servers from the corresponding Azure virtual machine scale set
- AV-56113: OpenShift on Azure: One Service Engine keeps entering OPER_DISABLED mode even though K8S node is in Ready state
- AV-56128: Support rotation of log files in /var/log/
- AV-56197: Zone transfer through Avi DNS virtual service fails after a certain number of records are present
- AV-56291: OpenShift: Performance degradation for large packets when the flow is handled by the secondary SE
- AV-56495: Modifying the application’s domain name is not propagated to Infoblox DNS/IPAM
- AV-56625: Over a period of a few days, SE persistence table usage increased to 99%
- AV-56660: Service Engine restarts on applying Controller patch that requires a Controller reboot
- AV-56745: Enhancement to reduce frequency of license expiry emails
- AV-57619: User-defined metrics are incrementing even after the DataScript referencing the metrics is deleted
- AV-58867: Fix for cloud configuration failure when Keystone V2 is used. Restrict the OpenStack flavor listing to public flavors in the UI SE group settings
Known Issues in 18.2.2
- AV-59656: Log screen for a few virtual services may never load and spins indefinitely
- AV-56674: Adding more than 200 servers to a pool fails on AWS
- AV-58537: Service Engine fails on GSLB follower site when the leader site pushes an incompatible TCP health monitor
- AV-58867: Keystone V2 endpoint configured for OpenStack is not supported
- AV-62821: For geo load-balancing at GSLB service level, when the distance between the members is smaller compared to the number of members in the pool, then some of the pools are considered to be “equi-distant” from the client, and a different pool than the desired one could be picked
What’s New in 18.2.1
Release date: 21 December 2018
ADC
- Support minimum health monitors to indicate an active server
- Service Engine selection based on a consistent hash of the client-ip[, port]
- Ability to add a request header with the location of the originating client IP, via a DataScript or HTTP policy
- SNI support with HTTP/2
- Certain pool-connection properties are now configurable at the pool level
- Support for MCX4121A-ACAT ConnectX-4 Lx EN 25G NICs
Containers
- Opt-in or opt-out for a load-balancing deployment in conformance with Kubernetes standards
- Ability to use alternate ingress provider
GSLB
- Ability to disable a GSLB pool
Logging
- Support for large trap payload in aviSystemAlert trap
Networking
- Visibility for status of bond interfaces
- Ability to use HTTP server reselect to select an available back-end server when connection to another has failed
Private Cloud
- Avi supports VMware hardware versions 10 and above. Support for hardware versions 8/9, corresponding to ESX5.0/5.1, has been deprecated.
Issues Resolved in 18.2.1
- AV-32521: traceroute within the namespace does not show the hops
- AV-41861: Memory leak during RSS scaleout
- AV-42759: Azure: Latency increases after some time
- AV-43980: Secure channel flapping between the Controller and SE when GRO is enabled
- AV-44473: Import configuration fails if string contains Unicode character
- AV-44659: Error message on saving HTTP security policy with rate-limit and local response HTML file
- AV-45040: Unable to update the virtual service name to include parentheses from the UI, though it can be changed via the REST API and CLI
- AV-45221: Virtual service placement stuck at “AWAITING_VNIC_IP” for SNI parent
- AV-45496: Service Engine may fail if TLS persistence is used for a non-SSL pool
- AV-45852: OpenShift: Delay in creating Avi routes
- AV-45943: Health monitor fails if there is a \r\n\r\n before the HTTP/x.x in the send string
- AV-46045: Linux server cloud: Service Engine may fail when DPDK is enabled on Mellanox NICs in a port channel
- AV-46061: Third-party GSLB sites are not shown in the list of DNS policy primary and fallback sites
- AV-46169: Syslog message with invalid PRI 324
- AV-46742: SE stuck at OPER_DISABLING while the cluster and SEs are having intermittent network partitioning issues
- AV-46899: OpenShift: Stale Avi bridge ports are not being cleaned up
- AV-47080: Linux server cloud: Service Engine may fail on using multiple bond interfaces to advertise VIP via BGP
- AV-47140: SMTP error while running email test
- AV-47333: Upgrade hung on remote task when the time is not synced between Service Engine and the Controller
- AV-47437: Linux server cloud: Default route may not take effect on using Mellanox NICs in in-band mode
- AV-47568: Service Engine failure due to a corrupted persistence cookie
- AV-47574: vCenter API version 6.7U1 is not supported by Avi Controller
- AV-47600: Service Engine may stop processing packets if it has been up for more than 392 days
- AV-47650: Service Engine advertising routes to BGP for virtual services that are not placed
- AV-47797: When RSS is enabled, connections to pool servers delayed due to dropped SYN+ACK packets causing retransmits
- AV-47800: When VIP to SNAT is enabled, changing non-critical fields (e.g., name) causes the virtual service to detach and reattach to Service Engines
- AV-50783: Virtual service cannot be enabled due to IP address exhaustion
- AV-50784: Microsoft Azure: HTTP health monitor fails for VMs added to a pool from a scale set because of underscore (“_”) in the hostname
Performing the Upgrade
Upgrade prerequisites:
- The current version of the Avi Controller must be 17.2 or later (see the version-check sketch after this list).
- In case of Service Engine upgrade in a Nutanix Acropolis Hypervisor (AHV) environment, refer to the pre-upgrade changes.
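As referenced in the first prerequisite, the Controller version can be checked programmatically before starting the upgrade. The Python sketch below is one possible approach; the /api/cluster/version endpoint and its Version response key are assumptions to verify against your deployment, and the hostname and credentials are placeholders.

```python
# Minimal pre-upgrade check: confirm the Controller is on 17.2 or later before upgrading.
# The /api/cluster/version endpoint and its "Version" key are assumptions; verify them
# against your Controller, and treat the host and credentials as placeholders.
import requests

CONTROLLER = "https://controller.example.com"   # placeholder

s = requests.Session()
s.verify = False  # lab setup only
s.post(f"{CONTROLLER}/login",
       json={"username": "admin", "password": "<password>"}).raise_for_status()

headers = {"X-Avi-Version": "18.2.1", "Referer": CONTROLLER,
           "X-CSRFToken": s.cookies.get("csrftoken", "")}
info = s.get(f"{CONTROLLER}/api/cluster/version", headers=headers).json()
version = info.get("Version", "")               # e.g. "17.2.14"
major, minor = (int(x) for x in version.split(".")[:2])
assert (major, minor) >= (17, 2), f"Controller {version} is older than 17.2; upgrade unsupported"
print(f"Controller version {version} meets the 17.2+ prerequisite")
```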
Protocol Ports Used by Avi Vantage for Management Communication
Supported Platforms
Refer to System Requirements: Ecosystem
Product Documentation
For more information, please see the following documents, also available within this Knowledge Base.
Installation Guides
Open Source Package Information
Avi Networks software, Copyright © 2013-2019 by Avi Networks, Inc. All rights reserved. The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. Certain components of this software are licensed under the GNU General Public License (GPL) version 2.0 or the GNU Lesser General Public License (LGPL) Version 2.1. A copy of each such license is available at http://www.opensource.org/licenses/gpl-2.0.php and http://www.opensource.org/licenses/lgpl-2.1.php