BGP Support in Avi Vantage for OpenShift and Kubernetes

Overview

Note: The Avi Controller must run outside the OpenShift/Kubernetes cluster; it cannot run as a container alongside the Avi SE container.

Border Gateway Protocol (BGP) Route Health Injection (RHI) can be used to advertise the Virtual IPs (VIPs) assigned to north-south services in a Kubernetes or an OpenShift cluster.

This feature is particularly useful in the following scenarios:

  • To support elastic scaling using ECMP as described in BGP Support for Scaling Virtual Services.
  • To allow north-south VIPs to be allocated from a subnet other than that in which the cluster nodes’ external interface resides.

Enabling BGP Features in Avi Vantage for Kubernetes and OpenShift

Configuring BGP features in Avi Vantage is accomplished in two parts: a BGP profile, and an annotation in the Kubernetes/OpenShift service or route/ingress definition. The BGP profile specifies the local Autonomous System (AS) number used by the Avi Service Engine, the IP address of each peer BGP router, and, for eBGP peers, the peer's remote AS number.

Configuring a BGP profile (via the Web Interface)

To configure a BGP profile, from the Avi Vantage UI,

  1. Navigate to Infrastructure > Routing.
  2. Click on the cloud name. If the cloud is the one that was set up during the initial installation of the Avi Controller using the setup wizard, the cloud name is “Default-Cloud,” as shown in the configure-bgp image.
  3. Click on the BGP Peering tab, and then click the edit icon to reveal more fields.
  4. Enter the following information:
    • Enter a value between 1 and 4294967295 as the Local Autonomous System ID.
    • Select either iBGP or eBGP as the BGP type.
  5. Click on Add New Peer to reveal a set of fields appropriate to iBGP or eBGP.
    • Enter the SE placement network.
    • Enter the subnet providing reachability for the peer.
    • Enter the peer BGP router’s IP address.
    • Enter a value between 1 and 4294967295 in the Remote AS field.
      Note: The Remote AS field is applicable only to eBGP.
    • Enter the MD5 digest secret key for the peer session.
    • Set Multihop to 0.
    • Click on the following options to enable them:
      • BFD (enables fast link failure detection via Bidirectional Forwarding Detection).
        Note: Only async mode is supported.
      • Advertise VIP
    • Advertise SNAT can be turned off, since SNAT advertisement is not relevant for Kubernetes/OpenShift environments.

The Edit BGP Peering screen for eBGP type is as shown in the image.

edit_bgp

Note: eBGP multihop is not supported in Kubernetes/OpenShift environments.

Configuring a BGP profile (via CLI)


: > configure vrfcontext global
Multiple objects found for this query.
        [0]: vrfcontext-f834cafa-b572-4ec3-9559-db0573f26d2f#global in tenant admin, Cloud OpenShift-Cloud
        [1]: vrfcontext-6d6ec0dd-0aaf-4b73-9d86-37569b505494#global in tenant admin, Cloud Default-Cloud
Select one: 0
Updating an existing object. Currently, the object is:
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| uuid                       | vrfcontext-f834cafa-b572-4ec3-9559-db0573f26d2f |
| name                       | global                                          |
| system_default             | True                                            |
| tenant_ref                 | admin                                           |
| cloud_ref                  | OpenShift-Cloud                                 |
+----------------------------+-------------------------------------------------+
: vrfcontext > bgp_profile
: vrfcontext:bgp_profile > local_as 65536
: vrfcontext:bgp_profile > ebgp
: vrfcontext:bgp_profile > peers peer_ip 10.115.0.1 subnet 10.115.0.0/16 md5_secret abcd remote_as 65537
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save
: >
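
The session above configures only the Avi side of the peering. For reference, a matching configuration on the peer router might look like the following FRRouting sketch; the SE interface address 10.115.1.10 is a placeholder, and the syntax on your router platform may differ:

```
! Hypothetical FRRouting fragment on the peer router 10.115.0.1, matching
! the profile above (Avi local AS 65536, router AS 65537, MD5 secret abcd).
router bgp 65537
 bgp router-id 10.115.0.1
 neighbor 10.115.1.10 remote-as 65536   ! 10.115.1.10 = placeholder SE address
 neighbor 10.115.1.10 password abcd     ! TCP MD5 authentication
```

With this in place, the router should learn a host route for each advertised VIP from the SE.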

Enabling a north-south service to use BGP RHI

To enable a specific north-south service, route, or ingress to have its VIP advertised via BGP RHI, set "enable_rhi": true inside the avi_proxy annotation. For example, to enable BGP RHI for a north-south service, use the following Kubernetes/OpenShift service definition:


apiVersion: v1
kind: Service
metadata:
  name: avisvc
  labels:
    svc: avisvc
  annotations:
    avi_proxy: '{"virtualservice":{"enable_rhi": true, "east_west_placement": false}}'
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
  selector:
    name: avitest

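Because the entire avi_proxy value must be a single valid JSON string, a typo in the annotation can cause it to be ignored rather than rejected loudly. A quick way to sanity-check the string before applying the manifest is a minimal sketch using Python's standard json module:

```python
import json

# The avi_proxy annotation value from the Service manifest above.
# It must parse as one JSON object for Avi to act on it.
annotation = '{"virtualservice":{"enable_rhi": true, "east_west_placement": false}}'

cfg = json.loads(annotation)  # raises json.JSONDecodeError if malformed
vs = cfg["virtualservice"]
assert vs["enable_rhi"] is True
assert vs["east_west_placement"] is False
print("avi_proxy annotation OK")
```

The same check applies to the longer annotation used later for placement subnets.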
Specifying the placement subnet for a VIP

By default, the VIP will be allocated from one of the “Usable Networks” listed in the north-south IPAM object configured in the Kubernetes/OpenShift cloud. In some instances, it may be desirable to specify that the VIP be allocated from a specific named subnet. This can be achieved by defining the network in Avi Vantage and then referencing the network by name in the service annotation as follows:


apiVersion: v1
kind: Service
metadata:
  name: avisvc
  labels:
    svc: avisvc
  annotations:
    avi_proxy: >-
      {"virtualservice":{"enable_rhi": true, "east_west_placement": false, "auto_allocate_ip": true,
      "ipam_network_subnet": {"network_ref": "/api/network/?name=ns-cluster-network-bgp"}}}
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
  selector:
    name: avitest
 

When explicitly referencing a network in this way, it is not necessary to include that network in the Usable Networks list in the north-south IPAM object.
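
If the referenced network does not already exist, it can be defined from the Avi CLI as well as the UI. The sketch below assumes the network name from the example above and a hypothetical 10.116.0.0/16 subnet; exact object fields may vary across Avi Vantage versions:

```
: > configure network ns-cluster-network-bgp
: network > configured_subnets prefix 10.116.0.0/16
: network:configured_subnets > static_ranges begin 10.116.1.10 end 10.116.1.250
: network:configured_subnets:static_ranges > save
: network:configured_subnets > save
: network > save
: >
```

Once saved, the network can be referenced by name via /api/network/?name=ns-cluster-network-bgp in the service annotation.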

  • Networks created in the Avi Vantage “admin” tenant may be referenced in any Kubernetes namespace/OpenShift project.
  • Networks created in a specific Avi Vantage tenant may be referenced only in the corresponding namespace/project.
  • Networks with the same name, defining different subnets, can be created in different tenants.

Combining these capabilities allows for great flexibility in the allocation of VIPs in different subnets, for example:

  • Global default subnet(s) for unannotated services
    • Add network(s) defined in the “admin” tenant to the north-south IPAM configuration.
  • Per-namespace default subnet(s) for unannotated services
    • Add network(s) defined in the non-admin tenants only to the north-south IPAM configuration.
  • Allow application owners to place services in specific subnet(s) through annotations
    • Define networks in the “admin” tenant
    • May or may not be added to the north-south IPAM configuration
  • Allow application owners to place services in namespace/project specific subnet(s) through annotations
    • Define networks in the tenant corresponding to the namespace/project.
    • May or may not be added to the north-south IPAM configuration.