Installing Avi Vantage in OpenShift/Kubernetes

Overview

This guide describes how to integrate Avi Vantage into an OpenShift v3 or Kubernetes cloud. The instructions in this guide apply to Avi Vantage 16.3 and later.

Avi Vantage is a software-based solution that provides real-time analytics as well as elastic application delivery services. Avi Vantage optimizes core website functions, including SSL termination and load balancing. It also provides access to network analytics, including end-to-end latency information for traffic between end-users and the load-balanced applications.

When deployed into an OpenShift/Kubernetes cloud, Avi Vantage performs as a fully distributed, virtualized system consisting of the Avi Controller and Avi Service Engines (SEs), each running as a separate container on OpenShift slave nodes.

Note: Avi Vantage does not yet support IPv6 in OpenShift v3 or Kubernetes clouds.

Deployment Prerequisites

Physical Node Requirements

The main components of the Avi Vantage solution, Avi Controllers and Service Engines (SEs), run as containers on OpenShift/Kubernetes minion nodes. For production deployments, a three-instance Avi Controller cluster is recommended, with each Avi Controller instance running in a container on a separate physical node. After the Avi Controller cluster is configured for the OpenShift/Kubernetes cloud, it deploys one Avi SE container on each OpenShift/Kubernetes node. The nodes on which an Avi Controller runs must meet at least the minimum system requirements defined in the System Requirements: Hardware article.
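Before installing, it can help to spot-check each candidate node against the hardware minimums. The following is a minimal sketch using standard Linux tools; the 8-CPU/24-GB/64-GB figures used later in this guide are example sizing, not the official minimums.

    # Spot-check a candidate node's resources (run on each node that will host a Controller).
    nproc                                          # CPU cores/threads
    free -g | awk '/^Mem:/ {print $2 " GiB RAM"}'  # total memory
    df -h /var/lib                                 # free disk where the Controller volume will live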

System Time (NTP) Requirements

The system time on all nodes must be synchronized. Use of a Network Time Protocol (NTP) server is recommended.
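As a minimal sketch (assuming the nodes use systemd-timesyncd or chrony managed through timedatectl), time synchronization can be verified and enabled as follows:

    # Verify that the clock is synchronized and an NTP service is active on each node.
    timedatectl status | grep -iE 'ntp|synchronized'

    # If NTP is off, enable it.
    sudo timedatectl set-ntp true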

Software Requirements

For deployment of SEs using the SSH method, the following system-level software is required:

  • Each node host OS must be a Linux distribution running systemd.

  • The Avi Controller uses password-less sudo SSH to access all the OpenShift nodes in the cluster and create SEs on those nodes. The SSH user must have password-less sudo access to all three OpenShift nodes hosting the Avi Vantage cluster. The SSH method requires a public-private key pair. You can import an existing private key onto the Avi Controller or generate a new key pair. In either case, the public key must be in the /home/ssh_user/.ssh/authorized_keys file, where ssh_user is the SSH username on all OpenShift nodes. The Avi Controller setup wizard automatically stores the private key on the Avi Controller node when you import or generate the key.
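The following is a minimal sketch of preparing and verifying such a key pair from a workstation; the key path ~/.ssh/avi_id_rsa, the username aviuser, and <node-ip> are placeholders for your environment.

    # Generate a dedicated key pair (or reuse the pair used to install OpenShift).
    ssh-keygen -t rsa -b 2048 -f ~/.ssh/avi_id_rsa -N ""

    # Install the public key into authorized_keys for the SSH user on every OpenShift node.
    ssh-copy-id -i ~/.ssh/avi_id_rsa.pub aviuser@<node-ip>

    # Confirm password-less sudo works on each node before importing the private key.
    ssh -i ~/.ssh/avi_id_rsa aviuser@<node-ip> "sudo -n true && echo sudo OK"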

Setting up the Avi Controller

Installing the Controller

Follow the steps below to install the Avi Controller:

  • Copy the .tgz package onto a node that will host the Avi Controller leader (for a Controller cluster, the two followers run on separate nodes).

    
      scp controller_docker.tgz username@remotehost.com:~/
      

    Note: Replace username@remotehost.com with your write-access username and the IP address or hostname of the host node.

  • Log onto the OpenShift node.

    
      ssh username@remotehost.com
      
  • Load the Avi Controller image into the host’s local Docker repository.

    
      sudo docker load < controller_docker.tgz
      
  • As a best practice, clean up any data that may be lingering from a previous run.

    
      sudo rm -rf /var/lib/controller/*
      
  • Use the vi editor to create a new file for spawning the Avi Controller service.

    
      sudo vi /etc/systemd/system/avicontroller.service
      
  • Copy the following lines into the file:

    
      [Unit]
      Description=AviController
      After=docker.service
      Requires=docker.service
        
      [Service]
      Restart=always
      RestartSec=0
      TimeoutStartSec=0
      TimeoutStopSec=120
      StartLimitInterval=0
      ExecStartPre=-/usr/bin/docker kill avicontroller
      ExecStartPre=-/usr/bin/docker rm avicontroller
      ExecStartPre=/usr/bin/bash -c "/usr/bin/docker run --name=avicontroller --privileged=true -p 5098:5098 -p 9080:9080 -p 9443:9443 -p 7443:7443 -p 5054:5054 -p 161:161 -d -t -e NUM_CPU=8 -e NUM_MEMG=24 -e DISK_GB=64 -e HTTP_PORT=9080 -e HTTPS_PORT=9443 -e SYSINT_PORT=7443 -e MANAGEMENT_IP=$$HOST_MANAGEMENT_IP -v /:/hostroot -v /var/lib/controller:/vol -v /var/run/fleet.sock:/var/run/fleet.sock -v /var/run/docker.sock:/var/run/docker.sock avinetworks/controller:$$TAG"
      ExecStart=/usr/bin/docker logs -f avicontroller
      ExecStop=/usr/bin/docker stop avicontroller
        
      [Install]
      WantedBy=multi-user.target
      

    Note: If any of the port numbers for HTTP (9080), HTTPS (9443), or SystemInternal (7443) are already in use by other services on the host, use alternate port numbers in the Docker port mappings and update the corresponding environment variables. A quick check for port conflicts is sketched after this list.

  • Edit the following values in the file:
    • NUM_CPU – Sets the number of CPU cores/threads used by the Controller (8 in this example).
    • NUM_MEMG – Sets the memory allocation (24 GB in this example).
    • DISK_GB – Sets the disk allocation (64 GB in this example).
    • MANAGEMENT_IP – Replace $$HOST_MANAGEMENT_IP with the management IP of the current OpenShift node.
    • $$TAG – Replace with the tag value of the Avi Vantage image in the Docker repository. For example, “16.3-5079-20160814.122257”.
  • Save and close the file.
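Before enabling the service, it can be useful to confirm the image tag for $$TAG and that the mapped host ports are free. A minimal sketch (the port list matches the unit file above):

    # Show the tag of the loaded Controller image (use it for $$TAG).
    sudo docker images avinetworks/controller

    # Check that the mapped host ports are not already in use.
    sudo ss -ltnp | grep -E ':(9080|9443|7443|5098|5054|161)\b' || echo "ports are free"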

Initializing the Controller

To start the Avi Controller, enter the following command on the node on which you created the Avi Controller:

    sudo systemctl enable avicontroller && sudo systemctl start avicontroller

Initial startup and full system initialization takes around 10 minutes.
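Progress can be watched with standard systemd and Docker tooling; a minimal sketch:

    # Check the service and follow its logs while the Controller initializes.
    sudo systemctl status avicontroller
    sudo journalctl -u avicontroller -f

    # Confirm the container is running.
    sudo docker ps --filter name=avicontroller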

Accessing the Controller Web Interface

To access the Avi Controller web interface, navigate to the following URL: https://avicontroller-node-ip:9443

Note: avicontroller-node-ip is the management IP of the node on which the Controller is installed.
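As a quick reachability check before opening a browser (the -k flag skips certificate verification because the Controller ships with a self-signed certificate):

    # Expect an HTTP status line once initialization completes.
    curl -k -I https://avicontroller-node-ip:9443/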

Configuring the Controller

This section shows how to perform the initial configuration of the Avi Controller using its deployment wizard.

Access the Avi Controller UI from a browser and follow these six steps:

  1. Set a password for the admin user.

    Fig1

  2. Set DNS and NTP server information.

    Fig2

  3. Set email and SMTP information.

    email_SMTP_settings

  4. Select No Orchestrator as infrastructure type.

    Fig3

  5. Click Next.

    Fig4-1

  6. Respond No to the multiple tenants question.

    Fig5

Configuring the Network

Configure a subnet and IP address pool for intra-cluster/east-west traffic and a subnet and IP address pool for external/north-south traffic. These IP addresses will be used as service virtual IPs (VIPs) or cluster IPs. The east-west subnet is an overlay or virtual subnet. The north-south subnet is the underlay subnet to which all the nodes/minions are connected. Use unused or spare IP addresses from the underlay subnet for the north-south VIP address pool.

Configure an east-west network and subnet for virtual services handling east-west traffic, and a NorthSouth subnet for virtual services handling client (north-south) traffic, as follows:

Navigate to Infrastructure > Networks and click Create.

Fig6

Create the east-west network and add a subnet with a static IP range for the IPs to be used by east-west virtual services.

Fig7-1

Avi provides a drop-in replacement for kube-proxy for east-west services. There are two options for the subnet from which virtual IPs (VIPs) for east-west services are allocated.

OpenShift/Kubernetes allocates cluster IPs for east-west services from a virtual subnet. Avi can use the same cluster IPs allocated by OpenShift/Kubernetes and provide east-west proxy services. Standard tools such as oc and kubectl display cluster IPs for services, so display and troubleshooting become easier. However, this requires that kube-proxy be disabled on all nodes in the cluster.

Alternatively, Avi can be configured to provide east-west services on a non-overlapping virtual subnet different from the cluster IP subnet.

  • kube-proxy is enabled: You must use a subnet different from kube-proxy’s cluster IP subnet. Choose a /16 CIDR from the IPv4 private address space (172.16.0.0/16 through 172.31.0.0/16, 10.0.0.0/16, or 192.168.0.0/16) that does not overlap with any address space already in use on your OpenShift nodes (a check for overlap is sketched after this discussion).

  • kube-proxy is disabled: Replace kube-proxy in an OpenShift Environment with Avi Vantage explains how to disable kube-proxy. With kube-proxy disabled, you can either use a separate subnet for east-west VIPs or use the same cluster IPs allocated by OpenShift/Kubernetes as VIPs.

To use the same VIPs as cluster IPs – Enter the same subnet as the cluster IP subnet, e.g. 172.30.0.0/16, with no static IP address pool. East-west services simply use their allocated cluster IPs as VIPs.

To use a different subnet for east-west VIPs – Enter the subnet information and create an IP address pool from the subnet. East-west services will be allocated VIPs from this IPAM pool.
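To confirm that a candidate east-west subnet overlaps neither the node networks nor the cluster IP range, the following minimal sketch can be run on any OpenShift node and any machine with oc/kubectl access:

    # Subnets already in use on a node (underlay, SDN, Docker bridge, and so on).
    ip route show

    # Cluster IPs in use; these indicate the service (cluster IP) CIDR.
    oc get svc --all-namespaces -o wide

    # Pick a private /16 that appears in neither list and use it for the east-west VIP subnet.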

Create the NorthSouth network and add a subnet with a static IP range for the IPs to be used by north-south virtual services.

Fig8

Configuring IPAM/DNS profile

The Avi Controller provides internal IPAM and DNS services for VIP allocation and service discovery. Configure the IPAM/DNS profile as follows:

Navigate to Templates > Profile > IPAM/DNS Profile and click Create.

Fig9

Create the EastWest Profile: Give the profile the name EastWest. Select Type: Avi Vantage DNS. Fill in the required Domain Name field. Change the default TTL for all domains or just for this particular domain if desired. Click on Save.

Fig10

Create the NorthSouth profile: Give the profile the name NorthSouth. Select Type: Avi Vantage DNS. Fill in the required Domain Name field. Change the default TTL for all domains or just for this particular domain if desired. Click on Save.

Fig12-1

Note: Since Avi Vantage 17.1.1, a DNS record is picked automatically if the DNS profile is Avi Vantage or AWS. With 17.2.13, Avi also supports Infoblox as the DNS provider. For more information, refer to the IPAM and DNS Provider (Infoblox) article.

Configuring SSH User

Skip this step if you want to deploy the SE as a pod. The Avi Controller must be configured with an SSH key pair that provides password-less sudo access to all the nodes. On OpenShift, this can be the same key pair used to install OpenShift. The Avi Controller uses these keys to SSH to the OpenShift nodes and deploy Avi Service Engines. The private key is usually located at ~/.ssh/id_rsa. For example, to copy out the default key from the OpenShift master:

  • SSH to the master node.

    
      ssh username@os_master_ip
      
  • Run the command below and copy the contents of the key file (id_rsa).

    
      cat ~/.ssh/id_rsa
      
  • On the Avi Controller, navigate to Administration > Settings > SSH Key Settings and click Create.

    Note: From version 17.2.10 onwards, navigate instead to Infrastructure > Credentials and click Create.

  • Enter the SSH username.

  • Select Import Private Key.

  • Paste the key copied in the step above, as shown below.

Fig16-1

  • Click Save.

Setting Up Authentication

Avi Vantage supports two means of authentication: certificates and service account tokens.

Service Account Tokens

Refer to the guide corresponding to your orchestrator.
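The orchestrator-specific guide covers creating the service account and granting it the required roles. As a minimal sketch, assuming a dedicated service account named avi in the default project/namespace, its token can be read as follows (the Kubernetes form assumes a version that auto-creates token secrets for service accounts):

    # OpenShift v3: create the service account and print its token.
    oc create sa avi -n default
    oc sa get-token avi -n default

    # Kubernetes: read the token from the service account's secret.
    kubectl -n default create sa avi
    kubectl -n default get secret \
      $(kubectl -n default get sa avi -o jsonpath='{.secrets[0].name}') \
      -o jsonpath='{.data.token}' | base64 --decode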

Certificates

  • Use scp to securely copy the OpenShift SSL client certificate files from the master node. On OpenShift master nodes, the certificates are installed at /etc/origin/master. On Kubernetes master nodes, the certificates are installed at /etc/kubernetes/pki. If the Kubernetes API server is unauthenticated, this step can be skipped.
 
    
      scp username@os_master_ip:/etc/origin/master/admin.crt .
      scp username@os_master_ip:/etc/origin/master/admin.key .
      scp username@os_master_ip:/etc/origin/master/ca.crt .
      
  • On the Avi Controller, navigate to Templates > Security > SSL/TLS Certificates.

Fig17-2

  • Click Create and select Root/Intermediate CA.
    • Name the certificate and upload the ca.crt file.
    • Click Validate.

    Fig18-1

    • Click on Import to save.

Configuring OpenShift/Kubernetes Cloud

Note: For the configuration explained here, it is assumed that kube-proxy is disabled on OpenShift/Kubernetes nodes and you are using Avi’s internal DNS/IPAM.

  • Navigate to Infrastructure > Clouds.

  • Edit Default-Cloud.

    Fig14

  • Select OpenShift as infrastructure type and click Next.  

    Fig15

  • Ensure that ‘Enable Event Subscription’ is selected.

  • Select ‘Token’ and paste the Service Account Token for authentication.

  • Enter the OpenShift/Kubernetes API URL. A sketch after this list shows how to find it.

  • Click Next.

  • Select ‘Create SE as a Pod’ deployment method.

  • Check “Cluster uses overlay SDN” for overlay-based networking in the cluster, such as OpenShift SDN (OVS), Nuage, Flannel, or Weave. Uncheck it for routed containers.

  • Click Next.

  • Keep ‘Proxy Service Placement Subnet’ the same as the ‘Avi Bridge Subnet’ on the previous tab (the default is 172.18.0.1/16).

  • If kube-proxy is disabled, check “Use Cluster IP of service as VIP for East/West”. If kube-proxy is enabled, uncheck it. The Replace kube-proxy in an OpenShift Environment with Avi Vantage article describes how to disable kube-proxy.

  • Check “Always use Layer4 Health Monitoring” to use TCP health monitoring even for HTTP/HTTPS applications. Enable this setting if there are HTTP/HTTPS applications that do not respond to the default health monitor of “GET /” and it is inconvenient to override with custom health monitors using annotations.

  • Set IPAM and DNS Profiles from the dropdown menu as shown.

  • Click Save.
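For the API URL field above, the cluster’s API endpoint can usually be read straight from the CLI; a minimal sketch:

    # OpenShift v3: print the API server URL for the current login.
    oc whoami --show-server

    # Kubernetes: print the control-plane endpoint.
    kubectl cluster-info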

The cloud status will show green (placement ready).

Cloud_Green

It takes around five minutes for the Avi Controller to download the SE Docker image and start the containers.
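To confirm that the Service Engines have come up, check from the orchestrator or directly on a node; a minimal sketch, assuming SEs are deployed as pods (the "avi" name filter is an assumption and may differ in your deployment):

    # From the orchestrator: look for Avi SE pods.
    oc get pods --all-namespaces -o wide | grep -i avi

    # From a node: look for the SE container directly.
    sudo docker ps | grep -i avi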

SE_Green

Additional References

Refer to Replace kube-proxy in an OpenShift Environment with Avi Vantage to learn how to disable kube-proxy in an OpenShift environment.

Refer to OpenShift/Kubernetes Service Configuration on Vantage to learn how to create services and test traffic.

Refer to OpenShift Routes Virtual Service configuration to learn how to create and test traffic with OpenShift routes.

Refer to Kubernetes Ingress Virtual Service Configuration to learn how to create and test traffic with Kubernetes ingresses.