Avi Deployment Guide for Google Cloud Platform (GCP)

This article discusses the process of provisioning and configuring Avi Vantage with Google Cloud Platform.

Overview

About Google Cloud Platform

Google Cloud Platform is a cloud computing service that offers hosting on the same supporting infrastructure Google uses internally for end-user products such as Google Search and YouTube. Cloud Platform provides developer products to build a range of programs from simple websites to complex applications.

Google Cloud Platform is a part of a suite of enterprise services from Google Cloud and provides a set of modular cloud-based services with a host of development tools, including hosting and computing, cloud storage, data storage, translation APIs, and prediction APIs.

The following figure represents a sample deployment case for Google Cloud Platform.

GCP

About Avi Vantage

The Avi Vantage Platform provides enterprise-grade distributed ADC solutions for on-premises as well as public-cloud infrastructure. Avi Vantage also provides built-in analytics to diagnose and improve the end-user application experience, while simplifying operations for network administrators.

Avi Vantage is a complete software solution which runs on commodity x86 servers or as a virtual machine and is entirely accessible via REST API calls.

Features

Avi Vantage for GCP provides the following functionalities:

  • VMs are created using standard Google-provided images (for instance, CentOS 7.5). The CentOS image for the base VM is available in the Google repository.
  • The Avi Controller and Avi Service Engines run as Docker containers.
  • Service Engine uses a single interface for control and data traffic.
  • VIP addresses are manually configured or allocated from an Avi-managed static pool. The VIP address or SNAT addresses cannot be in the same subnet as the interface.
  • The service account authentication mechanism is used. Privilege is inherited when an instance is spawned by an authenticated entity through API calls. The Controller instance should be spawned with a read-write scope, while Service Engines are spawned with a read-only scope.
  • The Controller’s interaction with the Google API is to add a ‘route’ to the VIP via the Service Engine instance; it also issues query API calls and interacts with Google Cloud Platform to program the routes.
  • For SE high availability, only elastic HA modes are supported.
  • VMs are created with a single interface and are assigned an IP address with a /32 mask by GCE (Google Compute Engine) from its internal subnet.
  • The GCP Avi Controller instance needs internet access for the GCP-based Linux server cloud to work. GCP instances get internet access only when they have an external IP address attached, or when the instance is connected, through VPN, to a network that has internet access.

Limitations

  • Legacy networking mode is not supported.
  • Floating IPs in Google Cloud Platform are not supported.

Provisioning Avi Vantage in GCP

Network, Subnet, Instances in Google Cloud

  1. Browse to the Google Cloud Platform console via https://console.cloud.google.com.

  2. Navigate to the project to which you are subscribed.

    fig1

  3. Navigate to Google Cloud Platform and click on Networking.

    fig2

  4. Under the Networking tab, click on VPC networks and then click on CREATE VPC NETWORK.

    fig3

  5. Provide a name, an appropriate region, and the IP address range for the VPC network. This can only be an IPv4 range, as GCP does not support IPv6. Click on Create. An equivalent gcloud sketch is shown after this list.

    fig4

  6. The network is now created.

    fig5
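
The same network and subnet can also be created with the gcloud CLI. The sketch below is illustrative only; the network name, subnet name, region, and CIDR range are placeholders rather than values taken from this guide.

     # Illustrative sketch: create a custom-mode VPC network and one subnet in it.
     # Names, region, and range below are placeholders.
     gcloud compute networks create avi-net --subnet-mode=custom
     gcloud compute networks subnets create avi-subnet \
         --network=avi-net \
         --region=us-central1 \
         --range=10.8.2.0/24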

Firewall Rules in Google Cloud

GCP’s default behaviour is to drop traffic, so firewall rules are required to allow traffic through. Avi Vantage uses the protocol ports described in Protocol Ports Used by Avi Vantage for Management Communication.

  1. Create a firewall rule to allow TCP, UDP, and ICMP traffic within the network and HTTP/HTTPS from outside, under the network created above. The screenshot below displays creating rules for all UDP and TCP traffic. After filling in the details, click on Save. A gcloud sketch of these rules follows the firewall list below.

    fig6

    Create firewall rules on TCP port 80 and 443.

    fig7

    Create firewall rules for ICMP.

    fig8

    Create firewall rules for internal SE-to-SE communication.

    fig9

  2. The firewall rules created will be displayed as shown below.

    fig10
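
Equivalent rules can be sketched with the gcloud CLI as shown below; the rule names, network name, tag, and source ranges are placeholders and should be adapted to your environment.

     # Illustrative sketch: firewall rules matching the ones created above.
     # Rule names, network, tag, and ranges are placeholders.
     gcloud compute firewall-rules create allow-internal \
         --network=avi-net --allow=tcp,udp,icmp --source-ranges=10.8.2.0/24
     gcloud compute firewall-rules create allow-http-https \
         --network=avi-net --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0
     gcloud compute firewall-rules create allow-se-to-se \
         --network=avi-net --allow=all --source-tags=avi-se --target-tags=avi-se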

Avi Controller, Service Engine, Server, Client in Google Cloud

  1. Navigate to the Google Cloud Platform icon and click on Compute Engine.

    fig11

  2. Under Compute Engine click on CREATE INSTANCE.

    fig12

    Create an Avi Controller instance as below:

    • Name the instance.
    • Provide the zone in which this instance should be created.
    • Select the machine type n1-standard-4 for four CPUs and 15 GB of memory. Sizing may vary depending on your scaling requirement.

    fig13

  3. Select a boot disk with a CentOS 7 image and an 80 GB boot disk size.

    fig14

  4. Click on Identity and API access to make sure that the scope is set to Read Write. Set the Compute Engine parameter to Read Write by clicking on Set access for each API for GCP route programming.

    fig15

  5. Enable HTTP and HTTPS tags to permit outside connections. Under the Networking tab, fill the Network and Subnetwork fields. For the Avi Controller instance, set IP forwarding to Off. Click on Create.

    fig16

  6. Add the tag for the internal SE-to-SE communication.

    fig17

  7. Copy the public key from the machine which will be used for initiating SSH.

    fig18

  8. The Avi Controller is created with an external and an internal IP address from the network range specified while creating the networks.

    fig19

Note: Google Cloud Platform does not allow serial console access to the created instance if an external IP is not allocated. Serial console access is not required for installing or operating Avi Vantage, but may be useful for troubleshooting.

You can create the Service Engine instances by following the above-mentioned steps. For high availability purposes, you can create the SEs in different zones. A gcloud sketch for creating an instance follows the figure below.

fig20
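
For reference, an instance such as the Controller can also be created from the gcloud CLI. The sketch below is illustrative only; the instance name, zone, and network values are placeholders, and Service Engine instances would instead use a read-only scope, IP forwarding, and the SE-to-SE tag as described above.

     # Illustrative sketch: a Controller-sized instance created from the CLI.
     # Names, zone, and network values are placeholders.
     gcloud compute instances create avi-controller \
         --zone=us-central1-b \
         --machine-type=n1-standard-4 \
         --image-family=centos-7 --image-project=centos-cloud \
         --boot-disk-size=80GB \
         --network=avi-net --subnet=avi-subnet \
         --scopes=compute-rw \
         --tags=http-server,https-server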

To run test traffic, create test server and client instances as shown below.

Server Instance

  1. Click on CREATE INSTANCE under Compute Engine.

    fig21

  2. Create one or more test server instances as shown below.
    • Name the instance.
    • Provide the zone in which the server will be created.
    • Select small in the pulldown menu for Machine type for one vCPU and 1.7 GB of memory.
  3. Select a boot disk with a CentOS 7 image and a 20 GB capacity.

    fig22

  4. Click on Identity and API access to make sure that the scope is set to Read Only. Set the Compute Engine parameter to Read Only by clicking on Set access for each API for GCP route programming.

    fig23

  5. Enable HTTP tags to permit outside connections. Under the Networking tab, fill the Network and Subnetwork fields. For a server instance, set IP forwarding to On. Click on Create.

    fig24

  6. Copy the public key from the machine which will be used for initiating SSH.

    fig25

    fig26

Client Instance

  1. Click on CREATE INSTANCE under Compute Engine: VM instances.

  2. Create one or more test client instances as shown below.
    • Name the instance.
    • Deploy it in the respective zone.
    • Select small in the pulldown menu for Machine type for one vCPU and 1.7 GB of memory.

    fig27

  3. Select a boot disk with a CentOS 7 image and a 20 GB capacity.

    fig28

  4. Click on Identity and API access to make sure that the scope is set to Read Only. Set the Compute Engine parameter to Read Only by clicking on Set access for each API for GCP route programming.

    fig29

  5. Enable HTTP tags to permit outside connections. Under the Networking tab, fill the Network and Subnetwork fields. For a client instance, set IP forwarding to On. Click on Create.

    fig30

  6. Copy the public key from the machine which will be used for initiating SSH.

    fig31

  7. Verify all the instances created. A gcloud check is sketched after the figure below.

    fig32
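
As a quick check from the CLI, the created instances can also be listed with gcloud:

     gcloud compute instances list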

Preparing the Instances

Turning off yum-cron

To prevent unattended updates from moving the instance off the required release, turn off yum-cron. For this, the instance needs to be on CentOS 7.5. A minimal sketch is shown below.
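
A minimal sketch for turning off yum-cron, assuming the yum-cron package is installed and managed by systemd:

     # Stop and disable automatic updates via yum-cron (if installed).
     sudo systemctl stop yum-cron
     sudo systemctl disable yum-cron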

Lock release to current running

To ensure you remain on the current release of CentOS/RHEL, follow the steps in the following article: Locking a Linux System to a Specific OS Version.

Installing Docker

  1. To configure a Docker repository, create a file named docker.repo under /etc/yum.repos.d.

    
     [localhost@avi-controller ~]$ sudo vim /etc/yum.repos.d/docker.repo 
     [docker-main]
     name=Docker Repository
     baseurl=https://yum.dockerproject.org/repo/main/centos/7/
     enabled=1
     gpgcheck=1
     gpgkey=https://yum.dockerproject.org/gpg
     
     
  2. Verify the instances are running on CentOS 7.5

    
     [localhost@avi-controller ~]$ cat /etc/centos-release
     CentOS Linux release 7.5.1611 (Core)
     
     
  3. Install and start docker on all five instances.

    
     sudo yum update -y
     sudo yum install -y docker
     sudo systemctl enable docker
     sudo systemctl start docker
     
     

Refer to Docker storage drivers for recommendations on choosing a storage driver.

It is recommended to use devicemapper with a thin pool as the storage driver for production. As shown below, devicemapper is configured with loopback, which is suitable for a proof of concept but not for a production environment. A sketch of pointing devicemapper at an LVM thin pool follows the docker info output below.



[localhost@avi-controller ~]$ sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-8:1-67109509-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 11.8 MB
Data Space Total: 107.4 GB
Data Space Available: 19.92 GB
Metadata Space Used: 581.6 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-09-28)
Execution Driver: native-0.2
Logging Driver: journald
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 4
Total Memory: 14.69 GiB
Name: test.c.astral-chassis-136417.internal
ID: TOWE:AFZ3:JHJ4:C4A5:PFAI:MF2J:2HKE:ZQLM:LREW:WYNG:UU4C:NBP2
Registries: docker.io (secure)
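
For production, devicemapper can instead be pointed at an LVM thin pool using the dm.thinpooldev storage option mentioned in the warning above. The sketch below assumes the CentOS docker package, which reads storage options from /etc/sysconfig/docker-storage, and assumes a thin pool named docker-thinpool has already been created with LVM; adjust both to your environment.

# Sketch only: configure devicemapper with a pre-created LVM thin pool.
# The file location and pool name are assumptions for this example.
[localhost@avi-controller ~]$ sudo vim /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool"

[localhost@avi-controller ~]$ sudo systemctl restart docker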

If the instance is spawned with Ubuntu 14.04, sshguard must be configured, as explained below. sshguard can take whitelists from files when the -w option argument begins with a ‘.’ (dot) or a ‘/’ (slash). Given below is /etc/test, a sample whitelist file.


# comment line (a '#' as very first character)
# a single IPv4 and IPv6 address
1.2.3.4
2001:0db8:85a3:08d3:1319:8a2e:0370:7344
#   address blocks in CIDR notation
127.0.0.0/8
10.11.128.0/17
192.168.0.0/24
2002:836b:4179::836b:0000/126
#   hostnames
rome-fw.enterprise.com
hosts.test.com

The following shows sshguard referencing the whitelist file.


sshguard -w /etc/test

Testing Server on the Server Instance

To test the instance, start an NGINX server on the server instance to be used as a pool server as shown in the example below.


   sudo docker run -d -p 80:80 avinetworks/server 
   
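To confirm the test server is reachable, you can request the default page from another instance in the network; the IP address below is a placeholder for the server instance's internal IP.

   curl http://<server-instance-ip>/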

Configuring Avi Vantage

Installing and Configuring Avi Controller and Service Engines - Method 1

  • In the cloud configuration, ensure DPDK mode is disabled for the Linux server cloud deployed per the steps documented in the above-mentioned article. In addition, ensure in-band management is enabled.

  • SSH into the Avi Controller instance, exec into the Controller container, and start the Avi shell. Use the following commands to list the container ID and attach to it:

     sudo docker ps
     sudo docker exec -it [container_id] bash
     shell
  • Create a network with an IP address pool for VIP allocation. In the Avi UI, browse to Infrastructure -> Networks -> Create:

  • Create an IPAM Profile for GCP. An API sketch for this step follows the note at the end of this list.


  • Edit “Default-Cloud.” Choose the previously created GCP IPAM provider ‘gcp’ as the IPAM provider and configure a Linux server cloud using the IP addresses of the two Avi Service Engine instances created above.

Note: For more information on IPAM provider, read IPAM Provider (Google Cloud Platform)
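
The GCP IPAM profile can also be created through the Avi REST API rather than the UI. The sketch below mirrors the IpamDnsProviderProfile object used in the setup.json in Method 2; the Controller address, credentials, API version header value, and network name are placeholders.

     # Sketch only: create the GCP IPAM profile via the Avi REST API.
     # Controller IP, password, version header, and network name are placeholders.
     curl -k -u admin:<password> \
         -H "Content-Type: application/json" \
         -H "X-Avi-Version: <controller-version>" \
         -X POST https://<controller-ip>/api/ipamdnsproviderprofile \
         -d '{"name": "gcp",
              "type": "IPAMDNS_TYPE_GCP",
              "gcp_profile": {"usable_network_refs": ["/api/network/?name=net1"]}}'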

Installing and Configuring Avi Controller and Service Engines - Method 2

Alternatively, if the Controller service file is already created, start a fresh Controller with a setup.json file to configure a Linux server cloud with a GCP IPAM profile and a network for VIP allocation.

  • Copy the setup.json file shown below to /opt/avi/controller/data on the host (assuming /opt/avi/controller/data is the volume used for the Controller in the service file).
  • Modify ssh keys, username, network subnets and network/IPAM names as appropriate.
{
   "CloudConnectorUser": [
       {
           "name": "rangar",
           "tenant_ref": "admin",
           "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZZWDLSl/PJHWA8QuDlfDHJuFh6k55qxRxO28fSRvAEbWCXXgXdnH8vSVVDE
Mo0brgqrp+vful2m7hNm0TPv8REbT2luVeWo+G0R1hxzdALzI8VmMBxX2VduKZ5Zrh3C9GKxaUYb4R2hzLaYKUBQnFa2B0YWiAfC3ow71fwwgb7cVhxExTyhhF01gY
9Tcb3w9uugv3vXNzyxDssHXtwY60WcVUIK1L+8SqXu/r6YUG8j4IsaYkXJHBE6CHPwDg4uwRG35IkfhsIg0KtKRwpzHbhOx0qRjG9ZaVc0SnfMIHmdAFwXpDpi/AKV
NAmjkix2GIPIi1OISnEngSjnugVb7\n",
           "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAvWWVgy0pfzyR1gPELg5XwxybhYepOeasUcTtvH0kbwBG1gl1\
n4F3Zx/L0lVQxDKNG64Kq6fr37pdpu4TZtEz7/ERG09pblXlqPhtEdYcc3QC8yPFZ\njAcV9lXbimeWa4dwvRisWlGG+Edocy2mClAUJxWtgdGFogHwt6MO9X8MIG+
3FYcR\nMU8oYRdNYGPU3G98PbroL971zc8sQ7LB17cGOtFnFVCCtS/vEql7v6+mFBvI+CLG\nmJFyRwROghz8A4OLsERt+SJH4bCINCrSkcKcx24TsdKkYxvWWlXNE
p3zCB5nQBcF\n6Q6YvwClTQJo5IsdhiDyItTiEpxJ4Eo57oFW+wIDAQABAoIBAFu7XeUA9L5ZmdDs\nVhJwg/VOX80W3dHbdc7M8NCAVCsnGSgFwQAAtMBxXiENfAx
A8NKUoS9ejMMUtvNJ\n7x+ywcF3WE63ze/htKGMF2ZNIJ+yAb3Zl6OIswxynTi131cJbINJ9gBwyExsWZyf\nmXIZQwmDKFxeHLlQ80QeR9qDxF3Ypyz7vdEtQMtpI
3JQJMbUX6dmQm0UtOKi5tL8\nzkskZJHnaqwJlem92Zon7S8PIflsPevsAmDrTPbmxIL6Z3KlJkoLzTcWefA6E19N\nw4JmylQokAWiqQ1il+qrcITIZGhsZre081N
wjHkzzA8kdb4EUO0nhy7rzbmS67TN\n08Fe0RECgYEA98WaJR5k/r8VBlKEQTErye29cJmkr0w5ZPX+bwko+ejj2S2vqpJc\nuR0YO3q5zY5a4A/33X/vke+r1bNPr
p9QSnBscFvA/AEXGAiAeuCsuB+pw8C3N5C5\ncTzKNFx1c2KXbejRkhvL9gz5tJZpdHIqzbGQmwEiNFqnYy6BPbhTm8UCgYEAw6+2\n5WvAGH9Ub+ZfySoeNNaxXfI
DvXA2+G/CBg99KYuXzWWmeVx9652lc4Gv+mxhFiJd\nilMfWljlb+f1G5sJnZ3VMKSf/FF6Mo8MsnAkvjnVWBoezo2sVzu+9g3qGRXNTtRM\nSH1N/eWPeJGwD+Vyk
D3r8K+iag7cMhrLpGPWk78CgYARatumJlfVLJuOwTg42PsK\nC+NYSgSwqfwS49QJ/CvcPYne135U0EsiXDA65iqvj4VF4Pl8oaS2rpF2yU8dqGdd\nhD+rOlf7nxv
/fYGCoc6idt9ZOm/mwQ64LhzMx38eKF0axdYNnlSdLFZVYolxPSFT\nKltO+ipsYb8IktlU/GMsPQKBgQCeirlqzM64yki11Hcce3Q3qQ3QqGihTc4roBgZ\nYuksB
L37mnSy9N3MTFAk8hiKks5h6XvRuyC2yTkyXkL2l7jFq39zRp2cBsMzPTSz\nSSpruF2CYL8+6AeOMYi4v3M/2asaR+R6ApNytk90Bs0XQ/V6qcCDozi6Jsn+Cjmd\
nOYo67wKBgAcUFRHUX4VwCUZAAIxyTM+efpf5z8dKHh/iJA6rtqcTi4vHddEJinT6\ntOiqXjciZEKqZ08GtImIPtuhIBO0m10fCfcjrGxGz2+N9o8fyNvFWU83kG9
IXSq8\nU1YOIYvXwWFQLWIUvyOgnyT4bW0OLa8OrJEq1/DaH8gpvvFi8qRK\n-----END RSA PRIVATE KEY-----\n"
       }
   ],
   "IpamDnsProviderProfile": [                                                                                      [39/5306]
       {
           "name": "gcp",
           "type": "IPAMDNS_TYPE_GCP",
           "tenant_ref": "admin",
           "gcp_profile": {
               "usable_network_refs": [
                   "/api/network/?name=net1"
               ]
           }
       }
   ],
   "Network": [
       {
           "name": "net1",
           "tenant_ref": "admin",
           "cloud_ref": "admin:Default-Cloud",
           "configured_subnets": [
               {
                   "prefix": {
                       "ip_addr": {
                           "type": "V4",
                           "addr": "10.9.0.0"
                       },
                       "mask": 24
                   },
                   "static_ranges": [
                       {
                           "begin": {
                               "type": "V4",
                               "addr": "10.9.0.2"
                           },
                           "end": {
                               "type": "V4",
                               "addr": "10.9.0.254"
                           }
                       }
                   ]
               }
           ]
       }
   ],
   "SeProperties": [
       {
           "se_runtime_properties": {
               "global_mtu": 1400,
               "se_handle_interface_routes": true
           }
       }
   ],
   "Cloud": [
       {
           "name": "Default-Cloud",
           "tenant_ref": "admin",
           "vtype": "CLOUD_LINUXSERVER",
           "ipam_provider_ref": "admin:gcp",
           "linuxserver_configuration": {
               "ssh_attr": {
                   "ssh_user": "rangar",
                   "host_os": "COREOS"
               },
               "se_sys_disk_path": "/”
           }
       }
   ]
}
  • Perform the initial setup on the Controller and specify a username/password.
  • Edit “Default-Cloud,” choose the previously created GCP IPAM provider ‘gcp’ as the IPAM provider, and configure a Linux server cloud using the IP addresses of the two Avi Service Engine instances created above.

Creating Virtual Service and Verifying Traffic

  • Create a pool, e.g., GCP-Perf-Test-VS-Pool.
  • Add server instance IP as pool server.
  • Create an internal virtual service called “GCP-Perf-Test-VS” by clicking the Advanced tab.
  • The VIP is auto-allocated from the VIP/IPAM subnet 10.y.y.y/24.
  • Use net1-subnet4 - 10.x.x.x/24 as the placement subnet.
    Note: The IP subnet 10.x.x.x is mentioned for reference purposes only. The placement subnet should be set to the major subnet in the VPC used by the Avi Controller and Service Engines.
  • After the VS is created (and a VIP is allocated), use VIP as SNAT IP, if desired.

Creating Pool

Creating Virtual Service


Below are screenshots taken after creating the new virtual service:


Testing ICMP Traffic

Send ICMP traffic to the VIP (10.10.0.1 in this case) and make sure that the route gets programmed.

[localhost@avi-test-server ~]$ ping 10.10.0.1

PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from 10.10.0.1: icmp_seq=1 ttl=64 time=0.889 ms
64 bytes from 10.10.0.1: icmp_seq=2 ttl=64 time=0.278 ms
64 bytes from 10.10.0.1: icmp_seq=3 ttl=64 time=0.291 ms
64 bytes from 10.10.0.1: icmp_seq=4 ttl=64 time=0.288 ms
64 bytes from 10.10.0.1: icmp_seq=5 ttl=64 time=0.294 ms
64 bytes from 10.10.0.1: icmp_seq=6 ttl=64 time=0.302 ms
64 bytes from 10.10.0.1: icmp_seq=7 ttl=64 time=0.320 ms
64 bytes from 10.10.0.1: icmp_seq=8 ttl=64 time=0.257 ms
64 bytes from 10.10.0.1: icmp_seq=9 ttl=64 time=0.315 ms
  • Verify that a route for the VIP/32 is programmed in GCP with nextHop as Service Engine 1 (IP 10.8.2.3), as can be seen below. A gcloud check is sketched after this list.
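
The programmed route can also be checked from the gcloud CLI; the VIP in the filter below is the one used in this example.

     # Sketch only: list the GCP route programmed for the VIP /32.
     gcloud compute routes list --filter="destRange=10.10.0.1/32"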


API for Configuring Virtual Service and Pool

The following JSON shows the virtual service configuration used with the Avi API; the inline annotations mark the IPAM subnet (from which the VIP is auto-allocated) and the placement subnet. A sketch of pushing this configuration through the API follows the JSON.


{
    "name": "vs1",
    "pool_ref": "pool_ref",
    "services": [
        {
            "port": 80
        }
    ],
    "vip": [
        {
            "auto_allocate_ip": true,
            "ipam_network_subnet": {
                "network_ref": "network_ref",
                "subnet": {
                    "ip_addr": {
                        "addr": "6.2.0.0",--> IPAM subnet.
                        "type": "V4"
                    },
                    "mask": 16
                }
            },
            "subnet": {
                "ip_addr": {
                    "addr": "10.146.11.0", --> placement subnet, subnet having reachability to client facing VIP
                    "type": "V4"
                },
                "mask": 24
            }
        }
    ]
}
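
A minimal sketch of pushing this configuration through the Avi REST API is shown below. The Controller address, credentials, version header, and server IP are placeholders, and pool_ref/network_ref in the virtual service body above must be replaced with the references of the objects actually created.

     # Sketch only: create the pool, then the virtual service, via the Avi API.
     # Addresses, credentials, and the version header are placeholders.
     curl -k -u admin:<password> -H "Content-Type: application/json" \
         -H "X-Avi-Version: <controller-version>" \
         -X POST https://<controller-ip>/api/pool \
         -d '{"name": "GCP-Perf-Test-VS-Pool",
              "servers": [{"ip": {"addr": "<server-instance-ip>", "type": "V4"}}]}'

     # POST the virtual service JSON shown above (saved as vs1.json) after
     # substituting pool_ref and network_ref with real object references.
     curl -k -u admin:<password> -H "Content-Type: application/json" \
         -H "X-Avi-Version: <controller-version>" \
         -X POST https://<controller-ip>/api/virtualservice \
         -d @vs1.json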

Troubleshooting

A commonly seen issue on Ubuntu 14.04 is the Service Engine failing to connect to the Controller, or frequently losing connectivity to it.

Root Cause

  • This is due to sshguard. Refer to http://www.sshguard.net/docs/ for more information on sshguard.
  • sshguard supports address whitelisting. Whitelisted addresses are not blocked, even if they appear to generate attacks. This is useful for protecting LAN users from being incidentally blocked.
  • When longer lists are needed for whitelisting, they can be wrapped into a plain text file, one address/hostname/block per line.

Mitigation

Configure the Controller IP (all three, if clustered) in the whitelist file used by sshguard.

sshguard can take whitelists from files when the -w option argument begins with a ‘.’ (dot) or ‘/’ (slash). Below is a sample whitelist file (/etc/test), with comment lines denoted by a ’#’ as the very first character.


       #   a single IPv4 and IPv6 address
       1.2.3.4
       2001:0db8:85a3:08d3:1319:8a2e:0370:7344
       #   address blocks in CIDR notation
       127.0.0.0/8
       10.11.128.0/17
       192.168.0.0/24
       2002:836b:4179::836b:0000/126
       #   hostnames
       rome-fw.enterprise.com
       hosts.test.com
  

sshguard is told to build its whitelist from the /etc/test file as follows:


    sshguard -w /etc/test