Deploying Avi Vantage in GCP for ILB
This article explains provisioning and configuring Avi Vantage in Google Cloud Platform (GCP) with public IP support for Internal Load Balancer (ILB).
About Google Cloud Platform (GCP)
Google Cloud Platform is a cloud computing service that offers hosting services on the same supporting infrastructure that Google uses internally for end-user products such as search and YouTube. Cloud Platform provides developer products to build a range of programs from simple websites to complex applications.
Google Cloud Platform is a part of the Google Cloud enterprise services suite that provides a set of modular cloud-based services with a host of development tools that includes hosting and computing, cloud storage, data storage, translation APIs, and prediction APIs.
The following figure represents a sample deployment case for Google Cloud Platform.
About Avi Vantage
The Avi Vantage Platform provides enterprise-grade distributed ADC solutions for on-premise and public-cloud infrastructure. Avi Vantage also provides built-in analytics to diagnose and improve the end-user application experience, helping in easy operations for network administrators.
Avi Vantage is a complete software solution that runs on commodity x86 servers or as a virtual machine and is entirely accessible via REST API calls.
Public IP Support for ILB in GCP
Google has introduced public IP support for Internal Load Balancing in GCP, which allows you to host your public IPs in the Google cloud. Starting with release 18.1.2, Avi Vantage supports this feature in Google cloud, allowing you to create an Internal Load Balancer (ILB) with a VIP from the GCP VPC subnet. You also need to create a static route in GCP with the public IP as the destination and the ILB VIP as the next hop.
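As a sketch, such a static route can be created with the gcloud CLI using the ILB forwarding rule as the next hop. The route name, network, region, public IP, and forwarding-rule name below are placeholders, not values from this deployment:

```shell
# Hypothetical example: route a public IP (203.0.113.10, a documentation
# address) to the internal forwarding rule that fronts the Avi VIP.
gcloud compute routes create avi-public-vip-route \
    --network=gcp-vpc \
    --destination-range=203.0.113.10/32 \
    --next-hop-ilb=avi-ilb-forwarding-rule \
    --next-hop-ilb-region=us-central1
```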
Internal Load Balancer in GCP
GCP offers Internal Load Balancing for TCP and UDP traffic. Internal Load Balancing enables you to operate and scale your services behind a private load balancing IP address that is accessible only to your internal virtual machine instances.
You can use ILB to configure an Internal Load Balancing IP address as the frontend for your private backend instances, so you do not need a public IP address for your load balancing service. Internal client requests stay internal to your VPC network and region, resulting in lower latency, as all load-balanced traffic stays within Google's network.
- Works with auto mode VPC networks, custom mode VPC networks, and legacy networks.
- Allows autoscaling across a region, where it can be implemented within the regional managed instance groups, thus making services immune to zonal failures.
- Supports traditional 3-tier web services, where the web tier uses external load balancers such as HTTP, HTTPS, or TCP/UDP network load balancing. ILB also supports instances running the application tier or the backend databases that are deployed behind the Internal Load Balancer.
Load Balancing and Health Check Support
- Supports load balancing TCP and UDP traffic to an internal IP address. You can also configure the Internal Load Balancing IP from within your VPC network.
- Supports load balancing across instances in a region. This allows instantiating instances in multiple availability zones within the same region.
- Provides fully managed load balancing service that scales as required, to handle client traffic.
- GCP health check system monitors the backend instances. You can configure TCP, SSL (TLS), HTTP, or HTTPS health check for these instances.
- Health check probes originate from the address ranges 130.211.0.0/22 and 35.191.0.0/16. Add firewall rules to allow traffic from these ranges.
- If all instances in the backend service are unhealthy, the ILB load balances the traffic among all of them.
- ILB supports UDP and TCP. However, for UDP the health check type cannot be UDP (a Google limitation), so the failover time is higher.
- The virtual IP cannot be shared with other virtual services, as the forwarding ports cannot be updated; GCP does not allow a new forwarding rule with the same IP but a different port.
- A virtual service can have at most five ports, since a GCP forwarding rule supports only five ports.
- The health check is done on the same port as the VIP, not on the instance IP.
- If an Avi Service Engine is in N backend services, it receives N health check probes per health check interval.
- Ensure that the VIP is not configured in the same subnet as that of Avi Service Engine.
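The firewall rule admitting the health check ranges can be added with the gcloud CLI. The rule and network names below are placeholders; adjust the allowed ports to match your virtual service:

```shell
# Allow GCP health check probes (documented source ranges) to reach the
# backend instances. Names are hypothetical.
gcloud compute firewall-rules create allow-gcp-health-checks \
    --network=gcp-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```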
Deploying Internal Load Balancer
With ILB, you can configure a private RFC 1918 address as the load balancing IP address and configure backend instance groups to handle requests that are sent to the load balancing IP address from client instances.
The backend instance groups can be zonal or regional, which enables you to configure instance groups according to your availability requirements.
The traffic sourced from the client instances to the ILB must belong to the same VPC network and region, but it can be in a different subnet from the load balancing IP address and the backends.
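In GCP terms, an ILB of this kind is composed of a health check, a regional internal backend service, and an internal forwarding rule. The following is a hedged sketch; every name, zone, and address is a placeholder:

```shell
# Hypothetical example of the GCP objects behind an Internal Load Balancer.
gcloud compute health-checks create tcp avi-hc --port=80

gcloud compute backend-services create avi-backend \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=avi-hc \
    --region=us-central1

gcloud compute backend-services add-backend avi-backend \
    --instance-group=avi-se-group \
    --instance-group-zone=us-central1-a \
    --region=us-central1

gcloud compute forwarding-rules create avi-ilb-rule \
    --load-balancing-scheme=INTERNAL \
    --network=gcp-vpc \
    --subnet=gcp-subnet \
    --address=10.10.0.100 \
    --ports=80 \
    --backend-service=avi-backend \
    --region=us-central1
```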
The following figure represents a sample deployment:
Provisioning Multi-Project in ILB
A virtual private cloud (VPC) is a global private isolated virtual network partition that provides managed networking functionality to the GCP resources. ILB in GCP supports shared VPC, also known as XPN, that allows you to connect resources from multiple projects to a common VPC network.
When using shared VPC, you designate one project as the host project and attach one or more projects to it as service projects.
Host Project - Contains one or more shared VPC networks. One or more service projects can be attached to the host project to use the shared network resources.
Service Project - Any project that participates in a shared VPC by being attached to the host project. Starting with release 18.1.2, you can configure the new IPAM fields in Avi Vantage, and cross-project deployment is also supported.
Reach out to your Google support contact for whitelisting specific projects.
The Avi Controller, Service Engines, and the network can all be in different projects. By default, GCP attaches the default Compute Engine service account to each VM, in the format project-number-compute@developer.gserviceaccount.com.
For integration with Avi Vantage:
- Set up a service account with the editor role for the Controller VM so it can call the Google cloud APIs.
- Add this service account as a member of the Service Engine and XPN (host) projects.
- Map this account to the virtual machines created, so that the required permissions are granted.
The following is an example of creating a new service account in the Controller project:
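A hedged sketch with the gcloud CLI follows; the account name avi-controller and the project IDs controller-project and se-project are placeholders:

```shell
# Create a service account in the Controller project (names are
# hypothetical), grant it the editor role there, then add it as a member
# of the Service Engine project as well.
gcloud iam service-accounts create avi-controller \
    --project=controller-project \
    --display-name="Avi Controller"

gcloud projects add-iam-policy-binding controller-project \
    --member="serviceAccount:avi-controller@controller-project.iam.gserviceaccount.com" \
    --role="roles/editor"

gcloud projects add-iam-policy-binding se-project \
    --member="serviceAccount:avi-controller@controller-project.iam.gserviceaccount.com" \
    --role="roles/editor"
```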
Avi Vantage Configuration
Follow the steps below to configure Avi Vantage ILB support in GCP:
- Setting up the Avi Controller VM
- Configuring GCP IPAM
- Attaching GCP IPAM
- Setting up Service Engine
- Configuring Virtual Service
Setting up the Avi Controller VM
Follow the instructions outlined in Installing Avi Vantage for a Linux Server Cloud to install or run the Avi Controller on the previously created instance. Also ensure that the Service Engine status is active, indicated as Green.
On the Avi UI, navigate to Infrastructure > Networks and click Create to configure the GCP network. Click Add Subnet to configure the network with an IP address pool for VIP allocation, and enable the Add Static IP Address Pool checkbox to add the pool range. Click Save. The configured subnet and address pool range are displayed under the Network IP Subnets section.
Configuring GCP IPAM
You can configure GCP IPAM using the commands as shown below.
[admin:10-1-1-1]: > configure ipamdnsproviderprofile gcp-ipam
[admin:10-1-1-1]: ipamdnsproviderprofile> type ipamdns_type_gcp
[admin:10-1-1-1]: ipamdnsproviderprofile> gcp_profile
[admin:10-1-1-1]: ipamdnsproviderprofile:gcp_profile> use_gcp_network
[admin:10-1-1-1]: ipamdnsproviderprofile:gcp_profile> vpc_network_name gcp_vcp
[admin:10-1-1-1]: ipamdnsproviderprofile:gcp_profile> region_name us-central1
[admin:10-1-1-1]: ipamdnsproviderprofile:gcp_profile> network_host_project_id net1
[admin:10-1-1-1]: ipamdnsproviderprofile:gcp_profile> save
[admin:10-1-1-1]: ipamdnsproviderprofile> save
+-------------------------+-------------------------------------------------------------+
| Field                   | Value                                                       |
+-------------------------+-------------------------------------------------------------+
| uuid                    | ipamdnsproviderprofile-e39d51e5-2170-415d-b4ac-7a82068b2bc5 |
| name                    | gcp-ipam                                                    |
| type                    | IPAMDNS_TYPE_GCP                                            |
| gcp_profile             |                                                             |
| match_se_group_subnet   | False                                                       |
| use_gcp_network         | True                                                        |
| region_name             | us-central1                                                 |
| vpc_network_name        | gcp_vcp                                                     |
| allocate_ip_in_vrf      | False                                                       |
| network_host_project_id | net1                                                        |
| tenant_ref              | admin                                                       |
+-------------------------+-------------------------------------------------------------+
Navigate to Templates > Profiles > IPAM/DNS Profiles and click on Create.
Choose the Type as Google Cloud Platform IPAM from the drop-down list.
For the Profile Configuration, click Manual Configuration and enter the details for:
- Network Host Project ID
- Service Engine Project ID
- Region Name
- VPC Network Name
Alternatively, select Derive from Controller to obtain the parameters for the profile configuration.
Click on Add Usable Network to specify the network details.
Attaching GCP IPAM
To attach GCP IPAM to a Linux Server cloud, edit the Default-Cloud. Choose the IPAM provider that was created as the GCP IPAM provider.
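In the Avi CLI, this attachment can be sketched as below, assuming the IPAM profile created earlier is named gcp-ipam (the prompt follows the CLI transcript shown earlier; treat this as an illustrative sketch, not the definitive procedure):

```
[admin:10-1-1-1]: > configure cloud Default-Cloud
[admin:10-1-1-1]: cloud> ipam_provider_ref gcp-ipam
[admin:10-1-1-1]: cloud> save
```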
Setting up Service Engine
Follow the instructions provided in the Installation and Configuration of Avi Controller & Avi Service Engines section in the Avi Deployment Guide for Google Cloud Platform (GCP) document for creating Service Engines.
Add the created Service Engines to the Linux Server cloud.
Configure the Linux server cloud using the IP addresses of the Service Engine instances.
Configuring Virtual Service
Follow the instructions provided in the Creating Virtual Service and Verifying Traffic section in the Avi Deployment Guide for Google Cloud Platform (GCP) document for creating virtual services.
Ensure that the VIP configured is not in the same subnet as that of the Service Engines.