Service Discovery

Service Discovery Definition

Service discovery is the process of automatically detecting devices and services on a network. Service discovery protocol (SDP) is a networking standard that accomplishes this detection by identifying the resources a network offers. Traditionally, service discovery reduces configuration effort for users, who are simply presented with compatible resources, such as a Bluetooth-enabled printer or server.

More recently, the concept has been extended to network or distributed container resources, which are discovered and accessed as ‘services’.

[Diagram: service discovery among a service registry, service provider and service consumer in application delivery.]
FAQs

What is Service Discovery?

Service discovery locates devices and services on a network automatically, so there is no need for a lengthy configuration and setup process. It works by having devices connect through a common language on the network, allowing devices or services to find one another without any manual intervention (e.g., Kubernetes service discovery, AWS service discovery).

There are two types of service discovery: server-side and client-side. Server-side service discovery allows client applications to find services through a router or a load balancer. Client-side service discovery allows client applications to find services by querying a service registry, which holds all service instances and their endpoints.

How does Service Discovery Work?

There are three components to Service Discovery: the service provider, the service consumer and the service registry.

1) The Service Provider registers itself with the service registry when it enters the system and de-registers itself when it leaves the system.

2) The Service Consumer gets the location of a provider from the service registry, and then connects directly to the service provider.

3) The Service Registry is a database that contains the network locations of service instances. The service registry needs to be highly available and up to date so clients can rely on the network locations obtained from it. A service registry typically consists of a cluster of servers that use a replication protocol to maintain consistency.
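
The three roles can be illustrated with a minimal in-memory sketch (hypothetical names and endpoints; a production registry such as Consul or etcd adds replication, health checks, and leases):

```python
import random
import threading

class ServiceRegistry:
    """Toy single-node service registry: name -> set of host:port endpoints."""

    def __init__(self):
        self._instances = {}
        self._lock = threading.Lock()  # providers and consumers call in concurrently

    def register(self, service, endpoint):
        # Called by a service provider when it enters the system.
        with self._lock:
            self._instances.setdefault(service, set()).add(endpoint)

    def deregister(self, service, endpoint):
        # Called by a service provider when it leaves the system.
        with self._lock:
            self._instances.get(service, set()).discard(endpoint)

    def lookup(self, service):
        # Called by a service consumer to obtain provider locations.
        with self._lock:
            return list(self._instances.get(service, ()))

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")    # provider enters the system
registry.register("orders", "10.0.0.6:8080")
endpoint = random.choice(registry.lookup("orders"))  # consumer locates a provider
print("connecting to", endpoint)
registry.deregister("orders", "10.0.0.5:8080")  # provider leaves the system
```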

What is Service Discovery in Microservices?

Microservices service discovery is a way for applications and microservices to locate each other on a network. Service discovery implementations within a microservices architecture include both:

• a central server (or servers) that maintains a global view of addresses.
• clients that connect to the central server to update and retrieve addresses.

What are the Advantages of Service Discovery (Server-side & Client-side)?

The advantage of server-side service discovery is that it makes the client application lighter: the client simply makes a request to the router, which handles the lookup procedure on its behalf.

The advantage of client-side service discovery is that the client application does not have to send traffic through a router or a load balancer, and therefore avoids that extra hop.
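
Building on the registry sketch above, here is client-side discovery in miniature; the balancing decision (here just a random pick) moves into the client, which then talks to the provider directly:

```python
import random

def client_side_request(registry, service):
    # Client-side discovery: the client queries the registry itself and
    # connects straight to a provider, avoiding the router/load balancer hop.
    instances = registry.lookup(service)
    if not instances:
        raise RuntimeError(f"no live instances of {service!r}")
    return random.choice(instances)

# Uses the ServiceRegistry instance from the sketch above.
print("direct request to", client_side_request(registry, "orders"))
```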

Does Avi Offer Service Discovery?

Yes, Avi offers service discovery, which automatically maps service host/domain names to the virtual IP addresses where they can be accessed and presents them in a visual, dynamic “application map”. Service discovery bridges the gap between a service’s name and its access information (IP address) by providing a dynamic mapping between the two. Users of all services (people using browsers or apps, or other services) use well-known DNS mechanisms to obtain service IP addresses. The service discovery database must be kept up to date with this mapping as services are created and destroyed. The “global state” (available service IP addresses) of the application across sites and regions also resides in the service discovery database and is accessible by DNS.
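
Those well-known DNS mechanisms are available from any standard library; for example (the hostname below is purely illustrative):

```python
import socket

# Resolve a service name to its current virtual IP address(es) via DNS,
# exactly as a browser or another service would.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "orders.example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # one line per A/AAAA record in the DNS answer
```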

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Service-Oriented Architecture

Service-Oriented Architecture Definition

Service-Oriented Architecture (SOA) is a software design and development model for application components that incorporates discovery, control, security and more over a network.

[Diagram: service-oriented architecture as a triangle, from service provider to service registry to service consumer and back. Service-oriented architecture allows you to publish services, discover services and bind to services.]
FAQs

What is Service-Oriented Architecture?

Service-oriented architecture (SOA) is a software architecture style in which distributed application components incorporate discovery, data mapping, security and more. Service-oriented architecture has two main functions:

1) Create an architectural model that defines the goals of applications and the methods that will help achieve those goals.

2) Define implementation specifications linked through WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) specifications.
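
As a taste of what those specifications look like in practice, here is a hedged sketch using the third-party zeep SOAP client; the WSDL URL and operation name are placeholders, not a real service:

```python
# pip install zeep
from zeep import Client

# The WSDL document is the machine-readable service contract; zeep reads it
# and exposes each declared operation as a callable method, building and
# parsing the SOAP envelopes automatically.
client = Client("https://example.com/orders?wsdl")    # hypothetical WSDL URL
result = client.service.GetOrderStatus(orderId="42")  # hypothetical operation
print(result)
```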

Service-oriented architecture principles are made up of nine main elements:

1. Standardized Service Contract, where services are defined, making it easier for client applications to understand the purpose of each service.

2. Loose Coupling is a way of interconnecting components within the system or network so that components depend on one another to the least extent practicable. When a service’s functionality or settings change, there is no downtime or breakage in the running application.

3. Service Abstraction hides the logic behind what the application is doing. It only tells the client application what it is doing, not how it executes the action.

4. Service Reusability divides services with the intent of reusing them as much as possible, to avoid spending resources on building the same code and configurations again.

5. Service Autonomy ensures that the logic of a task or request is completed within the code.

6. Service Statelessness, whereby services do not retain information from one state to the next in the client application.

7. Service Discoverability allows services to be discovered via a service registry.

8. Service Composability breaks larger problems down into smaller elements, segmenting the service into modules and making it more manageable.

9. Service Interoperability governs the use of standards (e.g. XML) to ensure broad usability and compatibility.

How Does Service-Oriented Architecture Work?

A service-oriented architecture (SOA) works as a provider of application services to other components over a network. Service-oriented architecture makes it easier for software components to work with each other across multiple networks.

Service-oriented architecture is implemented with web services (based on WSDL and SOAP) so that services are accessible over standard internet protocols, independent of platform and programming language.

Service-oriented architecture has three major objectives, all of which focus on parts of the application cycle:
1) Structure processes and software components as services – making it easier for software developers to create applications in a consistent way.
2) Provide a way to publish available services (functionality and input/output requirements) – allowing developers to easily incorporate them into applications.
3) Control the usage of these services for security purposes – mainly around the components within the architecture, and securing the connections between those components.

Microservices architecture software is largely an updated implementation of service-oriented architecture (SOA). The software components are created as services to be consumed via APIs, ensuring security and best practices, just as in traditional service-oriented architectures.

What are the Benefits of Service-Oriented Architecture?

The main benefits of service-oriented architecture solutions are:
• Extensibility – easily able to expand or add to it.
• Reusability – opportunity to reuse multi-purpose logic.
• Maintainability – the ability to keep it up to date without having to rebuild the architecture from scratch with the same configurations.

What is Service-Oriented Architecture Testing?

Service-oriented architecture includes a testing phase so that the end user is satisfied with the quality of the product. SOA testing is not limited to individual layers and web services; it is the overall testing of the whole architecture. The testing process spans three layers of the architecture: service consumers, process layers, and service layers. Testing can be divided into four tiers:

• Tier 1 – Includes service-level testing, functional testing, security testing and performance testing. Service-level testing comes first: it tests individual services based on their requests and responses. Functional testing checks services against their business needs to determine whether the responses they return are correct; business needs are turned into test cases, requests are formed, and both positive and negative response scenarios must be executed (a brief sketch follows this list). Security testing plays a big role in the process: gateways must authenticate requests, data must be encrypted and verified once deciphered, and vulnerabilities such as XML attacks, CSRF and SQL injection must be checked.

• Tier 2 – Process testing involves testing multiple business processes, comprising the integration scenarios and applications that cover the business requirements. Using simulators, you can generate sample input data and validate the corresponding outputs. In this tier you also test the data flow between the different layers to demonstrate smooth functioning once everything is integrated.

• Tier 3 – End-to-end testing validates the business requirements, both functional and non-functional, as well as the application UI, end-to-end data flow, and the integration of all the services.

• Tier 4 – Regression testing verifies the system’s stability, achieved by manual or automated testing.
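
To make Tier 1 concrete, here is a minimal functional-test sketch in the request/response style described above; the URL, payload fields and expected status codes are hypothetical stand-ins for a real service contract:

```python
# pip install pytest requests
import requests

BASE = "https://example.com/api/orders"  # hypothetical service endpoint

def test_known_order_returns_contracted_fields():
    # Positive scenario: a valid request yields a 200 and the contracted shape.
    resp = requests.get(f"{BASE}/42", timeout=5)
    assert resp.status_code == 200
    assert {"orderId", "status"} <= resp.json().keys()

def test_unknown_order_is_rejected():
    # Negative scenario: requests outside the contract must fail cleanly.
    resp = requests.get(f"{BASE}/does-not-exist", timeout=5)
    assert resp.status_code == 404
```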

Does Avi Offer A Service-Oriented Architecture?

No, Avi delivers multi-cloud application services such as load balancing for containerized applications with microservices architecture through dynamic service discovery, application maps, and micro-segmentation. Avi integrates with OpenShift and Kubernetes for container orchestration and security.

Containerized applications based on microservices architecture require a modern, distributed application services platform to deliver an ingress gateway. Manual inputs are no longer an option for web-scale, cloud-native applications deployed as microservices using container technology. In some instances, container clusters can have tens or hundreds of pods comprising hundreds or thousands of containers, mandating full automation and policy-driven deployments.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

SDN Load Balancing

SDN Load Balancing Definition

SDN load balancing stands for software-defined networking load balancing.
An SDN-based load balancer physically separates the network control plane from the forwarding plane. More than one device can be controlled at the same time when load balancing with SDN, and this global view leads to more optimized load balancing.

[Diagram: SDN load balancing from applications through the data forwarding plane and network control plane to physical servers and virtual machines.]
FAQs

What is SDN load balancing?

Software-defined networking (SDN) provides flexible control so enterprises can react to changing business requirements more quickly. Load balancing in SDN separates the physical network control plane from the data plane. An SDN-based load balancer allows for the control of multiple devices, which is how networks become more agile: the network control plane can be programmed directly for more responsive and efficient application services.

While compute and storage have seen innovations in virtualization and automation, networks have been lagging behind. Load balancing using SDN allows the network to function like the virtualized versions of compute and storage.

How does load balancing using SDN work?

Software-defined networking (SDN) load balancing lifts protocol decisions out of the hardware level to allow for improved network management and diagnosis. SDN controller load balancing makes data-path control decisions without having to rely on the algorithms baked into traditional network equipment. An SDN-based load balancer saves running time by having control over an entire network of application and web servers.

Load balancing in SDN discovers the best path and server for the fastest delivery of requests.
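
The kind of globally informed path decision this enables can be illustrated with a toy sketch (pure illustration: real controllers such as OpenDaylight or ONOS expose far richer APIs, and the topology and load figures below are invented):

```python
def pick_path(paths, link_load):
    """Global view: choose the path whose busiest link is least loaded."""
    return min(paths, key=lambda path: max(link_load[link] for link in path))

# Hypothetical switch-to-switch links and their current utilization.
link_load = {"s1-s2": 0.7, "s1-s3": 0.2, "s2-s4": 0.4, "s3-s4": 0.3}
paths = [("s1-s2", "s2-s4"), ("s1-s3", "s3-s4")]
print(pick_path(paths, link_load))  # -> ('s1-s3', 's3-s4')
```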

What are the benefits of SDN load balancing?

Software-defined networking (SDN) load balancing includes the following benefits:

  • Lower cost
  • Greater scalability
  • Higher reliability
  • Flexibility in configuration
  • Reduced time to deploy
  • Automation
  • Ability to build a network without any vendor-specific software/hardware

How does an SDN-based load balancer work in cloud computing?

Traditional load balancers have limitations in flexibility and adaptability. The increase in cloud usage requires more rapid allocation of resources that an SDN-based load balancer is best equipped to handle.

An SDN-based load balancer can dynamically manage massive traffic flows by using virtual switching technology instead of traditional hardware switching. SDN controller load balancing monitors the data throughput of each port using variance analysis and then redirects traffic accordingly. This gives users greater scalability, higher reliability and lower cost.

Does Avi offer SDN load balancing?

Yes. Avi offers elastic load balancing for SDN environments. Avi provides 100% software load balancing to ensure a fast, scalable and secure application experience. It delivers elasticity and intelligence across any environment. Avi’s intent-based application services platform scales from 0 to 1 million SSL transactions per second in minutes. It achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach. Built on software-defined architectural principles, Avi extends L2-L3 network automation benefits from SDN solutions to L4-L7 services.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Software Defined Load Balancing

Software Defined Load Balancing Definition

Software defined load balancing is built on an architecture with a centralized control plane and a distributed data plane. The control plane is the brain behind the services delivered by the data plane. It receives and analyzes the continuous stream of application telemetry sent by the distributed load balancers across the environments to decide on service placement, autoscaling, and high availability for each application. Software defined load balancers can be deployed across multiple environments (data center or public cloud) and managed by the control plane.

[Diagram: software defined load balancing, with the software load balancer acting as the gateway between the control plane and the data plane.]
Software Defined Load Balancing FAQs

What is Software Defined Load Balancing?

A software defined load balancer sits in front of servers and directs traffic, but not as a physical appliance. The load balancing software routes client requests across all servers to achieve optimal speed and utilization and prevents degraded performance by ensuring that no one server is overworked. The software defined load balancer redirects traffic if one server fails, distributing requests to the remaining online servers. It also sends requests automatically to any new server that is added to the server group.

In other words, software defined load balancing efficiently distributes network load and client requests across all appropriate servers. It offers the administrator the ability to add or remove servers as needed. And it ensures reliability and high availability by routing traffic only to available servers.

Load balancing for software defined networking (SDN) allows for improved diagnosis and network management by removing protocols at the hardware level. While traditional network equipment defines algorithms for classic load balancers, software defined networking load balancing does not rely on those algorithms to make data path control decisions.

Software Defined Load Balancing vs Hardware Based Load Balancing

Network traffic is routed between web servers using software defined load balancing. Software defined load balancers do not require proprietary hardware and run on standard x86 servers. Software load balancers can examine application-level characteristics such as the HTTP header, the IP address, and the contents of the packet to evaluate client requests. They then select which server will receive a particular request. Due to the processing power of modern x86 servers, software defined load balancers with the right architecture have removed the performance advantages of hardware based load balancers.

A hardware based load balancer runs load balancing software as a stand-alone physical appliance. Traditionally, these units are deployed in (active/standby) pairs, so there is always a backup in case one hardware based load balancer fails. They are usually capable of high-performance functions, and can process many gigabits of traffic from applications of all kinds. A hardware based load balancer may also be available as a virtual appliance which uses the same architecture requiring active/standby pairs.

In contrast, software defined load balancing typically looks quite different:

Software load balancing works just like hardware based load balancing, using a chosen algorithm to distribute traffic among a pool of servers, but with software rather than a dedicated load balancing device.

Software defined load balancers may run in containers, on common hypervisors, or on bare-metal servers with minimal overhead as Linux processes. Depending on the technical requirements and use cases in question, they are highly configurable.

Software defined load balancers can reduce hardware expenditures and save space while matching performance and delivering even greater flexibility. In this way, they fully replace load balancing hardware.

Software Defined Load Balancing Methods

Software defined load balancers may be deployed as load balancer as a service (LBaaS), or installed directly onto an x86 server or virtual machine, and may be located on-premises or off. The LBaaS option places the burden of installing, configuring, maintaining, updating, and managing the load balancing software on the service provider.

Software defined load balancers distribute workloads across multiple servers to make a network more reliable and efficient. By using available servers in the most efficient way, and ensuring servers don’t get overloaded and workloads are well-managed, load balancing increases network capacity and helps the network run faster. Software defined load balancing also directs traffic away from failed servers to functional servers, ensuring reliability of services despite infrastructure failures.

Software defined load balancing uses a variety of algorithms to evaluate client requests and route traffic in real time. The administrator selects the load balancing policy and method the software uses to route traffic.

Software defined load balancers most often determine where network traffic should go based on one of these methods (sketched in code after the list):

  • Round-robin algorithm. The simplest load balancing method, the round-robin algorithm simply moves requests through a list of available servers in order.
  • Least-connections algorithm. The least-connections algorithm sends requests to the servers that are least busy at a given moment, that is, those processing the fewest workloads.
  • Least-time algorithm. The least-time algorithm selects servers based on both the fewest active requests and the fastest processing speed. Such implementations often also let administrators favor servers with better compute, capacity, or memory by integrating weighted load-balancing algorithms.
  • Hash-based algorithm. The hash-based algorithm derives a unique hash key from the source and destination IP addresses of the client and server. This ensures that repeat requests from the same user are directed to the same server, which retains the previous session data.
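
A minimal sketch of three of these methods (least-time is omitted because it needs live latency measurements; the server pool and addresses are illustrative):

```python
import hashlib
import itertools

servers = ["10.0.0.5", "10.0.0.6", "10.0.0.7"]  # hypothetical pool

# Round-robin: walk the pool in order, wrapping around at the end.
_rr = itertools.cycle(servers)
def round_robin():
    return next(_rr)

# Least-connections: track active requests per server, pick the least busy.
active = {s: 0 for s in servers}
def least_connections():
    return min(active, key=active.get)

# Hash-based: a stable hash of the client/server address pair pins a client
# to one server, so its session data stays on that server across requests.
def hash_based(client_ip, server_ip="203.0.113.10"):
    digest = hashlib.md5(f"{client_ip}-{server_ip}".encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(round_robin(), least_connections(), hash_based("198.51.100.7"))
```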

What are the Advantages of Software Defined Load Balancing?

Software defined load balancing offers multiple advantages over hardware-based load balancing:

  • Scalability. Scalability is the greatest advantage software defined load balancers have over hardware based load balancing. Software defined load balancers can respond in real-time, automatically, to network traffic fluctuations by adding or dropping virtual servers based on demand.
  • Cost. Especially using LBaaS, organizations can save money with software defined load balancers.
  • Flexibility. Software defined load balancers are more flexible than hardware based load balancers because they function within a variety of environments, including cloud environments, standard desktop operating systems, web servers, virtual servers, bare metal, and containers. Hardware load balancers are not programmable and are just not as flexible.
  • Ease of Use. Software defined load balancers are simply deployed on demand, saving time and money. Hardware load balancers can be costly and difficult to install.
  • Security. Load balancing software provides an extra layer of security as it sits between the client and the server, rejecting suspicious packets.

There are clear advantages to software defined load balancing, but every organization must balance its unique needs against the pros and cons of software, as-a-service, and hardware load balancing. Any properly configured and managed load balancer can render a network safer and more reliable.

Does Avi Offer Software Defined Load Balancing?

Yes. Avi offers 100% software defined load balancing to ensure a fast, scalable and secure application experience. It delivers elasticity and intelligence across any environment. Avi’s intent-based application services platform scales from 0 to 1 million SSL transactions per second in minutes. It achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Server Load Balancing

Server Load Balancing Definition

Server Load Balancing (SLB) is a technology that distributes traffic for high-traffic sites among several servers using a network-based hardware or software-defined appliance. When load balancing across multiple geographic locations, the intelligent distribution of traffic is referred to as global server load balancing (GSLB). The servers can be on premises in a company’s own data centers, or hosted in a private cloud or the public cloud.
A server load balancer intercepts traffic for a website and reroutes that traffic to servers.

[Diagram: server load balancing distributing application traffic from end users over the internet, through a software or hardware load balancer, to multiple servers as required for application delivery.]
FAQs

What is Server Load Balancing?

Server Load Balancing (SLB) provides network services and content delivery using a series of load balancing algorithms. It prioritizes responses to the specific requests from clients over the network. Server load balancing distributes client traffic to servers to ensure consistent, high-performance application delivery.

Server load balancing ensures application delivery, scalability, reliability and high availability.

How does Server Load Balancing Work?

Server load balancing relies on two main types of load balancing:

• Transport-level load balancing is a DNS-based approach that acts independently of the application payload.
• Application-level load balancing uses traffic load to make balancing decisions, as with Windows Server load balancing.

What are the Advantages of Server Load Balancing?

Distributing incoming network traffic through web server load balancers across multiple servers aims to increase efficiency of application delivery to end users for a reliable application experience. IT teams are increasingly relying on server load balancers to:

• Increase Scalability: load balancers can spin server resources up or down based on spikes in traffic, directing requests to the pool of servers best suited to handle the increase and keeping application performance optimized.

• Redundancy: using multiple web servers to deliver applications or websites provides a safeguard against inevitable hardware failure and application downtime. When server load balancers are in place, they can automatically transfer traffic from servers that go down to working servers, with little to no impact on the end user.

• Maintenance and Performance: businesses with web servers distributed across multiple locations and a variety of cloud environments can schedule maintenance at any time with minimal impact on application uptime, because server load balancers can redirect traffic to resources that are not undergoing maintenance.

What is the Difference Between HTTP Server Load Balancing and TCP Load Balancing?

HTTP server load balancing uses a simple HTTP request/response architecture for HTTP traffic, whereas a TCP load balancer is for applications that do not speak HTTP. TCP load balancing can be implemented at layer 4 or at layer 7. An HTTP load balancer is a reverse proxy that can perform extra actions on HTTPS traffic.

Does Avi offer Server Load Balancing?

Yes. Avi Networks delivers modern, multi-cloud load balancing, including an entirely innovative way of handling local and global server load balancing for enterprise customers across the data center and clouds.
This capability delivers:

• Active / standby data center traffic distribution
• Active / active data center traffic distribution
• Geolocation database and location mapping
• Consistency across data centers
• Rich visibility and metrics for all transactions

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

SSL Proxy

SSL Proxy Definition

An SSL proxy is a transparent proxy that performs Secure Sockets Layer (SSL) encryption and decryption between the client and the server. Neither the server nor the client can detect its presence. A TLS proxy is similarly used by companies to handle incoming TLS connections, and is becoming more prominent.

[Diagram: an SSL proxy performing Secure Sockets Layer (SSL) encryption and decryption between application users and web servers in application delivery.]
FAQs

What is SSL Proxy?

SSL proxies control Secure Sockets Layer (SSL) traffic to ensure secure transmission of data between a client and a server. The SSL proxy is transparent: it performs SSL encryption and decryption between the client and the server without either endpoint being aware of its presence.

The SSL proxy also reproduces server certificates, so that it can present a trusted identity to the client while making a secure (SSL) or unsecured (HTTP) connection to the web server.

What is an SSL Proxy Server?

A proxy server is an intermediary between a user’s computer and the Internet. A user first connects to a proxy server when requesting web pages, videos or any data online. The proxy server then returns data it has previously cached; if the request is entirely new, it fetches the data from the original source and caches it for future use.

A Secure Sockets Layer (SSL) proxy server ensures secure transmission of data with encryption technology. Security in an SSL connection relies on proxy SSL certificates and public-private key exchange pairs. SSL offload and SSL inspection features require the servers to share their secret keys so the proxy can decrypt the SSL traffic.

How Does SSL Proxy Work?

A key function of an SSL proxy is to emulate server certificates. This allows a web browser to use a trusted certificate to validate the identity of the web server. SSL encrypts data to ensure that communications are private and that the content has not been tampered with.

The SSL proxy does the following (a minimal sketch follows the list):

• Acts as a client toward the server, negotiating the keys used to encrypt and decrypt.
• Acts as a server toward the client, first authenticating the original server certificate and then issuing a new certificate along with a replacement key.
• Decrypts and re-encrypts traffic in each direction (client and server), using different keys for encryption and decryption on each side.
• Hands off HTTPS traffic to the HTTP proxy for protocol optimization and other acceleration techniques.
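
As a rough illustration of the termination half of this flow, here is a hedged Python sketch using only the standard library; certificate paths, addresses and the single-request handling are placeholders, and a real SSL proxy would also re-encrypt toward the server and manage both key pairs:

```python
import socket
import ssl

# Present the emulated server certificate to clients (paths are placeholders).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

listener = socket.create_server(("0.0.0.0", 443))
with ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()            # TLS handshake with the client
    backend = socket.create_connection(("10.0.0.8", 80))  # hypothetical web server
    backend.sendall(conn.recv(65536))             # request, decrypted by the proxy
    conn.sendall(backend.recv(65536))             # reply, re-encrypted to the client
```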

What are the Benefits of SSL Proxy?

• Decrypts SSL traffic to obtain granular application information.
• Enforces use of strong protocols and algorithms by the client and the server.
• Provides visibility and protection against threats embedded in SSL encrypted traffic.
• Controls what needs to be decrypted.

Does Avi Offer SSL Proxy?

Yes. When Avi is serving as an SSL proxy for the back-end servers in the service’s pool, Avi communicates with the client over SSL/TLS.

Service Proxy

Service Proxy Definition

A service proxy is a client-side proxy for microservices applications that allows the application components to send and receive traffic. The main job of a service proxy is to ensure traffic is routed to the right destination service or container and to apply security policies. Service proxies also help ensure that application performance needs are met.

[Diagram: a service proxy routing traffic from application clients to microservices applications according to configured rules.]
FAQs

What is a Service Proxy?

A service proxy is a network component that acts as an intermediary for requests seeking resources from microservices application components. A client connects to the service proxy to request a particular service (file, connection, web page, or other resource) provided by one of the microservices components. The service proxy evaluates the request and routes it appropriately based on the configured load balancing algorithm.

What is a Web Service Proxy?

A web service proxy is a gatekeeper between a calling application and the target web service. The proxy can introduce new behaviors within the request sequence. A web service proxy can then:

• Add or remove HTTP headers
• Terminate or offload SSL requests
• Perform URL filtering and content switching
• Provide content caching
• Support blue-green deployments and canary testing

How does a Service Proxy Work?

A service proxy works by acting as an intermediary between the client and the server; in a service proxy configuration, there is no direct communication between the two. Service proxies are typically centrally managed and orchestrated. The client connects to the proxy and sends requests for resources (a document, web page, or file) located on a remote service. The proxy handles the request by fetching the required assets from the remote service and forwarding them to the client.
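
In miniature, the routing step looks something like the toy table below (service names, addresses and the random pick are illustrative; real service proxies such as Envoy do far more, including health checking and retries):

```python
import random

# Hypothetical routing table: service name -> live instances.
ROUTES = {
    "orders":  ["10.1.0.5:8080", "10.1.0.6:8080"],
    "billing": ["10.2.0.5:8080"],
}

def route(service, pick=random.choice):
    # The proxy, not the client, chooses the destination instance,
    # applying whatever load balancing algorithm is configured.
    instances = ROUTES.get(service)
    if not instances:
        raise LookupError(f"unknown service {service!r}")
    return pick(instances)

print(route("orders"))
```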

What are the Advantages of a Service Proxy?

There are many advantages of service proxies:

1) Granularity: The service proxy is a necessary part of the networking infrastructure for microservices applications. It offers the granularity of application services needed to deliver scalable microservices applications.

2) Traffic management: Service proxies can deliver both local and global load balancing services to applications within and across data centers.

3) Service Discovery: With the right architecture, service proxies can provide service discovery by mapping service host/domain names to the correct virtual IP addresses where they can be accessed.

4) Monitoring and Analytics: Service proxies can provide telemetry to help monitor application performance and alert on any application failures or anomalies.

5) Security: Service proxies can be configured to enforce L4-L7 security policies, application rate limiting, web application firewalling, URL filtering and other security services such as micro-segmentation.

Does Avi Offer Service Proxy?

Yes! Avi offers a service proxy for microservices applications as part of the Container Ingress for Kubernetes and OpenShift based applications. Avi service proxies are a distributed fabric of software load balancers that run on individual containers adjacent (as a side car) to the container representing each microservice. The capabilities of Avi’s Container Ingress are fully described in this white paper. Avi’s service proxies include:

• Full-featured load balancer, including advanced L7 policy-based switching, SSL offload, and data plane scripting
• East-west and north-south traffic management
• Health monitoring of microservices with automatic state synchronization
• 100% REST API with automation and self-service
• Centralized policy management and orchestration

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

For more information on Service Proxy see the following resources:

Scaleout Architecture

A scaleout architecture allows an application to dynamically change the amount of data plane resources that it uses to process traffic. Scaling out extends the application across additional load balancers, while scaling in reduces the number of load balancers powering the application.

SSL Offloading

SSL Offloading Definition

SSL offloading is the process of removing SSL-based encryption from incoming traffic so that a web server is relieved of the work of decrypting data. Secure Sockets Layer (SSL) is a protocol that ensures the security of HTTP traffic and HTTP requests on the internet. SSL traffic can be compute intensive since it requires encryption and decryption of traffic. SSL (now called TLS, for Transport Layer Security) relies on public key cryptography to encrypt communications between the client and server, sending messages safely across networks. Encryption of sensitive information protects against potential hackers and man-in-the-middle attacks.

[Diagram: SSL offloading through a load balancer that secures HTTP/HTTPS traffic between applications and web servers.]
FAQs

What is SSL Offloading?

SSL is a cryptographic protocol that secures communications over the internet. SSL encoding ensures user communications are secure. The encryption and decryption of SSL are CPU intensive and can put a strain on server resources. To balance the compute demands of encrypting and decrypting traffic sent via SSL connections, SSL offloading moves that processing to a dedicated server, freeing the web server to handle other application delivery demands.

How does SSL Offloading Work?

SSL offloading relieves a web server of the processing burden of encrypting and decrypting traffic sent via SSL. Every web browser supports the SSL security protocol, making SSL traffic common. The processing is offloaded to a separate server designed specifically to perform SSL acceleration or SSL termination. SSL certificates use cryptographic keys for encryption. RSA keys of increasing length (e.g. 1024 bits and 2048 bits) were the most common cryptographic keys until a few years ago, but more efficient ECC (Elliptic Curve Cryptography) keys of shorter length are replacing RSA keys as the mechanism to encrypt traffic.
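
The two key types can be compared with the third-party cryptography package (a sketch only; the key sizes follow the figures above, and the curve choice is a common default rather than a recommendation):

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ec, rsa

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ecc_key = ec.generate_private_key(ec.SECP256R1())  # far shorter key, comparable strength

print(rsa_key.key_size)        # 2048
print(ecc_key.curve.key_size)  # 256
```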

How to Configure SSL Offloading?

To configure SSL offloading, organizations enable routing of SSL requests to an application delivery controller that intercepts SSL traffic, decrypts the traffic, and forwards it to a web server. In SSL offloading, importing a valid certificate and key and binding them to the web server are important to ensure correct exchange of unencrypted traffic.

What is SSL Offloading in a Load Balancer?

SSL offloading on a load balancer is now a required capability, and such load balancers are also referred to as SSL load balancers. This is a load balancer that has the ability to encrypt and decrypt data transported via HTTPS, which uses the SSL protocol to secure data across the network.

Does Avi Offer SSL Offloading?

Yes, Avi provides SSL offloading of encrypted traffic that uses RSA 2K keys as well as traffic that uses ECC keys. Avi delivers high performance for SSL offloading, along with a number of enterprise-grade features to help understand the health of SSL traffic, including alerts on incorrect versions, and to troubleshoot SSL-related issues.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.
