Static Load Balancing


Static Load Balancing Definition

A load balancer is a hardware device or software platform that prevents single servers from becoming overloaded. Load balancers use algorithms to distribute network traffic between servers. An algorithm is the logic or set of predefined rules used for sorting, and load balancing algorithms can be static or dynamic.

Static load balancing algorithms in cloud computing do not account for the state of the system, or measures such as load level of processors, as they distribute tasks. Static algorithms divide traffic equally among servers or based on other rules that are not sensitive to system states and are therefore intended for systems with very little variation in load. Static load balancing algorithms demand in-depth knowledge of server resources at the time of implementation for better processor performance.

Dynamic load balancing algorithms require real-time communication with the network because they identify the lightest server and give it preference. The dynamic algorithm controls the load based on the present state of the system and transfers traffic to underutilized machines from highly utilized machines in real-time.

This image shows a comparison between static load balancing and dynamic load balancing; in the static example, a hardware load balancer distributes traffic across one fixed set of web servers.

Static Load Balancing FAQs

What is Static Load Balancing?

A load balancer is a hardware or software device that effectively distributes traffic across healthy servers to prevent any one server from becoming overloaded. There are two basic approaches to load balancing: static load balancing and dynamic load balancing.

The difference between static and dynamic load balancing

Static load balancing methods distribute traffic without adjusting for the current state of the system or the servers. Some static algorithms send equal amounts of traffic, either in a specified order or at random, to each server in a group. Dynamic load balancing algorithms account for the current state of the system and each server and base traffic distribution on those factors.

A static load balancing algorithm does not account for the state of the system as it distributes tasks. Instead, distribution is shaped by assumptions and knowns about the overall system established before sorting starts. These include knowns such as the number of processors, their communication speeds, and their computing power, and assumptions such as the resource requirements, response times, and arrival times of incoming tasks.

Static load balancing algorithms in distributed systems minimize specific performance functions by associating a known set of tasks with available processors. These types of load balancing strategies typically center around a router that optimizes the performance function and distributes loads. The benefit of static load balancing in distributed systems is ease of use and quick, simple deployment, although there are some situations that are not best served by this kind of algorithm.

Dynamic algorithms account for the current load of each node or computing unit in the system, achieving faster processing by moving tasks dynamically away from nodes that are overloaded toward nodes that are underloaded. Dynamic algorithms are far more complicated to design, but especially when execution times for different tasks vary greatly, they can produce superior results. Furthermore, because there is no need to dedicate specific nodes to work distribution, a dynamic load balancing architecture is often more modular.

In unique assignment, each task is assigned to a processor once, based on the processor's state at a particular moment. In dynamic assignment, tasks can be redistributed permanently according to the state of the system and its evolution. Either way, any load balancing algorithm can actually slow down the overall process if it demands excessive communication to reach decisions.

Both dynamic and static load balancing techniques are shaped by other factors as well:

Nature of tasks. The nature of the tasks has a major impact on the efficiency of load balancing algorithms, so maximizing access to task information as algorithm decision making takes place increases optimization potential.

Task size. Exact knowledge of task execution time is extremely rare, but would enable optimal load distribution. There are several ways to estimate execution times. Where tasks are similarly sized, an average execution time might be used successfully. However, more sophisticated techniques are required when execution times are very irregular. For example, it is possible to add metadata to tasks and then make inferences for future tasks based on statistics gathered from the previous execution times of tasks with similar metadata.
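
As a minimal sketch of that metadata approach (all names here are illustrative, not from any particular framework), a scheduler might record observed execution times per metadata key and use the running average as the estimate for the next task:

    from collections import defaultdict

    # Observed execution times, keyed by task metadata such as a job type
    # or input-size bucket. All names here are illustrative.
    history = defaultdict(list)

    def record(metadata_key, seconds):
        history[metadata_key].append(seconds)

    def estimated_cost(metadata_key, default=1.0):
        times = history[metadata_key]
        # Fall back to a default guess until statistics accumulate.
        return sum(times) / len(times) if times else default

    record("resize-image", 0.8)
    record("resize-image", 1.2)
    print(estimated_cost("resize-image"))  # 1.0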

Dependencies. Tasks may depend on each other, and some cannot start until others are completed. Such interdependencies can be illustrated with a directed acyclic graph, and total execution time can be minimized by optimizing the task order. Some algorithms use metaheuristic methods to calculate optimal task distributions.
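
For example, a dependency-respecting execution order can be derived from the directed acyclic graph with a topological sort. This sketch uses Python's standard graphlib module; the task graph itself is hypothetical:

    from graphlib import TopologicalSorter

    # Map each task to the set of tasks it depends on (a hypothetical DAG).
    dependencies = {
        "build": {"fetch"},
        "test": {"build"},
        "package": {"build"},
        "deploy": {"test", "package"},
    }

    # static_order() yields tasks so that every task follows its dependencies.
    print(list(TopologicalSorter(dependencies).static_order()))
    # e.g. ['fetch', 'build', 'test', 'package', 'deploy']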

Segregation of tasks. This refers to the ability of tasks to be broken down into subtasks during execution, an important consideration in the design of load balancing algorithms because subdividable tasks give the algorithm more flexibility to rebalance work.

Hardware architecture among parallel units:

  • Heterogeneity. Parallel computing infrastructures often comprise units of different computing power, and load distribution must account for this variation. For example, units with less computing power should receive requests that demand less computation, or fewer requests of homogeneous or unknown size than more powerful units receive.
  • Memory. Parallel units are often divided into categories of shared and distributed memory. Shared memory units follow the PRAM model, all reading and writing in parallel on one common memory. Distributed memory units follow the distributed memory model, each unit exchanging information via messages and having its own memory. There are advantages to either type, but few systems fall squarely into one category or the other. In general, load balancing algorithms should be adapted specifically to a parallel architecture to avoid reducing the efficiency of parallel problem solving.


Hierarchy. The two main types of load balancing algorithms are controller-agent and distributed control. In the controller-agent model, the control unit assigns tasks to agents that execute the tasks and inform the controller of progress. In the case of dynamic algorithms, the controller can assign or reassign tasks. When control is distributed between nodes, the nodes share responsibility for assigning tasks, and the load balancing algorithm is executed on each of them. An intermediate strategy, with control nodes for sub-clusters, all under the purview of a global controller, is also possible. In fact, various multi-level strategies and organizations, using elements of both distributed control and controller-agent strategies, are possible.

Scalability. Computer architecture evolves, but it is better to avoid designing a new algorithm with every change to the system. Thus, the scalability of the algorithm, or its ability to adapt to a scalable hardware architecture, is a critical parameter. An algorithm is scalable for an input parameter when the size of the parameter and the algorithm’s performance remain relatively independent. The algorithm is called moldable when it can adapt to a varying number of computing units, but the user must select the number of computing units before execution. Finally, the algorithm is malleable if it can deal with a changing number of processors during execution.

Fault tolerance. The failure of a single component should never cause the failure of the entire parallel algorithm during execution, especially in large-scale computing clusters. Fault tolerant algorithms help detect problems while recovery is still possible.

Approaches to Static Load Balancing

In most static load distribution scenarios, there is little prior knowledge about the tasks. Even so, static load distribution is always possible, even when execution times are not known in advance.

Round Robin. Among the most widely used and simplest load balancing algorithms, round robin load balancing distributes client requests to application servers in rotation. It passes client requests in the order it receives them to each application server in turn, without considering server characteristics such as computing ability, availability, and load-handling capacity.
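
A minimal sketch of plain round robin (the backend list is hypothetical):

    import itertools

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backends
    rotation = itertools.cycle(servers)

    def route(request):
        # Every request goes to the next server in strict rotation,
        # regardless of each server's capacity or current load.
        return next(rotation)

    print([route(i) for i in range(4)])
    # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']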

Weighted Round Robin. A weighted round robin load balancing algorithm accounts for various application server characteristics within the context of the basic round robin algorithm. It still distributes traffic in turn, but gives preference to units the administrator “weights” based on chosen criteria—usually the ability to handle traffic. Instead of a classic round robin one-by-one distribution to every server in turn, weighted units receive more requests during their turns.
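
One simple way to sketch the weighting (the weights are illustrative) is to give each server a number of slots in the rotation proportional to its weight. Production implementations usually interleave the slots more smoothly, but the per-server share of requests is the same:

    import itertools

    # Administrator-assigned weights (illustrative): a bigger weight means
    # more slots in each rotation.
    weights = {"10.0.0.1": 3, "10.0.0.2": 1, "10.0.0.3": 1}

    rotation = itertools.cycle(
        [server for server, w in weights.items() for _ in range(w)]
    )

    def route(request):
        return next(rotation)

    print([route(i) for i in range(5)])
    # ['10.0.0.1', '10.0.0.1', '10.0.0.1', '10.0.0.2', '10.0.0.3']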

Opportunistic/Randomized Static. Opportunistic/randomized algorithms randomly assign tasks to the various servers without regard to current workload. This works better for smaller tasks, and performance falls as task size increases. At times bottlenecks arise, in part due to the randomness of the distribution. At any time a machine with a high load might randomly be assigned a new task, making the problem worse.

Most other strategies, such as consistent hash to selected IP addresses, fastest response, fewest servers, fewest tasks, least connections, least load, weighted least connection, and resource-based adaptive, are dynamic load balancing algorithms.

Does Avi Offer Static Load Balancing?

Yes. Avi is a complete load balancing solution. The heart of a load balancer is its ability to effectively distribute traffic across healthy servers. Avi provides a number of algorithms, each with characteristics that may be best suited for one use case versus another. Avi delivers multi-cloud application services including a software load balancer that helps ensure a fast, scalable, and secure application experience.

For more on the implementation of load balancers, check out our Application Delivery How-To Videos.

Single Point of Failure


Single Point of Failure Definition

A SPOF or single point of failure is any non-redundant part of a system that, if dysfunctional, would cause the entire system to fail. A single point of failure is antithetical to the goal of high availability in a computing system or network, a software application, a business practice, or any other industrial system.

Diagram depicts single point of failure (SPOF) system for high availability in a computing system, network, software application, business practice or other industrial system.
Single Point of Failure FAQs

What is a Single Point of Failure?

A single point of failure (SPOF) is essentially a flaw in the design, configuration, or implementation of a system, circuit, or component: a single malfunction or fault in that one place can cause the whole system to stop working. Depending on the interdependencies implicated in the failure and its location, a single point of failure in a data center may compromise workload availability or even the availability of the entire location. Productivity and business continuity decrease, and security is compromised.

Single points of failure are undesirable in systems that demand high availability and reliability, such as supply chains, networks, and software applications. In the context of cloud computing, SPOFs are possible in both software and hardware layouts.

To make a circuit or system more robust, audit for single points of failure. This way, the organization can plan to add redundancy at each level where a SPOF currently exists. Highly available systems should never rely on single components.

High-availability clusters and both physical redundancy and logical redundancy are key to avoiding SPOFs. If a system component fails, another component should immediately take its place. For example, a database in multiple locations can be accessed even if one location fails. It is important to identify software flaws that can cause outages and eliminate software-based single points of failure in cloud architecture.

How to Eliminate Single Points of Failure

To eliminate single points of failure, first identify the potential risks by conducting a single point of failure risk assessment across three main areas: hardware, software/providers/services, and people. Create a single point of failure analysis checklist detailing the general areas for assessment.

In each category, the IT team should conduct SPOF analysis and search for any unmonitored devices on the network, any software or hardware systems or providers that have no redundancy, people who cannot be replaced in case of emergency, and any data that isn't backed up. For each network component, identify what would be lost if that particular piece went down as part of your single point of failure analysis.

Achieve redundancy in computing at the internal component level, at the system level with multiple machines, or at site level with more than one location to avoid single points of failure.

Each individual server within a high-availability server cluster may achieve redundancy by having multiple hard drives, power supplies, and other components.

At the system level, ensure high availability for the server cluster with a load balancer. Spare servers can also be deployed in case of failure to achieve system-level redundancy.

At the personnel level, a single point of failure person has access to something no one else does, or conducts business critical tasks that no one else can handle.

Obviously, a data center itself supports other operations including business logic. As such, it is in itself a potential single point of failure for the business if its functions cannot be replicated elsewhere. Achieving this kind of replication is typically the focus of an IT disaster resiliency, business continuity, or recovery program.

Packet switching, used by “survivable communications networks” such as the internet and ARPANET, is designed to have no single point of failure. It works by allowing multiple routes between any two destinations on the network. This enables users to communicate as the packets “route around” damage even when nodes in between them fail.

Microservices architecture can also reduce the risk of potential SPOFs, in that this type of structure distributes the functionality of a system in many places. This prevents the entire system from failing when a part of it stops working.

Network protocols intended to avoid single points of failure include:

  • Intermediate System to Intermediate System
  • Open Shortest Path First
  • Shortest Path Bridging

Threat Protection and Load Balancer Single Point of Failure

Almost any tool can be a SPOF hazard, including security tools. Advanced threat protection tools such as web application firewalls (WAF), load balancers, intrusion prevention systems (IPS), and advanced threat protection (ATP) solutions are at risk during link or NIC failure, during power failures, or when they either block good traffic or pass bad traffic. During these times they are vulnerable to both common threats such as brute force attacks and more complex threats such as cross-site request forgery or XML external entity (XXE) attacks.

Because even these security tools can fail to protect the network, redundant security measures are essential. There are ways to configure WAF security architecture that minimize the frequency and effectiveness of various attacks and avoid single points of failure. For example, although basic secure single-tier or two-tier web application architectures are useful during project development, they introduce a SPOF.

Instead, a multi-tier or N-tier architecture offers compartmentalization, separating different application components according to their functions into multiple tiers. With each tier running on a different system, there is no single point of failure. In this sense, multiple, properly configured load balancers can be a single point of failure solution rather than a source of the problem.

How Does Avi’s Platform Help Eliminate Single Points of Failure?

The Avi platform's load balancing capabilities keep systems online reliably and reduce the chances of a single point of failure by automatically redistributing traffic, instantiating virtual services in a self-healing manner when one fails, and handling workload additions or moves. These solutions can be configured for high availability load balancing in various modes, as well.

Learn more about how the Avi Networks platform helps reduce risk from SPOFs here.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Subnet Mask


Subnet Mask Definition

Every device has an IP address with two pieces: the client or host address and the server or network address. IP addresses are either configured by a DHCP server or manually configured (static IP addresses). The subnet mask splits the IP address into the host and network addresses, thereby defining which part of the IP address belongs to the device and which part belongs to the network.

The device called a gateway or default gateway connects local devices to other networks. This means that when a local device wants to send information to a device at an IP address on another network, it first sends its packets to the gateway, which then forwards the data on to its destination outside of the local network.

Diagram depicts a subnet mask architecture.
FAQs

What is Subnet Mask?

A subnet mask is a 32-bit number created by setting host bits to all 0s and setting network bits to all 1s. In this way, the subnet mask separates the IP address into the network and host addresses.

Within each subnet, the highest host value (for example, the “255” address in a /24 network) is reserved as the broadcast address, and the lowest (the “0” address) identifies the network itself. Neither can be assigned to hosts, as they are reserved for these special purposes.

The IP address, subnet mask and gateway or router comprise an underlying structure—the Internet Protocol—that most networks use to facilitate inter-device communication.

When organizations need additional subnetworking, subnetting divides the host element of the IP address further into subnets. The goal of subnet masks is simply to enable the subnetting process. The term “mask” is applied because the subnet mask essentially uses its own 32-bit number to mask the IP address.

IP Address and Subnet Mask

A 32-bit IP address uniquely identifies a single device on an IP network. The subnet mask divides these 32 binary bits into the host and network sections, and they are also broken into four 8-bit octets.

Because binary is challenging to read, each octet is converted to decimal. This produces the characteristic dotted decimal format for IP addresses—for example, 172.16.254.1. The range of values in decimal is 0 to 255 because that represents 00000000 to 11111111 in binary.
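
The masking itself is a bitwise AND of the address and the mask. This sketch does the arithmetic by hand for 172.16.254.1 with a 255.255.255.0 mask:

    def to_int(dotted):
        # Pack four octets into one 32-bit integer.
        a, b, c, d = (int(octet) for octet in dotted.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def to_dotted(n):
        return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

    ip, mask = to_int("172.16.254.1"), to_int("255.255.255.0")
    print(to_dotted(ip & mask))                 # network part: 172.16.254.0
    print(to_dotted(ip & ~mask & 0xFFFFFFFF))   # host part: 0.0.0.1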

IP Address Classes and Subnet Masks

Since the internet must accommodate networks of all sizes, an addressing scheme for a range of networks exists based on how the octets in an IP address are broken down. The high-order, or left-most, bits of any given IP address determine which of the five classes of networks, A to E, the address falls within.

(Class D networks are reserved for multicasting, and Class E networks are not used on the internet because they are reserved for research by the Internet Engineering Task Force (IETF).)

A Class A subnet mask reflects the network portion in the first octet and leaves octets 2, 3, and 4 for the network manager to divide into hosts and subnets as needed. Class A is for networks with more than 65,536 hosts.

A Class B subnet mask claims the first two octets for the network, leaving the remaining part of the address, the 16 bits of octets 3 and 4, for the subnet and host part. Class B is for networks with 256 to 65,534 hosts.

In a Class C subnet mask, the network portion is the first three octets, with the hosts and subnets in just the remaining 8 bits of octet 4. Class C is for smaller networks with up to 254 hosts.

Class A, B, and C networks have natural masks, or default subnet masks:

  • Class A: 255.0.0.0
  • Class B: 255.255.0.0
  • Class C: 255.255.255.0

You can determine the number and type of IP addresses any given local network requires based on its default subnet mask.

An example of a Class A IP address and subnet mask would be the Class A default subnet mask of 255.0.0.0 and an IP address of 10.20.12.2.

How Does Subnetting Work?

Subnetting is the technique for logically partitioning a single physical network into multiple smaller sub-networks or subnets.

Subnetting enables an organization to conceal network complexity and reduce network traffic by adding subnets without a new network number. When a single network number must be used across many segments of a local area network (LAN), subnetting is essential.

The benefits of subnetting include:

  • Reducing broadcast volume and thus network traffic
  • Enabling work from home
  • Allowing organizations to surpass LAN constraints such as maximum number of hosts

Network Addressing

The standard modern network prefix, used for both IPv6 and IPv4, is Classless Inter-Domain Routing (CIDR) notation. In CIDR notation, the number of bits in the network prefix is appended to the address after a forward slash (/) separator; for IPv4, this representation is also called a network mask. CIDR is the sole standards-based format in IPv6 to denote routing or network prefixes.

Since the advent of CIDR, assigning an IP address to a network interface requires two parameters: the address and a subnet mask. Subnetting increases routing complexity, because there must be a separate entry in each connected router's tables to represent each locally connected subnet.

What Is a Subnet Mask Calculator?

Some know how to calculate subnet masks by hand, but most use subnet mask calculators. There are several types of network subnet calculators. Some cover a wider range of functions and have greater scope, while others have specific utilities. These tools may provide information such as IP range, IP address, subnet mask, and network address.

Here are some of the most common varieties of IP subnet mask calculator:

  • An IPv6 IP Subnet Calculator maps hierarchical subnets.
  • An IPv4/IPv6 Calculator/Converter is an IP mask calculator that supports IPv6 alternative and condensed formats. This network subnet calculator may also allow you to convert IP numbers from IPv4 to IPv6.
  • An IPv4 CIDR Calculator is a subnet mask adjustment and hex conversion tool.
  • An IPv4 Wildcard Calculator reveals which portions of an IP address are available for examination by calculating the IP address wildcard mask.
  • A HEX Subnet Calculator calculates the first and last subnet addresses, including the hexadecimal notations of multicast addresses.
  • A simple IP Subnet Mask Calculator determines the smallest available corresponding subnet and subnet mask.
  • A Subnet Range/Address Range Calculator provides start and end addresses.

What Does IP Mask Mean?

Typically, although the phrase “subnet mask” is preferred, you might use “IP/Mask” as a shorthand to define both the IP address and subnet mask at once. In this notation, the IP address is followed by the number of bits in the mask. For example:

10.0.1.1/24

216.202.192.66/22

These are equivalent to:

IP address: 10.0.1.1 with a subnet mask of 255.255.255.0

IP address: 216.202.192.66 with a subnet mask of 255.255.252.0

However, you do not mask the IP address; you mask the subnet.
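
Python's standard ipaddress module can confirm the equivalence between the slash notation and the dotted mask:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.1.1/24")
    print(iface.netmask)     # 255.255.255.0

    iface = ipaddress.ip_interface("216.202.192.66/22")
    print(iface.netmask)     # 255.255.252.0
    print(iface.network)     # 216.202.192.0/22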

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

SSL Security


SSL Security Definition

Secure Sockets Layer (SSL) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser, or a mail server and a mail client (e.g., Outlook). SSL security safeguards sensitive data, such as credit card numbers and financial information, from capture or modification as two systems send and receive it, preventing unauthorized users from accessing, reading, or modifying any personal information.

The two systems transferring the information might both be servers, as when an application accesses sensitive information. The systems might also be a client and a server, as when a user buys things on an e-commerce website through their web browser.

To protect the sensitive data in transmission, SSL security encrypts the information using algorithms, rendering it unreadable during the transfer between sites, systems, and/or users. Various versions of SSL security protocols are in widespread use in applications such as email, chatting and instant messaging, voice over IP (VoIP), and web browsing.

Diagram depicts SSL security from application clients and end users to web servers through public key exchange.
SSL Security FAQs

What is SSL Security?

SSL is a security protocol that determines how to encrypt data using specific algorithms. The Secure Sockets Layer (SSL) protocol assesses both the data to transmit and the link, and determines encryption variables for both.

SSL security technology establishes encrypted links between clients and servers. For example, it scrambles sensitive information transferred between a server, often a website, and a client, often a browser or mail client.

The default mode of communication between web servers and browsers allows data to be sent in plain text. This leaves users vulnerable to hackers who can “see” information they intercept. SSL encrypts sensitive details such as login credentials, social security numbers, and bank information so that unauthorized users cannot interpret and use the data, even if they see it.

The lock icon users see on SSL-secured websites and the “https” address indicate that a secure connection is present. Sites with a green address bar have an Extended Validation SSL-secured website. These visual signals are sometimes called EV indicators.

Although any web browser is able to use the SSL protocol to interact with secured web servers, both the server and the browser must have an SSL certificate to establish a secure connection.

What is an SSL Security Certificate?

An SSL security certificate identifies the certificate/website owner in its subject. SSL security certificates also contain a pair of public and private cryptographic keys used to establish an encrypted connection.

There are several steps to getting an SSL or TLS certificate:

  • Create a Certificate Signing Request (CSR) on your server to generate a public key and a private key (see the sketch after this list).
  • Send the CSR data file to the SSL certificate issuer, called a Certificate Authority (CA).
  • The CSR contains the public key, which the CA uses along with the rest of the CSR data file to create a data structure that matches your private key. The CA never sees the private key, nor is the key compromised.
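
As a hedged sketch of the first step, here is how a key pair and CSR might be generated with the widely used Python cryptography library (the common name is a placeholder; real CSRs usually carry more subject fields):

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the key pair; the private key never leaves your server.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a CSR carrying the public key and subject details.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # The PEM output is what you send to the CA; the private key is not in it.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())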

The certificate issuer gives you the SSL certificate. You install both the SSL certificate and an intermediate certificate that establishes its credibility on your server. Depending on the server and other factors, how you install and test the SSL certificate will vary.

When your browser connects to the SSL certificate on your server, the SSL (or TLS) protocol begins to encrypt any transferred information.

SSL operates directly on top of the transmission control protocol (TCP), serving as a safety zone that provides a secure connection while leaving the higher protocol layers unchanged. This way, the other protocol layers function normally above the SSL layer.

Is SSL Security Safe from Man in the Middle Attacks?

SSL does not conceal everything: in a man in the middle scenario, hackers may still see how much data is being transferred and which IP and port are connected. However, they cannot intercept any of the information itself, and although they may be able to terminate the connection, it will be clear to both the user and the server that a third party broke the connection.

What is a Certificate Authority (CA)?

A certificate authority (CA) is a trusted entity that issues and manages SSL certificates. As part of the public key infrastructure (PKI), the CA also manages public keys that are used throughout a public network for secure communication. Along with the registration authority (RA), the CA also verifies the information digital certificate requesters provide in support of their requests. Once they verify the information, the CA can then issue a certificate.

Obviously CAs play an important role in encryption, but they can also augment security by authenticating website owner identities via their SSL certificates. There are three authentication categories for certificates based on the authentication level: domain validation certificates, organization validation certificates, and extended validation certificates.

Domain validation certificates require users to prove control over the domain name alone and represent the most basic level of SSL security. Organization validation certificates represent the next level; to be issued such a certificate, users must prove both control over the domain name and legal accountability for, and ownership of, the business.

Extended validation (EV) certificates demand the most from users, including everything the other certificates require, plus additional verification steps. This highest level of SSL security helps protect against phishing and other security issues.

Can You Explain How SSL Protocol is Used for Secure Transactions?

When a browser tries to access a website protected by SSL security, the web server and browser use a process called an SSL handshake to establish an SSL connection. This SSL handshake process happens instantaneously and is invisible to the user.

During the SSL handshake, the website and browser connect securely. The browser demands that the server identify itself so it can begin to ensure the connection is authenticated and encrypted. The server responds by sending the browser a copy of its SSL certificate and its public key.

The browser compares the certificate root to a list of trusted CAs to determine whether it can trust the certificate and whether the website is secure. If the certificate is not revoked or expired and has a valid common name for the website it is connecting to, the browser will use the server's public key to create, encrypt, and return a symmetric session key.

The server then uses its private key to decrypt the symmetric session key. It encrypts and sends an acknowledgement using the session key, beginning the encrypted session. Now, the browser and server can use the session key to encrypt all data transmitted. This is why the handshake process is at the heart of how SSL security works.
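
This handshake is what Python's standard ssl module performs when wrapping a socket. The sketch below connects to a hypothetical host and prints the negotiated details:

    import socket
    import ssl

    hostname = "www.example.com"  # hypothetical host
    context = ssl.create_default_context()  # trusted CAs, hostname checking

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket performs the full handshake described above.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())   # e.g. 'TLSv1.3'
            print(tls.cipher())    # negotiated cipher suite
            print(tls.getpeercert()["subject"])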

What’s More Secure – SSL or HTTPS?

This is a common question, but it compares two parts of the same mechanism: a website's URL reads https:// (Hyper Text Transfer Protocol Secure) precisely when an SSL certificate secures it. By clicking on the lock symbol on the browser bar, you can see the details of the certificate, including the corporate name of the website owner and the issuing authority.

According to Google's Webmaster best practices, SSL security and HTTPS addresses should be used everywhere on the web. Since 2014, Google has rewarded SSL-secured websites with better search rankings.

What is the Difference Between TLS and SSL Security?

Transport Layer Security (TLS) is actually just an improved successor version of SSL. The TLS protocol works very similarly to SSL, including protecting data during transfer using encryption. In fact, the terms TLS and SSL are often used interchangeably in the industry, although the name SSL is still widely used.

The most recent version is TLS 1.3, which is defined in RFC 8446 from the Internet Engineering Task Force (IETF) standards. In terms of which SSL/TLS version is secure, both SSL and its TLS successor can encrypt sensitive data such as credit card numbers, passwords, usernames, and other private information.

What Causes SSL Security Errors?

After initiating the SSL handshake and establishing the TCP connection, the server sends the user its certificate along with a number of specifications. Those details include which version of SSL/TLS to use, as well as which encryption methods.

When the user attempts to access a page with security issues, an SSL connection error occurs: continuing would pose a security risk, so for the user's own protection, access is interrupted.

SSL security errors can take a number of forms, sometimes depending on the browser. In some cases, the https:// of the page may be highlighted, while in others users might see a warning about a connection that is not private.

When setting up SSL security, if an admin misconfigures the system and renders it less secure, there is a danger of poor performance, reduced speed, failed transactions, and data loss. Ultimately, SSL security is only as strong as the cipher configuration that governs how compatibly clients can connect to specific applications.

Configuring ciphers is simpler in systems that allow admins to enable and deactivate specific ciphers and reorder them by dragging and dropping. Insights into proper security configuration and complex application delivery are also essential to better performance.

The right software-based load balancing platform can help prevent errors in SSL security configuration by revealing granular insights into SSL traffic flow. These insights help expose risks such as the POODLE vulnerability and DDoS attacks, and provide more visibility into what visitors are using, enabling further risk mitigation in real-time.

What is SSL Secure Shopping Online (Ecommerce)?

An SSL certificate shows up in a website's address bar: the URL reads https:// rather than http://. Some sites in some browsers also show a padlock icon in the address bar to communicate that the connection is encrypted for SSL secure shopping.

Finally, those sites displaying a company name in the address bar in green have an Extended Validation SSL certificate for the most secure shopping experience. However, the mere presence of the padlock icon doesn’t always mean total security, due to data scooping by the NSA, other countries, and private companies, not to mention the Heartbleed bug leaking session data and courts ordering websites to reveal their SSL keys. Moreover, there are multiple vectors to attacking SSL and TLS.

Today the greatest weakness is the SSL private key itself, which is used to encrypt every session. If an attacker acquires that key, even much later, they can decrypt any past transactions that were encrypted using it.

There is a simple solution that both SSL and TLS support: Perfect Forward Secrecy. PFS generates a one-time “ephemeral” key to encrypt a single session and then discards it, instead of using the SSL certificate (and key) for encrypting a client’s connection directly.

PFS demands specific combinations of SSL settings, particularly the ephemeral Diffie-Hellman cipher suite, which it deploys for the key exchange. Most current web servers and OpenSSL support PFS and the Diffie-Hellman ciphers. Servers and infrastructure that don’t can still benefit from PFS thanks to newer, software-based load balancers.

Load Balancing and SSL

A load balancer often decrypts SSL traffic to improve performance. This works because decrypting at the load balancer saves the web servers from the extra CPU cycles and power it takes to decrypt data.

This process of decrypting traffic before passing it on is called SSL termination. Obviously, this means that the traffic between the web server and load balancer is no longer encrypted, increasing the risk of an attack, but keeping the load balancer in the same location reduces that risk.

A load balancer can also act as an SSL pass-through, allowing the web server to decrypt requests after passing them on. While this does tax the server’s power more, it provides extra security.

Can Avi’s Platform Support an SSL Security Policy?

Avi supports terminating client SSL and TLS connections at the virtual service. This requires Avi to send a certificate to clients that authenticates the site and establishes secure communications. Avi's platform can also generate self-signed certificates for testing or safe environments. You can even import an existing certificate and its corresponding private key into Avi directly, such as from another server or load balancer.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Server Overload


Server Overload Definition

Server overload happens when conditions cause a server to exhaust its resources so that it fails to handle incoming requests. For example, an overloaded machine will no longer respond to requests from its email and web server processes as it fails to process them, and those applications will no longer function for users.

Diagram depicts a server overload 503 service unavailable response code where web servers are unresponsive to requests from application clients / end users over the internet.
FAQs

What is Server Overload?

Several factors help a server handle its load: hard drive speed, memory, and processor speed. Virtual memory or hard drive space and bus speeds may also affect how the server handles the load, but these are less often implicated in server overload.

Various conditions might create server overload. In many cases operations are using too much bandwidth, and in other situations the system uses too much RAM or runs out of processor power.

Just like the Transportation Security Administration (TSA) plans for a certain number of travelers at each airport, your server is designed to handle certain levels of traffic. When it is overloaded at any given point, it responds too slowly or not at all, and that is reflected in load speeds for websites and user experience with applications and tools, for example.

Why is my Server Overloaded?

There are several common causes of server overload:

Sudden natural traffic spikes. If too many users attempt to use a site at once, it can crash a server, or cause server overload. For example, on the first day of an online sale, the release of an updated version to a game server, or a new web service rollout, this kind of server overload error is common.

Unavailable servers. At times, one server becomes unavailable due to sudden malfunction, hacking, or even planned maintenance. At these times, the backup server handles all extra traffic, and can easily experience server overload.

Malware such as worms and viruses. When a worm or virus infects enough computers or browsers, this type of malware can disrupt normal operations by causing abnormally high server traffic and sudden network spikes. This results in web server overload.

DoS or DDoS denial of service attacks. Hackers launch denial-of-service (DoS) attacks and distributed denial-of-service (DDoS) attacks to render a server unavailable to intended users. These malicious actors flood the network with false requests and cause the server to deny real requests, crashing it.

Network throughput problems can also be confused with server overload. Depending on whether you are running a WAN-based or a LAN-based server, network throughput from an ISP could be the source of a bottleneck rather than server overload.

How to Fix Server Overload

The best practice here is to guard against server overload in the first place, so watch for and investigate these signs of server overload as you conduct normal monitoring of server performance:

Server overload error codes. The server returns an HTTP error code, such as 503. A 503 error code means the service is temporarily unavailable due to server overload. A 504 gateway timeout error signals an overload because one server is taking too long to respond to another. Codes 408, 500, and 502 may also signal server overload conditions.

Delayed requests. The server delays requests, and responses take longer than usual.

Reset or denied TCP connections. Before users see any content returned, the server resets or denies TCP connections.

Partial content. The server returns just a portion of the content the user requests. Sometimes a bug is to blame, but server overload may also be the problem.

A few basic best practices can help prevent server overload. Administrators might designate separate servers or backup servers to handle files of different sizes, use web application firewalls to block unwanted incoming traffic, and/or use site caching to deliver content via alternate sources. However, load balancing is the most effective server overload solution.

How to Prevent Server Overload with Load Balancing

Load balancing distributes network traffic across an organization’s servers as a group, easing the flow of incoming traffic to each server. The load balancer sits between the servers and the client and uses an algorithm to route requests.

If one server fails, the load balancer avoids system disruption by redirecting traffic to other functional servers. Software load balancing also offers real-time scalability, programmability, flexibility, enhanced application security, on-demand deployability, and reduced cost compared to hardware load balancing solutions.

Does Avi Offer a Server Overload Solution?

The Avi platform enables multi-cloud application services including load balancing. It also offers on-demand autoscaling, application security, and a web application firewall. The Avi Controller has out-of-the-box integrations with data center and public cloud environments. The Avi platform is built to take unique advantage of closed-loop analytics and health scores from the load balancers based on real-time traffic conditions and application performance. So, in addition to autoscaling load balancers, Avi can trigger the autoscaling of backend application servers to mitigate the problem of server overload. To help keep your applications secure, available, and responsive, learn more about how Avi prevents server overload.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

SQL Injection Attack


SQL Injection Attack Definition

Structured Query Language (SQL) has been the standard for handling relational database management systems (DBMS) for years. Since it has become common for internet web applications and SQL databases to be connected, SQL injection attacks on data-driven web apps, also simply called SQLi attacks, have been a serious problem.

A SQLi attack happens when an attacker exploits a vulnerability in the web app’s SQL implementation by submitting a malicious SQL statement via a fillable field. In other words, the attacker will add code to a field to dump or alter data or access the backend.

A successful malicious SQL statement could give an attacker administrator access to a database, allowing them to select data such as employee ID/password combinations or customer records, and to delete, modify, or dump any data in the database they choose. The right SQL injection attack can actually allow access to a hosting machine's operating system and other network resources, depending on the nature of the SQL database.

Diagram depicts the general process of a SQL Injection Attack involving a Web API Server and a SQL Database Server.
FAQs

What is SQL Injection Attack?

SQL injection is a common attack vector that allows users with malicious SQL code to access hidden information by manipulating the backend of databases. This data may include sensitive business information, private customer details, or user lists. A successful SQL injection can result in deletion of entire databases, unauthorized use of sensitive data, and unintended granting of administrative rights to a database.

How Does SQL Injection Work?

The types of SQL injection attacks vary depending on the kind of database engine. The SQLi attack works on dynamic SQL statements, which are generated at run time using a URI query string or web form.

For example, a simple web application with a login form will accept a user email address and password. It will then submit that data to a PHP file. There is a “remember me” checkbox in most forms like this, indicating that the data from the login session will be stored in a cookie.

Depending on how the statement for checking user ID is written in the backend, it may or may not be sanitized. This example statement is not sanitized, and is vulnerable:

SELECT * FROM users WHERE email = $_POST['email'] AND password = md5($_POST['password']);

This is because although the password is encrypted, the code directly uses the values of the $_POST[] array.

If the administrator logs in with “admin@company.com” and “password”, the statement becomes:

SELECT * FROM users WHERE email = 'admin@company.com' AND password = md5('password');

An SQLi attacker simply needs to comment out the password portion and add a condition that will always be true, such as “1 = 1”.

This creates a dynamic statement that ends with a condition that will always be true, defeating the security measures in place:

SELECT * FROM users WHERE email = 'xxx@xxx.xxx' OR 1 = 1 LIMIT 1 -- ' AND password = md5('password');

Popular SQL Injection Attacks

There are several types of common SQL injection attacks. Typically, popular SQL injection attacks include classic SQLi, also called in-band SQLi; blind SQLi, also called inference SQLi; and out-of-band (OOB) SQLi, also called DBMS-specific SQLi.

Classic or basic SQL injection attacks are the simplest and most frequently used form of SQLi. These classic or simple SQL injection attacks may occur when users are permitted to submit a SQL statement to a SQL database. There are two main varieties: UNION-based attacks and error-based SQLi.

UNION-based attacks extract precise data by determining the structure of the database using the SQL UNION operator. Error-based SQLi reveals either specific information or structural details about the database by creating a related SQL error with a web app.

Blind SQLi attacks are similar to classic attacks in that they may be error-based, UNION-based, or another familiar variety. However, the major difference with blind or inference SQL injection attacks is that the attacker doesn’t get specific text or an error message that demonstrates their attack’s success or failure.

The blind attack still does cause a web app to behave differently, and the attacker can then infer the structure of the database from how it responds to the SQL query. They can then use those inferences to build a copy of the database one character at a time—although this highlights the impractical nature of a blind SQL injection attack for many situations involving very large databases.

There are two types of blind SQL injections: boolean and time-based.

In a boolean-based blind SQL injection attack, the attacker queries the database and the application returns a result. Whether the query is true or false determines whether the information in the HTTP response remains unchanged or is modified. This in turn allows the attacker to determine whether the query generated a true or false result.

In a time-based attack, the attacker sends a SQL query that forces the database to wait before responding. The length of the response time shows the attacker whether the query is true or false.
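
As a hedged sketch of how such a probe might be timed (the endpoint and parameter are hypothetical, the IF/SLEEP syntax is MySQL's, and the third-party requests library is assumed):

    import time
    import requests  # third-party HTTP client, assumed installed

    # Hypothetical vulnerable endpoint; requests URL-encodes the payload.
    payload = "1 AND IF(1 = 1, SLEEP(5), 0)"

    start = time.monotonic()
    requests.get("https://example.com/item", params={"id": payload}, timeout=15)
    elapsed = time.monotonic() - start

    # A response delayed by roughly five seconds tells the attacker that
    # the injected condition evaluated true.
    print(f"response took {elapsed:.1f}s")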

Although it is a less common type of attack, an out-of-band SQLi is still a risk. This kind of attack involves submitting a DNS or HTTP query that contains a SQL statement to the database. The success of this kind of attack depends on certain features of a SQL database being enabled.

Finally, a compound SQLi attack refers to using standard SQL injection attack techniques in tandem with other cyberattacks. For example, using SQLi with denial of service, cross-site scripting, insufficient authentication, or DNS hijacking attacks allows hackers new ways to get around security measures and avoid detection.

Who is At Risk for SQL Injection Attack?

Knowing how to prevent SQL injection attacks starts with understanding organizational risk. There is really one basic criterion for SQL injection attack risk: having a website that is connected to and interacts with a SQL database. However, data-driven web applications are at special risk for this kind of attack due to the high amounts of data they collect and store.

A data-driven web app is any application that modifies its behavior based on user data. Examples of this sort of data-driven app include:

  • guest books;
  • notification/reminder apps;
  • report-generating apps;
  • search apps;
  • social media platforms;
  • survey/quiz apps; and
  • workflow apps.

SQL Injection Attack Examples

Some of the biggest SQL injection attacks can cause extensive results, including:

  • copying or deletion of portions of, or the entire, database, including sensitive data such as health records or credit card information;
  • modification of the database, including adding, changing, or deleting records;
  • impersonated users, spoofed login credentials, or an entirely bypassed authentication process;
  • execution of OS commands that allow access to other network assets; and
  • an advanced SQL injection attack may take the target DBMS or web app offline completely.

There are several recent SQL injection attack examples that illustrate this kind of risk. In 2018, a vulnerability (since patched) that gave attackers elevated shell privileges on certain vulnerable systems was found in Cisco's Prime License Manager. In 2019, malicious SQL commands targeted the Fortnite website, allowing unauthorized access to user accounts.

How to Prevent SQL Injection Attacks

The Open Web Application Security Project (OWASP) provides an overview of how to avoid SQL injection attacks. While there are various SQL injection attack tools on the market, there is no substitute for implementing best practices for preventing these attacks. Here are some of the top OWASP strategies for preventing SQLi attacks.

Prepared statements/parameterization

All database queries should be written as prepared statements with parameterized queries. This means that all SQL code for the query will be defined in advance, so the database can distinguish between user inputs and code—and will treat any malicious SQL query as data, not malicious code.
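
A minimal sketch with Python's built-in sqlite3 module (the table and values are illustrative). The placeholder ensures the submitted value is always bound as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin@company.com', 'x')")

    payload = "xxx@xxx.xxx' OR 1 = 1 --"  # classic injection attempt

    # The ? placeholder binds the payload as a literal string, so the
    # OR 1 = 1 clause is never parsed as SQL and no rows come back.
    rows = conn.execute(
        "SELECT * FROM users WHERE email = ?", (payload,)
    ).fetchall()
    print(rows)  # []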

Stored procedures

Stored procedures are similar to prepared statements in that both define valid SQL statements in advance. However, stored procedures remain in the database, while the SQL code of prepared statements is stored in the web app. Stored procedures are not as secure as prepared statements, and can be unsafe when dynamic SQL generation takes place inside them.

Input validation

Input validation or sanitation attempts to control the kind of user input the system receives. When there is no way to prepare everything in advance and some SQL code is a core component of user input, it is critical to allow only valid SQL statements.

Only the most essential statements should be included on such a list to avoid unvalidated statements in a query. Validate not only fields where users type in data, but also fields with buttons or drop-down menus. Refer to the OWASP input validation cheat sheet for details.
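
For example, when a sort column must come from user input, an allow list keeps anything unexpected out of the query (the names are illustrative):

    ALLOWED_SORT_COLUMNS = {"name", "created_at", "price"}  # illustrative

    def safe_sort_column(user_value):
        # Only values on the allow list ever reach the SQL text;
        # everything else falls back to a known-safe default.
        return user_value if user_value in ALLOWED_SORT_COLUMNS else "name"

    print(safe_sort_column("price"))            # price
    print(safe_sort_column("1; DROP TABLE x"))  # name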

Web application firewall (WAF)

A web application firewall (WAF) is an important part of a larger security solution that detects SQLi along with other threats. WAFs typically do this in part by relying on detailed lists of signatures that are constantly updated, so they can surgically excise threats, including malicious SQL queries.

Escape user-supplied input

This is a tactic that tries to target certain characters in SQL statements to prevent attacks. Specifically, an escape function adds a neutralizing character, such as a backslash (“\”), to a command to prevent malicious versions from being executed. This is not a very reliable method on its own.

Limit privileges

OWASP recommends limiting application account privileges for your database in particular. For example, if a site only needs SELECT statements from a database to function, its database connection credentials should never have additional privileges such as DELETE, INSERT, or UPDATE privileges.

Granting admin access in these cases may make operations run more smoothly, but it also leaves the database vulnerable. Various user accounts for web apps can also have access to only specific fields, so that any one attacker has only a limited view of the data.

Update and patch regularly

Always update and make sure you have the latest security patches from every vendor for all web application software components, including database server software libraries, frameworks, plug-ins, and web server software.

Smarter configuration

Configure error reporting and handling properly in the code and on the web server. You want to avoid sending wordy database error messages containing technical details to the client web browser that can provide leverage for attack.

Does VMware NSX Advanced Load Balancer offer an SQL Injection Attack Defense?

The VMware NSX Advanced Load Balancer WAF protects web applications from OWASP Top 10 threats such as SQL Injection Attacks and Cross-site Scripting (XSS) and other common security vulnerabilities while offering customizable rule sets for each application.

The architecture of the VMware NSX Advanced Load Balancer powers the platform's WAF, which gains real-time application security insights from its strategic location in the application traffic path. This architectural advantage and the platform's multi-cloud capabilities extend to WAF network security. Learn more about how the platform's WAF helps to prevent SQL injection attacks.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Session Persistence


Session Persistence Definition

Session Persistence (sometimes called sticky sessions) involves directing a user’s requests to one application or backend web server for the duration of a “session.” The session is the time it takes a user to complete a transaction or task that might include multiple requests.

As users browse high-traffic websites, they each request information from various backend servers, often using standard HTTP. This means each server is supporting many users at once, and performance can be a problem.

A load balancer can sit in front of the site’s server group to manage where user requests go. However, standard load balancing algorithms can direct user requests to unique backend or application servers each time.

Some load balancers can instead achieve session persistence. By directing requests to the same server—for example, where information is already cached or where information likely to be requested based on existing patterns is already being stored—the load balancer ensures the most efficient possible performance.

Diagram depicts a comparison of the relationship from application clients to web servers in regards to a load balancer, with and without session persistence.
FAQs

What is Session Persistence?

Session persistence ensures that a client will remain connected to the same server throughout a session or period of time. Because load balancing may, by default, send users to unique servers each time they connect, this can mean that complicated or repeated requests are slowed down.

Session persistence ensures that, at least for the duration of the session or amount of time, the client will reconnect with the same server. This is especially important when servers maintain session information locally.

How Does Session Persistence Work?

Load balancer session persistence boosts performance by configuring a backend server to work efficiently with user requests. This kind of load balancer sits between users and the website’s server group and implements logic that connects specific servers to user sessions for as long as is needed.

For example, the backend server is likely to save steps as it fulfills larger requests by caching data about user requests. It will also anticipate which additional data a user might need and cache that as well to shave time off future requests.

This is important because servers break down many client requests that seem simple—such as downloading large files—into multiple request-response transactions. If a single server has already anticipated some requests and cached data in response to them, it will perform more quickly and efficiently.

In other cases, session persistence provides session context. For example, a user might need to buy something, upgrade an account, or fill out a form. Perhaps they initially place something in an online shopping cart, establishing a session. These kinds of transactions demand that users take multiple steps, and as the user exchanges data with the server, it has to store some of that data to proceed. All subsequent requests will go to the same server to save time and resources.

Session persistence enables all of these kinds of exchanges between client and server to happen more smoothly and quickly.

What is a Sticky Session?

Sticky sessions created through load balancer persistence can help optimize network resource usage and improve user experience.

Session persistence, also called session stickiness, results in a “sticky session” between a user and a particular server. In this process, a load balancer uses logic to find an affinity between a specific network server and a client for the length of an entire session, defined by the amount of time a unique IP address stays on the site.

A load balancer creates sticky sessions by either tracking a user’s IP details or using a cookie to assign that user an identifying attribute. This allows the load balancer to use the tracking ID to route all of that user’s requests to a specific server throughout the session.
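
As a rough illustration of both approaches (the server names and hashing scheme below are illustrative, not any particular vendor’s implementation), this sketch keys the routing decision on a tracking cookie when one exists and falls back to the client’s IP otherwise:

```python
import hashlib
from typing import Optional

# Hypothetical backend pool for illustration.
SERVERS = ["app-server-1", "app-server-2", "app-server-3"]

def pick_server(client_ip: str, session_cookie: Optional[str]) -> str:
    """Route a request to a consistent backend for the whole session."""
    if session_cookie is not None:
        # Cookie-based persistence: the cookie carries the tracking ID
        # assigned on the user's first request.
        key = session_cookie
    else:
        # Source-IP persistence: fall back to the client's IP details.
        key = client_ip
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# The same client always hashes to the same backend, so the session "sticks".
assert pick_server("203.0.113.7", None) == pick_server("203.0.113.7", None)
```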

Compare Session Cookies vs Persistent Cookies

The persistent cookie vs session cookie comparison comes down to the difference between browser-length sessions and persistent sessions. The system creates a session cookie as a kind of session ID and stores it in that instance of the browser. Once the user ends the session and closes the browser, any associated session cookies are deleted (or at least this happens soon after the session times out).

The computer itself stores a persistent cookie, meaning it survives even between instances of closing and opening the browser. This is why you can reopen your browser and it will still remember your login credentials for your favorite sites, for example. The “stay logged in” option is typically stored in a persistent cookie on the user’s machine, enabling the server to verify you as an authentic user the next time you visit.
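
A small sketch of the difference, using Python’s standard http.cookies module (the cookie names and lifetime below are arbitrary): a cookie with no Max-Age or Expires attribute lives only for the browser session, while one with Max-Age is stored on disk:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Session cookie: no Max-Age/Expires, so the browser discards it
# when the session ends.
cookie["session_id"] = "abc123"

# Persistent cookie: Max-Age tells the browser to store it on disk,
# surviving browser restarts (here, for 30 days).
cookie["stay_logged_in"] = "token-xyz"
cookie["stay_logged_in"]["max-age"] = 60 * 60 * 24 * 30

print(cookie.output())
# Set-Cookie: session_id=abc123
# Set-Cookie: stay_logged_in=token-xyz; Max-Age=2592000
```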

How can Avi Networks Help?

If you’re wondering how to configure session persistence for web applications, Avi Networks presents a simple but powerful load balancing solution that enables session persistence. Use Avi to create persistence profiles for clients, including profiles based on a client’s IP details or on cookies.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Software Defined Architecture

<< Back to Technical Glossary

Software Defined Architecture Definition

Software defined architecture (SDA) provides a layer of virtualization between the software and its users, which connects users to a simple dashboard that masks the complex systems operating in the background. An SDA helps large cloud services offer Web scale application services for digital businesses that require high levels of agility and scalability.

Diagram depicts software defined architecture that provides a layer of virtualization between software and the end user to connect users to a simple dashboard that masks the complex systems operating in the background.
FAQs

What is a Software Defined Architecture?

Software defined architecture (SDA) gives large cloud services like Amazon and Netflix the ability to operate at Web scale — serving massive numbers of consumers while adapting to rapidly changing needs. SDA creates a layer of virtualization between the software and the user. This allows consumers to interact with a simple application interface while benefiting from complex systems that run hidden in the background.

Software defined data center architecture also makes it easier to change the software. This flexibility is how digital businesses can achieve maximum agility. They can quickly adjust infrastructure without the consumer noticing.

SDA follows advancements in software defined network architecture and software defined storage.

How Does SDA Work?

SDA is part of a family of terms like software defined networking (SDN) and software defined storage (SDS). The big difference is that the computing infrastructure in SDA is managed by intelligent software and not manually. It is virtualized and delivered as a service.

Software defined data center architecture applies to entire stacks of software. The virtualization creates a line between the internal implementation and what a consumer sees. This allows for changes or replacement of the infrastructure without impact on the consumer.

There are two application programming interfaces: an internal one that software producers use to make improvements, and an external one for consumers. The external interface remains simple while masking all the work happening internally.
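
A minimal sketch of this two-interface idea (all class names below are hypothetical): consumers program against a small, stable external API, while producers remain free to swap the internal implementation behind it:

```python
# Internal implementations; producers can change or replace these
# without consumers ever noticing.
class _PostgresCatalog:
    def lookup(self, item_id: str) -> dict:
        return {"id": item_id, "source": "postgres"}

class _CachedCatalog:
    def lookup(self, item_id: str) -> dict:
        return {"id": item_id, "source": "cache"}

class CatalogService:
    """External API: the only surface consumers ever see."""

    def __init__(self) -> None:
        self._backend = _PostgresCatalog()  # swappable internally

    def get_item(self, item_id: str) -> dict:
        return self._backend.lookup(item_id)

# Consumers call the stable external interface; swapping in
# _CachedCatalog later changes nothing from their point of view.
svc = CatalogService()
print(svc.get_item("42"))
```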

Benefits of Software Defined Architecture

Benefits of software defined architecture for cloud computing include the following:

• Virtualization of application services.
• A layer between consumers and the infrastructure running applications that allows for hidden management, monitoring, optimization and orchestration.
• Possibility of letting users design application programming interfaces to suit their needs without compromising the underlying applications.
• Improved network stability.
• Reduced provisioning time.
• Improved time to market.
• Scalable application delivery across hybrid environments in the public cloud.
• Reliable disaster recovery.
• Easier server health monitoring.

Software Defined Architecture for Applications in Digital Business

Gartner released a study titled “Software Defined Architecture for Applications in Digital Business.” It predicted the following:

• Web scale digital business solutions with adaptive, open and manageable application architecture will become essential for enterprises.
• Rigid legacy organizations will have to lead an agile evolution that embraces the flexibility of digital business software solutions.
• Service virtualization gateway technology will emerge as a key application infrastructure component for application software in digital business.

Does Avi offer Software Defined Architecture?

Yes. Avi uses a software-defined scale-out architecture that is 100% based on REST APIs. It delivers extensible application services, including load balancing, application security, and support for microservices and containers, on one platform across any environment. It provides elastic autoscaling, built-in analytics and full automation.

For more on the actual implementation of load balancing, security applications and web application firewalls, check out our Application Delivery How-To Videos.

Software Defined Networking

<< Back to Technical Glossary

Software Defined Networking Definition

Software Defined Networking (SDN) is an architecture that gives networks more programmability and flexibility by separating the control plane from the data plane. In cloud computing, software defined networks let users respond quickly to change. SDN management makes network configuration more efficient and improves network performance and monitoring.

Diagram depicts software defined networking architecture that gives networks more programmability and flexibility by separating the control plane from the data plane in application delivery from application servers to application end users.
FAQs

What is Software Defined Networking?

Software Defined Networking (SDN) enables directly programmable network control for applications and network services. Software defined network architecture decouples network control and forwarding functions from physical hardware such as routers and switches to create a more manageable and dynamic network infrastructure.

SDN architecture includes the following components:

SDN Application — Communicates its requirements for network resources and desired network behavior to the SDN controller through the northbound interface (NBI).

SDN Controller — Translates the requirements from the SDN application layer down to the SDN datapaths. It also provides the SDN applications with a central repository of network policies and a view of the network and its traffic.

SDN Datapath — Implements switches that move data packets on a network.

SDN API — Application programming interfaces (APIs) provide both open and proprietary communication between the SDN Controller and the routers of the network.
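
As a hedged sketch of how an application might use a northbound API (the endpoint URL and payload shape below are hypothetical, and the third-party requests package is assumed to be installed; real controllers such as OpenDaylight or ONOS define their own specific REST APIs):

```python
import requests

# Hypothetical northbound REST endpoint of an SDN controller.
CONTROLLER = "https://sdn-controller.example.com/api/flows"

# The application declares *what* it needs; the controller decides
# how to program the switches via its southbound interfaces.
flow_policy = {
    "match": {"dst_port": 443, "protocol": "tcp"},
    "action": {"forward_to": "edge-switch-pool"},
    "priority": 100,
}

response = requests.post(CONTROLLER, json=flow_policy, timeout=5)
response.raise_for_status()  # fail loudly if the controller rejects it
```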

How to Implement Software Defined Networking?

Implementing software defined networking without a clear purpose and plan is not advised.

The following tips will ensure a smooth network management process:

Define a use case — Be sure there is a real problem for SDN to solve. Focus on that one, clear issue with a use case. This will allow for measurable outcomes and lessons that can be applied elsewhere when fully implementing SDN.

Create a cross-functional team — Do not implement SDN in silos. A team with a diverse set of skills is needed for successful implementation. Collaboration is key.

Test first — Try a non-critical network area for initial SDN implementation before changing the entire network.

Review — Measure data to see if test outcomes meet goals. Be sure SDN is solving a problem before implementing it across the network.

How Does Software Defined Networking Work?

A software defined network uses a centralized SDN controller to deliver software-based network services. A network administrator can manage network policies from a central control plane without having to handle individual switches.

SDN architecture has three layers that communicate via northbound and southbound application programming interfaces (APIs). Applications can use a northbound interface to talk to the controller. Meanwhile, the controller and switches can use southbound interfaces to communicate.

The layers include:

Application layer — SDN applications communicate behaviors and needed resources with the SDN controller.

Control layer — Manages policies and traffic flow. The centralized controller manages data plane behavior.

Infrastructure layer — Consists of the physical switches in the network.

Benefits of Software Defined Networking

Software defined networking (SDN) offers the following benefits:

Control — Administrators have more control over traffic flow with the ability to change a network’s switch rules based on need. This flexibility is key for multi-tenant architecture in cloud computing.

Management — A centralized controller lets network administrators distribute policies through switches without having to configure individual devices.

Visibility — By monitoring traffic, the centralized controller can identify suspicious traffic and reroute packets.

Efficiency — Virtualization of services reduces reliance on costly hardware.

Does Avi Networks offer Software Defined Networking?

Yes. Built on software-defined principles, Avi extends L2-L3 network automation from SDN solutions to L4-L7 application services. Avi offers native integration with industry-leading SDN and network virtualization controllers such as Cisco APIC, VMware NSX, Nuage VSP, and Juniper Contrail. Avi delivers multi-cloud application services that include enterprise-grade load balancing, actionable application insights, and point-and-click security.

For more on the actual implementation of load balancing, security applications and web application firewalls, check out our Application Delivery How-To Videos.

SSL Termination

<< Back to Technical Glossary

SSL Termination Definition

SSL termination describes the transition point at which encrypted data traffic is decrypted (and return traffic re-encrypted). This happens at the server end of a secure socket layer (SSL) connection.

Diagram depicts SSL termination being performed at the secure socket layer (SSL) via a load balancer between application users and application web servers.
FAQs

What Is SSL Termination?

SSL termination is a process by which SSL-encrypted data traffic is decrypted (or offloaded). Servers with a secure socket layer (SSL) connection can simultaneously handle many connections or sessions. An SSL connection sends encrypted data between an end-user’s computer and web server by using a certificate for authentication. SSL termination helps speed the decryption process and reduces the processing burden on backend servers.

How Does SSL Termination Work?

SSL termination intercepts encrypted HTTPS traffic when a server receives data from a secure socket layer (SSL) connection in an SSL session. SSL termination or SSL offloading decrypts and verifies data on the load balancer instead of the application server. Spared the work of decrypting incoming connections, the server can prioritize other tasks, such as loading web pages. This helps increase server speed. SSL termination represents the end — or termination point — of an SSL connection.
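
The sketch below shows the core idea in miniature, using Python’s standard ssl module (the certificate paths, addresses, and single-connection flow are illustrative only; a production load balancer multiplexes many connections):

```python
import socket
import ssl

# Hypothetical certificate/key paths and backend address.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="lb-cert.pem", keyfile="lb-key.pem")

listener = socket.create_server(("0.0.0.0", 8443))

with context.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()    # TLS handshake happens here
    request = conn.recv(4096)             # already-decrypted bytes
    # Forward the now-plaintext request to a backend on the LAN.
    with socket.create_connection(("10.0.0.5", 8080)) as backend:
        backend.sendall(request)
        conn.sendall(backend.recv(4096))  # relay the response
    conn.close()
```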

What is SSL Termination Load Balancer?

SSL termination at the load balancer is desirable because decryption is resource- and CPU-intensive. Putting the decryption burden on the load balancer enables the server to spend processing power on application tasks, which helps improve performance. It also simplifies the management of SSL certificates.

Is SSL Termination Secure?

Secure socket layer (SSL) connections are important for sensitive data. One point to note is that after SSL termination, unencrypted traffic is sent between the load balancer and the backend server on the local area network. However, for security purposes, administrators can choose to re-encrypt the traffic at the load balancer before sending it to the servers.

SSL termination at the load balancer relieves web servers of the extra compute cycles needed to decrypt SSL traffic. The security risk of terminating at the load balancer is lessened when the load balancer is within the same data center as the web servers. Some load balancers also provide the ability to use a self-signed SSL certificate between the load balancer and the web servers. This provides a secure connection, but requires more compute power.

Can SSL Termination be Performed in Software?

With the advancement of Intel x86-based CPU technology, support for SSL on standard Intel hardware has increased dramatically. The use of Elliptic Curve Cryptography (ECC) keys, which have shorter key lengths than traditional RSA 2K keys, for SSL encryption has put software-based load balancers on x86 servers ahead in many cases.

The Advanced Encryption Standard New Instructions (AES-NI) set is now integrated into many processors. The purpose of the instruction set is to improve the speed, as well as the resistance to side-channel attacks, of applications performing encryption and decryption with the latest security standards. Another key reason to use software-based SSL termination is to decouple entirely from hardware, so that support for the latest security versions and bug fixes requires only a simple software version upgrade.
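
For a sense of the key-length difference, the sketch below uses the third-party cryptography package (assuming it is installed) to generate a traditional RSA 2K key and an ECC P-256 key:

```python
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# A traditional RSA 2K key: 2048-bit modulus.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# An ECC P-256 key offers comparable security at a much shorter
# key length, making handshakes cheaper on general-purpose CPUs.
ecc_key = ec.generate_private_key(ec.SECP256R1())

print(rsa_key.key_size)        # 2048
print(ecc_key.curve.key_size)  # 256
```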

Does Avi Offer SSL Termination?

Using the 100% software Avi platform as the endpoint for SSL enables it to deliver high performance in terms of SSL transactions per second (TPS), maintain full visibility into the traffic, and apply advanced traffic steering, application security via WAF, and acceleration features. Avi offers support for both RSA 2K and modern ECC keys for SSL. With the ability to scale a single virtual service horizontally (across multiple servers) as well as vertically on a single server (with more cores and higher processing power), Avi’s elastic load balancers support millions of SSL transactions per second and offer better scalability and price/performance than hardware load balancers.

For more on the actual implementation of load balancing, security applications and web application firewalls, check out our Application Delivery How-To Videos.
