Reverse Proxy Server


Reverse Proxy Server Definition

A reverse proxy server, sometimes called a reverse proxy web server and often a feature of a load balancing solution, stands between web servers and users. Unlike a forward proxy, which sits in front of users and guards their privacy, the reverse proxy sits in front of web servers and intercepts requests. In other words, a reverse proxy acts on behalf of the server, while a forward proxy acts for the client.

Popular web servers frequently use reverse-proxying functionality to shield application frameworks with more limited HTTP capabilities. Such frameworks may struggle to handle the full range of request formats permitted by HTTP(S) 1.x or HTTP(S) 2.x, may be vulnerable to hard-to-detect malformed requests, or may have limited ability to handle excessive loads. In these situations, a reverse proxy server could buffer incoming requests based on the shielded server's load, split one request into multiple requests and synthesize the responses, handle cookie or session data, or translate HTTPS requests from clients into plain HTTP requests to the backend, for example.

A reverse proxy server acts like a middleman, communicating with the users so the users never interact directly with the origin servers. It also balances client requests based on location and demand, and offers additional security.
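The middleman role can be sketched in a few lines of Python. This is an illustrative simplification, not any product's actual API: the backend addresses and function name are hypothetical, and a real proxy would also stream bodies, handle timeouts, and so on. The point is that the client only ever addresses the proxy, while an X-Forwarded-For header preserves the original client IP for the backend:

```python
import itertools

# Hypothetical backend pool; a real deployment would load this from config.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_pool = itertools.cycle(BACKENDS)

def proxy_request(client_ip, headers):
    """Choose a backend and prepare headers for the forwarded request.

    The origin server's address stays hidden from the client, while
    X-Forwarded-For records the real client IP for backend logging.
    """
    backend = next(_pool)
    fwd = dict(headers)
    prior = fwd.get("X-Forwarded-For")
    fwd["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return backend, fwd
```

Each call rotates to the next backend, which is also the basic load-balancing behavior described below.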

Diagram depicts the architecture of a load balancing solution with reverse proxy web server feature that helps balance client requests and maintain security.
FAQs

Here are a few frequently asked questions about reverse proxy servers:

Reverse Proxy Server vs Proxy Server

The simple difference between a forward proxy server and a reverse proxy server lies in position: a forward proxy sits in front of users and stops origin servers from communicating directly with those users, while a reverse proxy sits in front of web servers and stops users from communicating directly with the origin servers.

What Does a Reverse Proxy Server Do?

A reverse proxy ultimately forwards user/web browser requests to web servers. However, the reverse proxy server protects the web server’s identity. This type of proxy server also moves requests strategically on behalf of web servers, typically to help increase performance, security, and reliability.

Reverse Proxy Servers can:

  • Disguise the characteristics and existence of origin servers.
  • Make initiating takedowns and removing malware easier.
  • Carry TLS acceleration hardware, enabling them to perform TLS encryption and decryption on behalf of secure websites, offloading that work from origin servers.
  • Distribute the load of incoming requests across several servers, each supporting its own application area.
  • Function as web acceleration servers, caching dynamic content and static content, reducing load on origin servers.
  • Compress content, optimizing it and speeding loading times.
  • Perform multivariate testing and A/B testing without inserting JavaScript into pages.
  • Add basic HTTP access authentication to web servers that lack authentication.
  • “Spoon-feed” dynamically generated pages to clients bit by bit, even when the pages are produced all at once, allowing the generating program to finish and release server resources during the transfer.
  • Accept many incoming requests on a single public IP address and deliver them to multiple web servers within the local area network.

What are some Common Uses for Reverse Proxy Servers?

A common reverse proxy server example happens when a company has a large e-commerce website. It can’t handle its incoming traffic with just one server, so it uses a reverse proxy server to direct requests from its users to an available server within the pool. There are various methods to direct this traffic, such as round robin load balancing.

Another common use for a reverse proxy server is to cloak a site’s main server, protecting it from malicious attacks. Such a site can appear to be hosted across many servers, and if an attack succeeds, typically only the public-facing proxy servers go down, protecting the backend server.

What are the Benefits of a Reverse Proxy Server?

Benefits of reverse proxy servers include:

  • load balancing
  • global server load balancing (GSLB)
  • caching content and web acceleration for improved performance
  • more efficient and secure SSL encryption, and
  • protection from DDoS attacks and related security issues.

For many sites, but especially for high-volume websites, a single origin server will not be sufficient to handle all inbound site traffic. A reverse proxy server can handle numerous requests for the same site, distributing them to different servers in an available pool.

This distributes inbound traffic more evenly, or balances the load, among multiple servers so that no single web server becomes overloaded. Should one server fail completely, the reverse proxy server redirects its traffic to the remaining servers.

Global Server Load Balancing (GSLB) is load balancing distributed around the world via reverse proxy servers. With this kind of load balancing, requests to a website can be served from a location near the user. This shortens the distances that requests and responses need to travel, in turn reducing load times.

Similarly, a reverse proxy cache server can enhance performance by caching local content. This kind of caching improves speed and user experience, especially for sites that feature dynamic content.

Businesses can also save money by using a reverse proxy server to decrypt all incoming requests and encrypt all outgoing responses. Handling all communications this way via the reverse proxy spares the main production servers the much higher cost of encrypting and decrypting SSL communications with clients.

Likewise, a reverse proxy server offers protection from attacks. This is because no service or site need ever reveal its web server’s IP address with a reverse proxy in place, and because reverse proxies offer a traffic scrubbing effect.

Protecting the server’s IP address means attackers can only target the reverse proxy, rendering DDoS and related attacks much more difficult. A reverse proxy server also scrubs all incoming traffic, distributing requests from the internet among a secure group of servers during a DDoS attack to mitigate its overall impact.

Reverse proxies are well suited to battling cyber attacks, capable of hosting web application firewalls and other tools for shutting out malicious traffic such as hacker requests and bad bots.

Is There a List of Reverse Proxy Servers?

Common reverse proxy servers include hardware load balancers, open source reverse proxies, and reverse proxy software. Reverse proxies are offered by many vendors such as VMware, F5 Networks, Citrix Systems, A10 Networks, Radware, and Public Cloud platforms such as Amazon Web Services and Microsoft Azure.

How to Set Up a Reverse Proxy Server?

Enterprises traditionally spend several weeks planning, procuring, and deploying specialized hardware to set up reverse proxy servers. These solutions can be expensive and require complex operational steps from IT teams. However, modern applications and multi-cloud environments can benefit from advanced software-defined reverse proxy servers, which simplify operations across distributed multi-cloud architectures at lower cost.

Does Avi Provide a Reverse Proxy Server?

VMware NSX Advanced Load Balancer is architected as a reverse proxy server and it is just one part of the cloud-native, elastic load balancing solution that it delivers. The Avi platform also delivers reverse proxy server security capabilities with web application firewall and DDoS mitigation.

With the Avi Platform, your business achieves a fast, secure, scalable application experience. Avi stands apart from legacy load balancers as the 100 percent software-defined option. It provides:

  • Multi-cloud – an orchestrated, consistent experience across cloud and on-premises environments through central management
  • Intelligence – Avi’s baked-in analytics provide actionable insights that render automation intelligent, autoscaling seamless, and decisions simpler
  • Automation – application delivery integrated into the CI/CD pipeline and self-service provisioning, thanks to 100% RESTful APIs

In addition to a Software Load Balancer, the multi-cloud application services also include Web App Security and Container Ingress.

Round Robin Load Balancing


Round Robin Load Balancing Definition

Round robin load balancing is a simple way to distribute client requests across a group of servers. Client requests are forwarded to each server in turn; when the load balancer reaches the end of the list, it returns to the top and repeats the sequence.

Diagram depicts round robin load balancing where load balancers distribute client (end user) requests on the internet to a group of application servers in a specific order to increase efficiency of application performance.
FAQs

What is Round Robin Load Balancing?

Easy to implement and conceptualize, round robin is the most widely deployed load balancing algorithm. Using this method, client requests are routed to available servers on a cyclical basis. Round robin server load balancing works best when servers have roughly identical computing capabilities and storage capacity.

How Does Round Robin Load Balancing Work?

In a nutshell, round robin network load balancing rotates connection requests among web servers in the order that requests are received. For a simplified example, assume that an enterprise has a cluster of three servers: Server A, Server B, and Server C.

• The first request is sent to Server A.
• The second request is sent to Server B.
• The third request is sent to Server C.

The load balancer continues passing requests to servers in this order, looping back to Server A after Server C. This distributes the server load evenly to handle high traffic.
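The rotation described above can be sketched in a few lines of Python (the class and server names are illustrative, not a real load balancer's API):

```python
import itertools

class RoundRobinBalancer:
    """Cycles through servers in order, returning to the top after the last."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        # Each call advances to the next server, wrapping around forever.
        return next(self._cycle)

lb = RoundRobinBalancer(["Server A", "Server B", "Server C"])
```

The fourth request wraps back to Server A, exactly as in the three-server example above.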

What is the Difference Between Weighted Load Balancing vs Round Robin Load Balancing?

The biggest drawback of the round robin algorithm is that it assumes servers are similar enough to handle equivalent loads. If certain servers have more CPU, RAM, or other resources, the algorithm has no way to distribute more requests to them. As a result, servers with less capacity may overload and fail more quickly while capacity on other servers lies idle.

The weighted round robin load balancing algorithm allows site administrators to assign weights to each server based on criteria like traffic-handling capacity. Servers with higher weights receive a higher proportion of client requests. For a simplified example, assume that an enterprise has a cluster of three servers:

• Server A can handle 15 requests per second, on average
• Server B can handle 10 requests per second, on average
• Server C can handle 5 requests per second, on average

Next, assume that the load balancer receives 6 requests.

• 3 requests are sent to Server A
• 2 requests are sent to Server B
• 1 request is sent to Server C.

In this manner, the weighted round robin algorithm distributes the load according to each server’s capacity.
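One simple way to implement weighted round robin (not the only one; production balancers often interleave more smoothly) is to repeat each server in the rotation according to its weight. Here the 15/10/5 requests-per-second capacities above reduce to weights of 3, 2, and 1:

```python
import itertools

def weighted_cycle(weighted_servers):
    """Yield servers in proportion to their weights.

    weighted_servers: list of (server, weight) pairs.
    """
    # Expand each server into `weight` slots, then cycle through the slots.
    expanded = [server for server, weight in weighted_servers
                for _ in range(weight)]
    return itertools.cycle(expanded)

pool = weighted_cycle([("Server A", 3), ("Server B", 2), ("Server C", 1)])
first_six = [next(pool) for _ in range(6)]
# Of the first 6 requests: Server A gets 3, Server B gets 2, Server C gets 1.
```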

What is the Difference Between Load Balancer Sticky Session vs. Round Robin Load Balancing?

A load balancer that keeps sticky sessions creates a unique session object for each client. For each subsequent request from the same client, the load balancer routes the request to the same web server, where session data is stored and updated for as long as the session exists. Sticky sessions can be more efficient because unique session-related data does not need to be migrated from server to server. However, sticky sessions can become inefficient if one server accumulates multiple sessions with heavy workloads, disrupting the balance among servers.

If sticky load balancers are used to load balance round robin style, a user’s first request is routed to a web server using the round robin algorithm. Subsequent requests are then forwarded to the same server until the sticky session expires, when the round robin algorithm is used again to set a new sticky session. Conversely, if the load balancer is non-sticky, the round robin algorithm is used for each request, regardless of whether or not requests come from the same client.
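A sticky balancer layered over round robin can be sketched as a session table that consults the rotation only for clients without an existing session. This is a deliberate simplification with hypothetical names: real load balancers key sessions on cookies and expire them on timers rather than on an explicit call.

```python
import itertools

class StickyBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._sessions = {}  # client id -> assigned server

    def route(self, client_id):
        # First request from a client: pick a server via round robin.
        # Later requests: reuse the stored assignment (the "sticky" part).
        if client_id not in self._sessions:
            self._sessions[client_id] = next(self._cycle)
        return self._sessions[client_id]

    def expire(self, client_id):
        # After expiry, the client's next request re-enters the rotation.
        self._sessions.pop(client_id, None)
```

A non-sticky balancer is the degenerate case where `route` calls `next(self._cycle)` on every request.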

What is the Difference Between Round Robin DNS vs. Load Balancing?

Round robin DNS uses a DNS server, rather than a dedicated hardware load balancer, to load balance using the round robin algorithm. With round robin DNS, each website or service is hosted on several redundant web servers, which are usually geographically distributed. Each server has a unique IP address for the same hostname. Using the round robin algorithm, the DNS server rotates through these IP addresses when answering queries, balancing the load between the servers.
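The DNS-side rotation can be sketched as reordering the address list between queries (a toy model, not a real resolver: actual DNS servers also handle TTLs, caching, and record types):

```python
from collections import deque

class RoundRobinDNS:
    """Rotate the order of A records returned for one hostname."""

    def __init__(self, addresses):
        self._records = deque(addresses)

    def resolve(self):
        # Return all addresses, then rotate so the next query
        # sees a different address first.
        answer = list(self._records)
        self._records.rotate(-1)
        return answer
```

Clients typically try the first address in the answer, so rotating the list spreads new connections across the servers.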

What is the Difference Between DNS Round Robin vs. Network Load Balancing?

As mentioned above, round robin DNS refers to a specific load balancing mechanism with a DNS server. On the other hand, network load balancing is a generic term that refers to network traffic management without elaborate routing protocols like the Border Gateway Protocol (BGP).

What is the Difference Between Load Balancing Round Robin vs. Least Connections Load Balancing?

With least connections load balancing, load balancers send requests to servers with the fewest active connections, which minimizes chances of server overload. In contrast, round robin load balancing sends requests to servers in a rotational manner, even if some servers have more active connections than others.
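Least connections reduces to picking the server with the smallest active-connection count (server names here are illustrative; ties break by list order in this sketch):

```python
def least_connections(active):
    """Return the server with the fewest active connections.

    active: dict mapping server name -> current connection count.
    """
    return min(active, key=active.get)
```

Unlike round robin, a busy server stops receiving new requests until its connection count drops back down.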

What Are the Benefits of Round Robin Load Balancing?

The biggest advantage of round robin load balancing is that it is simple to understand and implement. However, the simplicity of the round robin algorithm is also its biggest disadvantage, which is why many load balancers use weighted round robin or more complex algorithms.

Does Avi Networks Offer Round Robin Load Balancing?

Yes, enterprises can configure round robin load balancing with Avi Networks. The round robin algorithm is most commonly used when conducting basic tests to ensure that new load balancers are correctly configured.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.
