DNS Load Balancing


DNS Load Balancing Definition

Domain name system (DNS) load balancing is the practice of distributing client requests for a domain across a group of server machines by configuring the domain in the Domain Name System (DNS) to correspond to an online service such as a website, a mail system, or a print server.

This image depicts network congestion: data traveling through the network and becoming congested on its way to receivers.



What is DNS Based Load Balancing?

Load balancing improves availability and performance by distributing traffic across multiple servers. Organizations speed both private networks and websites using various types of load balancing, and most websites and internet applications would not function correctly or route traffic effectively without it.

DNS, sometimes called the phone book of the internet, translates website domains such as avinetworks.com into IP addresses in a process called DNS resolution. Just as a phone book connects names with phone numbers, DNS translates human-readable domain names into long, numerical IP addresses that identify web servers and connected devices. DNS resolution saves humans from having to memorize long, difficult number sequences to access applications and websites.

In DNS resolution, user browsers make DNS queries, also called DNS requests: they ask a DNS server for the IP addresses of a destination website. Like other forms of load balancing, DNS-based load balancing improves availability and performance by distributing traffic across multiple servers; it does so by returning different IP addresses in response to DNS queries.

DNS load balancers may respond to a DNS query using various rules or methods for choosing which IP address to share. Round-robin DNS is among the DNS load balancing techniques used most often.

Advantages of DNS Load Balancing

The advantages of DNS load balancing include:

Ease of configuration. Simply direct multiple DNS records for one hostname toward the various IPs serving web service requests. Traffic is routed at the DNS level, so there are no additional server configuration changes to make and no software to install.

Health checks. DNS load balancing health checks detect unhealthy or failed servers and remove them from the addresses returned to client queries almost instantly, without affecting users.

Scalability. All servers sit behind a single external hostname, so it is possible to scale out and add servers dynamically without updating DNS name services.

Improved performance. The traditional round-robin DNS approach accounts for neither server health nor current load. High-volume DNS load balancing can instead route traffic based on load and performance.

Drawbacks of DNS Load Balancing

Unfortunately, because DNS load balancing is a simple implementation, it also has inherent problems that limit its efficiency and reliability. Most notably, DNS always returns the same set of IP addresses for a domain because it does not check for network or server errors or outages, so at times it may direct traffic toward servers that are inaccessible or down.

Another potential problem is caching: both clients and intermediate DNS servers or resolvers cache resolved addresses, both to reduce the level of DNS traffic on the network and to improve performance. The system assigns each resolved address a validity lifetime, or time-to-live (TTL). Short lifetimes improve accuracy but increase the DNS traffic and processing time that caching is meant to reduce, while long lifetimes may prevent clients from learning of server changes quickly.
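The TTL trade-off can be illustrated with a toy resolver-style cache. This is a minimal sketch with illustrative values, not a real resolver API:

```python
import time

class TTLCache:
    """Minimal sketch of resolver-style caching with a time-to-live."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # hostname -> (ip, expiry timestamp)

    def get(self, hostname):
        record = self.entries.get(hostname)
        if record and record[1] > time.monotonic():
            return record[0]  # still valid: answer from cache
        return None           # expired or missing: must re-query upstream

    def put(self, hostname, ip):
        self.entries[hostname] = (ip, time.monotonic() + self.ttl)

# A very short TTL (50 ms here, purely for demonstration) means answers
# go stale quickly, forcing frequent re-resolution.
cache = TTLCache(ttl_seconds=0.05)
cache.put("example.com", "203.0.113.10")
hit = cache.get("example.com")   # served from cache
time.sleep(0.1)
miss = cache.get("example.com")  # TTL expired; a real client would re-resolve
```

A longer TTL would reverse the trade-off: fewer upstream queries, but clients keep using a cached address even after the server behind it changes.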

Standard DNS load balancing and failover solutions work well enough in many network environments. However, for certain classes of network infrastructures, the standard DNS load balancing failover mechanism does not function well:

  • Any Internet Service Provider (ISP) network for which a DNS server failure would create unacceptable performance issues for users, because DNS is part of core services.
  • Large service providers with high volume network infrastructures such as cloud service providers, network carriers, and high transaction data center environments.
  • Service providers and businesses whose infrastructures must maintain high end-user performance requirements, such as online retailers, stock traders, etc.
  • Global Server Load Balancing (GSLB) implementations.

DNS Load Balancing vs Hardware Load Balancing

Three major load balancing methods exist: DNS load balancing, hardware-based load balancing, and software-based load balancing. Here is the difference between DNS load balancing and hardware load balancing:

Equipment. Hardware load balancing supplements network servers with dedicated physical appliances that distribute and balance traffic according to the specifications of the hardware itself. DNS load balancing distributes client requests across many servers, often in different data centers, using a domain name configuration in the Domain Name System (DNS).

Cost. DNS load balancing is typically lower cost, and may be a subscription. Hardware balancing has a higher cost up front for the equipment itself but does not usually involve additional costs until it is time to replace the hardware.

Maintenance. Physical hardware typically demands maintenance of its own, while maintenance is usually included with DNS server load balancing solutions.

Scalability. It is generally less expensive and easier to scale DNS load balancing, as users can utilize more servers by merely changing their subscription. Some providers offer global DNS load balancing services. Particularly on a global scale, it is more costly to scale and expand hardware balancing.

How to Configure DNS Load Balancing

To configure DNS load balancing for an API endpoint, a website, or another web service, point the A records for the hostname to the IPs of all the target machines. For example, suppose five different machines serve requests for the website.com hostname, each with a unique IP address. Configure DNS load balancing with five separate A records for website.com, each pointing to a different target machine’s IP address. Once the DNS changes propagate, each new end user will be routed to a different IP address.
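As a sketch, that A-record setup might look like the following fragment of a BIND-style zone file. The hostname and addresses are illustrative (the addresses come from the documentation range reserved for examples):

```text
; Illustrative zone fragment: five A records for one hostname
website.com.    300  IN  A  203.0.113.10
website.com.    300  IN  A  203.0.113.11
website.com.    300  IN  A  203.0.113.12
website.com.    300  IN  A  203.0.113.13
website.com.    300  IN  A  203.0.113.14
```

The 300-second TTL here is also just an example; as discussed above, the TTL choice trades update speed against query volume.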

Cloud-based DNS load balancing can also handle mail server traffic. The most common approach to implementing DNS load balancing for a mail server is to assign all MX records for a given domain the same priority (often 10). Most SMTP servers target the first record in a response, and each time the domain is resolved, the SMTP server receives the MX records in a different order.
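A hypothetical equal-priority MX setup might look like this (mail hostnames are illustrative):

```text
; Illustrative MX records with equal priority for round-robin delivery
example.com.  3600  IN  MX  10 mail1.example.com.
example.com.  3600  IN  MX  10 mail2.example.com.
example.com.  3600  IN  MX  10 mail3.example.com.
```

Because all three records share priority 10, the choice of which mail server receives a given connection depends on the order in which the records are returned.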

DNS Round Robin vs Network Load Balancing

Network load balancing is a broad term for the management and distribution of network traffic; it does not specify a particular routing protocol or mechanism. DNS round-robin load balancing, by contrast, is a specific DNS server mechanism.

DNS round-robin load balancing distributes traffic to improve site reliability and performance, just like other kinds of DNS-based load balancing. However, rather than using a hardware-based or software-based load balancer, DNS round-robin uses an authoritative nameserver, a type of DNS server, to perform load balancing.

Authoritative nameservers contain A records or AAAA records. These DNS records contain the matching domain name and IP address for websites. The goal of a client DNS query is to find the single A (or AAAA) record of a domain. A DNS query will always return the same IP address in a basic setup, because each A record is tied to a single IP address.

In contrast, domains have multiple A records in round-robin DNS, each tied to a different IP address. As DNS queries come in, they are spread across associated servers because IP addresses rotate in a round-robin fashion.

If a round-robin DNS load balancer rotates through four IP addresses, each address is returned once in every four requests. This makes it less likely that any one server will become overloaded.
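The rotation itself is simple to sketch. The pool of addresses below is hypothetical; a real nameserver would rotate the order of records in its responses rather than return a single address:

```python
from itertools import cycle

# Hypothetical pool of four server IPs behind one domain.
ip_pool = ["203.0.113.10", "203.0.113.11", "203.0.113.12", "203.0.113.13"]
rotation = cycle(ip_pool)

def resolve_round_robin():
    """Return the next IP in the rotation, as a round-robin nameserver might."""
    return next(rotation)

# Eight queries cycle through the pool twice.
answers = [resolve_round_robin() for _ in range(8)]
```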

The round-robin approach is popular, but there are other traffic routing methods. Some DNS-based load balancing configurations use a weighted algorithm to assign traffic proportionately based on capacity in response to DNS queries. Examples of this type of load balancing algorithm include weighted least connection and weighted round-robin.
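As an illustration of weighting, one common variant is weighted random selection, where each answer is drawn with probability proportional to a server's capacity weight. The servers and weights below are assumptions for the sketch:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical servers with capacity weights: the weight-3 server should
# receive roughly three times the traffic of the weight-1 server.
weighted_servers = {"203.0.113.10": 3, "203.0.113.11": 1}

def resolve_weighted():
    """Pick an IP with probability proportional to its capacity weight."""
    ips = list(weighted_servers)
    weights = list(weighted_servers.values())
    return random.choices(ips, weights=weights, k=1)[0]

counts = {ip: 0 for ip in weighted_servers}
for _ in range(10_000):
    counts[resolve_weighted()] += 1
# Over many queries, traffic splits close to the 3:1 weight ratio.
```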

Many approaches to DNS load balancing are dynamic, in that the DNS load balancers consider server response times and health when assigning requests. Dynamic algorithms all follow different rules and offer different advantages, but they do the same broader thing: optimize how traffic is assigned and monitor server health.

Least connection is one type of dynamic load balancing algorithm. In this configuration, traffic is assigned to the server with the fewest open connections at the time, based on server monitoring.
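The least-connection decision reduces to picking the minimum of a monitored connection count. A toy sketch, with a hypothetical monitoring snapshot:

```python
# Hypothetical snapshot of open-connection counts reported by server monitoring.
open_connections = {"203.0.113.10": 12, "203.0.113.11": 4, "203.0.113.12": 9}

def pick_least_connections(conn_counts):
    """Route the next request to the server with the fewest open connections."""
    return min(conn_counts, key=conn_counts.get)

# The server with 4 open connections receives the next request.
target = pick_least_connections(open_connections)
```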

Another common dynamic algorithm is geo-location, where the load balancer assigns all regional requests to a defined server. For example, all requests originating in the US might go to server USA.

Finally, a proximity-based algorithm instructs the load balancer to assign traffic dynamically to the user’s closest server.

Does VMware NSX Advanced Load Balancer Offer a DNS Load Balancing Solution?

Yes. The VMware NSX Advanced Load Balancer DNS virtual service is a generic DNS infrastructure that can implement DNS Load Balancing, Hosting Manual or Static DNS Entries, Virtual Service IP Address DNS Hosting, and Hosting GSLB Service DNS Entries.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Data Center Orchestration


Data Center Orchestration Definition

Data center orchestration is a process-driven workflow that helps make data centers more efficient. Repetitive, slow and error-prone manual tasks are replaced by the automation of tasks and the orchestration of processes.

Diagram depicting the data center orchestration process involving automation of tasks for a responsive data center.

What is Data Center Orchestration?

Data center orchestration software uses the automation of tasks to implement processes, such as deploying new servers. Automation solutions which orchestrate data center operations enable an agile DevOps approach for continual improvements to applications running in the data center.

Consider instruments or singers in a band. Each has an independent sound, just as each software-defined service has its own task. But when they work well together, a unique sound is produced. Orchestration outlines how individual tasks come together for a larger purpose: it organizes all the tasks of various services to create a highly functioning and responsive data center.

The massive movement of data to cloud computing has put more pressure on the data center, which needs to act as a central management platform and provide the same agility as public clouds. Data center automation and orchestration create the efficiencies needed to meet that demand.

What is the Function of Data Center Orchestration Systems?

Data center orchestration systems automate the configuration of L2-L7 network services, compute and storage for physical, virtual and hybrid networks. New applications can be quickly deployed.

Benefits of data center orchestration systems include:

  • Streamlined provisioning of private and public cloud resources.
  • Less time to value between a business need and when the infrastructure can meet the need.
  • Less time for the IT department to deliver a domain-specific environment.

Data center orchestration systems provide a framework for managing data exchanged between business processes and applications. They perform the following functions:

    • Scheduling and coordination of data services.
    • Leveraging of distributed data repository for large data sets.
    • Tracking and publishing APIs for automatic updates of metadata management.
    • Updating policy enforcement and providing alerts for corrupted data.
    • Integrating data services with cloud services.

Data Center Orchestration Tools

The following are popular data center orchestration and automation tools for the data center and cloud environments:

      • Ansible — An agentless configuration management tool. Created to push changes and re-configure newly deployed machines. Has an ecosystem conducive for writing custom apps. The platform is written in Python and allows users to script commands in YAML.
      • Chef — Uses a client-server architecture and is based on Ruby. Presents infrastructure as code.
      • Docker — Features isolated containers, which allows the execution of many applications on one server.
      • Puppet — Built with Ruby and uses an agent/primary architecture. Created to automate tasks for system administrators.
      • Vagrant — Helps build, configure and manage lightweight virtual machine environments. Lets every member of the IT team have identical development environments.
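For a flavor of how these tools express automation, here is a minimal Ansible playbook. The group name (`webservers`) and package (`nginx`) are illustrative assumptions, not taken from the text above:

```yaml
# Illustrative Ansible playbook: install and start a web server
# on every host in the "webservers" inventory group.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run against an inventory, the same declarative file configures one machine or hundreds, which is the repeatability that replaces slow, error-prone manual tasks.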

Does VMware NSX Advanced Load Balancer offer Data Center Orchestration?

Yes. The VMware NSX Advanced Load Balancer provides centrally orchestrated microservices composed of a fabric of proxy services with dynamic load balancing, service discovery, security, micro-segmentation, and analytics for container-based applications running in OpenShift and Kubernetes environments. The VMware NSX Advanced Load Balancer delivers scalable, enterprise-class microservices and containers to deploy and manage business-critical workloads in production environments using Red Hat container orchestration with OpenShift and Kubernetes clusters.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

DDoS

DDoS Definition

DDoS stands for Distributed Denial of Service, a malicious attempt by an attacker to disallow legitimate users access to a server or network resource by overloading it with artificial traffic.

Diagram depicts the process of a DDoS attack from an attacker accessing a control server to leverage a botnet of infected hosts towards a victim's server.

What is DDoS?

Distributed Denial of Service (DDoS) is the effect of a cyber attack in which a server or network resource becomes unavailable for legitimate user traffic. Denial of service occurs as the result of the attack – intentional disruptions of a target host connected to the internet by a perpetrator (attacker).

What is a DDoS Attack and How Does it Work?

DDoS is a type of denial of service (DoS) attack in which a perpetrator maliciously attempts to disrupt the normal traffic of a target network or server by flooding the surrounding infrastructure with Internet traffic. This typically involves co-opting large numbers of client devices, often infected with a Trojan, and coordinating them to make requests to the same resource at the same time. Popular among hackers for its simplicity, the DDoS attack is also affordable, if not profitable, leading malicious actors and “hacktivists” to turn to this form of cyber attack.

In general, a DDoS attack maliciously floods an IP address with thousands of messages through the use of distributed (control) servers and botnets. Victims of an attack are unable to access systems or network resources to make legitimate requests because of unwanted traffic draining the network’s performance.

Types of DDoS Attacks

Types of DDoS attacks range from those that crash services to those that flood them. The three basic categories of DDoS attacks today are volume-based attacks focused on network bandwidth, protocol attacks focused on server resources, and application attacks focused on web applications. Some of the most common DDoS attack types include:

• SYN Flood – a SYN (synchronize) flood exploits weaknesses in the TCP connection sequence, also known as the three-way handshake.

• HTTP Flood – sends artificial GET or POST requests to use maximum server resources.

• UDP Flood – a User Datagram Protocol (UDP) attack targets random ports on a computer or network with UDP packets.

• Smurf Attack – this type of attack exploits IP and Internet Control Message Protocol (ICMP) with a malware program called smurf.

• Fraggle Attack – similar to a smurf attack, a fraggle attack directs large amounts of traffic to a router’s broadcast network, using UDP rather than ICMP.

• Shrew Attack – targets TCP using short synchronized bursts of traffic on the same link.

• Ping of Death – manipulates IP by sending malicious pings to a system.

• Slowloris – uses minimal resources during an attack, targeting web servers in a similar approach to HTTP flooding while keeping connection with target open for as long as possible.

• Application Layer Attacks – go after specific weaknesses in applications as opposed to an entire server.

• NTP Amplification – exploits Network Time Protocol (NTP) servers with an amplified reflection attack.

How to Stop a DDoS Attack

It is important to establish the best DDoS protection for your business to prevent DDoS attacks that could compromise your company data and intellectual property. DDoS protection, otherwise known as DDoS mitigation, is crucial for companies to maintain as DDoS threats are growing. The average week-long DDoS attack costs less than $200, and more than 2,000 of them occur worldwide every day. Firms often pay a fraction of the cost for anti DDoS prevention services compared to the damages that victims of an attack incur.

If you don’t currently have a plan for DDoS attack mitigation, now is a good time to start. DDoS security comes in several different approaches, such as DIY solutions, on-premises tools, and cloud-based solutions.

DIY DDoS Protection

This method is by far the least expensive, but is often considered a weak approach and inadequate for online businesses with decent traffic. The main goal of most DIY defenses is to stop flood attacks by implementing traffic thresholds and IP denylisting rules.

These anti DDoS setups are reactive in nature, usually kicking in after an initial attack. Although this approach might hinder future attacks, most aggressors are able to adapt and modify their methods. Moreover, constraints in network bandwidth with DIY solutions commonly prove ineffective as companies lack the scalability to defend from attack.
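The DIY approach described above, traffic thresholds plus IP denylisting, can be sketched as a per-IP sliding-window rate limiter. All thresholds and addresses here are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

# Illustrative DIY defenses: a static denylist plus a per-IP request
# threshold over a sliding time window.
WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 100
DENYLIST = {"198.51.100.7"}  # hypothetical known-bad source

recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def allow_request(source_ip, now=None):
    """Return True if the request should be served, False if dropped."""
    if source_ip in DENYLIST:
        return False
    now = time.monotonic() if now is None else now
    window = recent[source_ip]
    # Discard timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False  # flood threshold exceeded; drop the request
    window.append(now)
    return True
```

The sketch also shows the weakness noted above: it only reacts once a source exceeds the threshold, and an attacker who spreads traffic across many source IPs stays under every per-IP limit.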

On-Premises DDoS Protection

This approach adds an extra layer of hardware appliances deployed on-site at customer data centers along with other networking equipment and servers. On-premises protection can often be an expensive option for DDoS security.

Advantages of the advanced traffic filtering offered by on-premises DDoS protection solutions include low latency, control of data, and compliance with strict regulations in certain industries. Drawbacks involve higher costs for DDoS mitigation, requirement for manual deployment in case of an attack, and constraints on available bandwidth.

Cloud Server DDoS Protection

Off-premise cloud solutions are outsourced services that require less investment in management or upkeep than other DDoS mitigation services, while providing effective protection against both network and application layer threats. These services are deployed as either an always-on or an on-demand service and can elastically scale up resources to counteract DDoS attacks. Services such as a Content Delivery Network (CDN) can route and filter traffic, offloading malicious requests and sending only traffic determined to be “safe” to the website.

Always-on services enable DNS server redirection, focusing on mitigation of application layer attacks that exhaust server resources. The on-demand option mitigates network layer attacks that target core components of network infrastructure, such as a UDP flood, through elastic scale-up of services.

DoS vs DDoS

A DoS attack is a denial of service attack where one or more computers are used to flood a server with TCP and UDP packets in order to overload a target server’s capacity and make it unavailable for normal users. A DDoS attack is one of the most common types of DoS attack, using multiple distributed devices to target a single system. This type of attack is often more effective than other types of DoS attacks because there are more resources the attacker can leverage, making recovery increasingly complicated.

Does VMware NSX Advanced Load Balancer offer DDoS Protection?

The VMware NSX Advanced Load Balancer protects and mitigates against DDoS attacks by identifying threats, informing admins and automatically protecting against these attacks. Some of the features that are used to accomplish this are TCP SYN Flooding Protection, HTTP DDoS Protection, URL filtering, Connection Rate Limiting per Client, Connection Rate Limiting per User Defined Clients, Limiting Max Throughput per VS, Limiting Max Concurrent Connections per VS and Limiting Max Concurrent Connections per Server.

In addition, the VMware NSX Advanced Load Balancer’s elastic application services enable on-demand autoscaling of services during an attack giving administrators much needed time to work on mitigating the attack while maintaining quality of service. Learn about the autoscaling of VMware NSX Advanced Load Balancer’s Software Load Balancer.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.