Forceful Browsing


Forceful Browsing Definition

Forceful browsing, also called forced browsing, is a brute force attack that aims to enumerate and access resources that the application does not reference but that are still retrievable.

Using brute force techniques, an attacker can search the domain directory for unlinked content such as temporary directories and files, old configuration files, and old backup data. These resources are valuable to intruders because they may store sensitive information about web applications and operating systems, such as credentials, internal network addresses, and source code.

Forced browsing attacks can be performed manually when application index pages and directories are based on predictable values or sequential numbering. For common directory and file names, this type of attack can also be conducted using automated tools.

The forced browsing attack is also known as directory enumeration, file enumeration, predictable resource location, and resource enumeration.

Diagram depicts a forced browsing attack.
Forceful Browsing FAQs

What is Forceful Browsing?

Forced browsing is an attack that gives intruders access to restricted pages and web server resources outside of the intended sequence. Authentication protects most web applications so that only users with sufficient rights can access specific areas and pages after providing their username and password. Forced browsing attempts to bypass these controls by directly requesting resources beyond the attacker’s access level, or authenticated areas of the application, without providing valid credentials. Improper configuration of permissions on these pages leaves them vulnerable to unauthorized users and forceful browsing attacks.

Forced browsing often succeeds when attackers know, infer, or guess the target URL directly. This type of brute force attack allows an unauthorized user to view directory names and files not intended for public viewing and to uncover hidden website functionality and content. To prevent forced browsing, it is critical to enforce access rights at the correct privilege level on every page in the web application, not just the pages presented to the user in the interface.

These hidden files are rich resources for forceful browsing intruders because they may include administrative site sections, backup files, configuration files, demo applications, logs, sample files, and temporary files. Such files may hold database information, file paths to other sensitive areas, machine names, passwords, details about the website, and web application internals, among other sensitive data.

The attacker can make educated guesses in a brute force attack because files and paths frequently reside in standard locations and are named according to common conventions. If the site owner fails to protect restricted files, scripts, or URLs in the web server directory with appropriate sequencing or authorization logic, the site may be vulnerable to forceful browsing.

Forceful Browsing Methods

Forceful browsing can be either manual or automated. An attacker can manually predict unlinked resources using number rotation techniques, or simply with good guesses, when URLs are generated in a predictable way. Hackers can also use automated tools to probe common file and directory names. Most open-source and commercial scanners also search for resources that are not directly linked but are predictably named and may contain sensitive information.
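
The following Python sketch shows, assuming a hypothetical target and a deliberately tiny wordlist, roughly how an automated tool probes for common unlinked paths; real scanners use far larger wordlists and smarter heuristics.

    import requests  # third-party HTTP client library

    # Hypothetical target and a tiny sample wordlist of commonly exposed paths.
    BASE_URL = "http://www.vulnerablesite.com"
    COMMON_PATHS = ["admin/", "backup/", "config.old", "test/", "logs/", ".git/"]

    for path in COMMON_PATHS:
        url = f"{BASE_URL}/{path}"
        try:
            resp = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue  # unreachable or failed request; move on to the next guess
        # A 200 (or 403) response for an unlinked path suggests the resource exists.
        if resp.status_code in (200, 403):
            print(f"Possible unlinked resource: {url} ({resp.status_code})")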

Forced Browsing Attacks

Here are several forced browsing attack examples:

Example 1

In this example, an attacker manually identifies resources by modifying URL parameters. The user, user504, checks their calendar using the following URL:

http://www.vulnerablesite.com/users/calendar.php/user504/20200615

This URL identifies both the date and the user name, allowing attackers to make forced browsing attacks by predicting the dates and user names of other users:

http://www.vulnerablesite.com/users/calendar.php/user602/20200618

This attack is successful if the unauthorized user can access the other user’s agenda. Part of the reason this kind of forced browsing attack succeeds is a poor implementation of the authorization mechanism.
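
As a minimal sketch, the candidate URLs in this example can be generated by varying the user number and date components; the base URL follows the example above, and the user range and date window are assumptions.

    from datetime import date, timedelta

    BASE = "http://www.vulnerablesite.com/users/calendar.php"

    # Guess a small range of user numbers and dates around the known values.
    user_ids = [f"user{n}" for n in range(500, 510)]
    start = date(2020, 6, 15)
    dates = [(start + timedelta(days=d)).strftime("%Y%m%d") for d in range(7)]

    candidate_urls = [f"{BASE}/{uid}/{day}" for uid in user_ids for day in dates]
    for url in candidate_urls:
        print(url)  # each URL would then be requested and checked for a valid calendar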

Example 2

Similarly, websites that fail to enforce proper checks before processing operations may be vulnerable to forced browsing attacks.

In this example, a hacker accesses a money transfer URL from a bank directly, without following the web application workflow, by analyzing HTTP requests for the online money transfer:

http://www.vulnerablebank.com/transfer-money.asp?From_account=12345678&To_account=87654321&amount=100

The attacker can now substitute their own account number as the To_account value. A successful attack of this kind will appear to have the From_account owner’s permission.

The flaw is that the web application does not verify that the first step, a valid login, has been completed before performing the second step, the money transfer. This is the forced browsing vulnerability.
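
The missing control can be illustrated with a hedged Flask sketch: before processing the transfer, the server checks that a user is logged in and that the authenticated session actually owns From_account. The route, parameter handling, and the owns_account helper are assumptions for illustration, not the bank's real implementation.

    from flask import Flask, request, session, abort

    app = Flask(__name__)
    app.secret_key = "change-me"  # required for session support

    def owns_account(user_id, account_number):
        # Hypothetical lookup: does this user own the given account number?
        raise NotImplementedError

    @app.route("/transfer-money.asp")
    def transfer_money():
        # Step 1: the request must belong to an authenticated session.
        user_id = session.get("user_id")
        if user_id is None:
            abort(401)
        # Step 2: the authenticated user must own the source account.
        from_account = request.args.get("From_account", "")
        if not owns_account(user_id, from_account):
            abort(403)
        # Only now would the transfer itself be performed (omitted here).
        return "transfer accepted"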

Example 3

In this example, Alice creates a grocery delivery app called “Food Drop” that lets users order food from people who are already shopping for themselves in their area. One of its features shows users how much time and how many miles they have saved using the app, including the ability to see the routes their deliveries have taken.

Bob, a hacker, downloads and installs “Food Drop.” He creates an account and logs in. First, Bob learns how the backend server and the application communicate using basic network-sniffing techniques. Then, he watches network traffic using a network proxy on his device so he can better understand “Food Drop” and its functionality.

The network sniffer output tells Bob that a simple HTTP GET request fetches user history, and that there is a number in the path of the GET request: 333. There are no authentication headers in the request.

An attacker like Bob can use an API tester tool to generate a customized HTTP GET request with different numbers, and eventually one will work, allowing him to access the history:

/vulnerable_api/mobile/history/333
/vulnerable_api/mobile/history/147

Bob may also enumerate files on the application web server to access files and user data.
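
A hedged sketch of how the history endpoint could be hardened: require an authentication token and scope the lookup to the authenticated user, so guessing numbers in the path no longer exposes other users' data. The endpoint path follows the example above; the user_for_token and history_for_user helpers are hypothetical.

    from flask import Flask, request, abort

    app = Flask(__name__)

    def user_for_token(token):
        # Hypothetical lookup: map a bearer token to a user id, or None if invalid.
        raise NotImplementedError

    def history_for_user(user_id):
        # Hypothetical data access: history records owned by this user only.
        raise NotImplementedError

    @app.route("/vulnerable_api/mobile/history/<int:record_id>")
    def get_history(record_id):
        token = request.headers.get("Authorization", "")
        user_id = user_for_token(token)
        if user_id is None:
            abort(401)  # no valid credentials, no data
        records = history_for_user(user_id)
        # Return the record only if it belongs to the authenticated user.
        matches = [r for r in records if r.get("id") == record_id]
        if not matches:
            abort(404)
        return matches[0]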

Security Misconfiguration Attacks

Forced browsing attacks are the result of a type of security misconfiguration vulnerability. These kinds of vulnerabilities occur when insecure or incorrect configuration leaves web application components open to attack.

Misconfiguration vulnerabilities may exist in subsystems or software components. Examples include remote administration functionality and other unneeded services left enabled, sample configuration files or scripts, and default user accounts that web server software ships with. These leftover features provide access to the system that an attacker can exploit.

The following types of attacks, along with brute force and forceful browsing, can target misconfiguration vulnerabilities:

  • Buffer overflow
  • Code injection
  • Command injection
  • Credential stuffing
  • Cross-site scripting (XSS)

How to Prevent Forced Browsing

There are two techniques that protect against forced browsing: using proper access control and enforcing an application URL space allowlist.

Using proper access control and authorization policies means giving users access commensurate with their privileges, and no more. A web application firewall (WAF) offers access control enforcement by implementing authorization policies along with protection against session-based attacks at the URL level.

Creating an allowlist involves granting explicit access to safe, allowed URLs, that is, the URLs considered a necessary part of the functional application. Any request outside this URL space is denied by default.

It is time-consuming and tedious to create and maintain such an allowlist manually. A WAF can automatically create and enforce your allowlist, analyzing trusted traffic to learn the valid URL space. It can also enforce a block list of directories and files that are often left vulnerable.
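
As a rough illustration of the allowlist idea, the Flask sketch below denies by default any request whose path does not match an explicit list of allowed URL patterns. Real WAFs learn and enforce this URL space automatically; the patterns shown here are hypothetical.

    import re
    from flask import Flask, request, abort

    app = Flask(__name__)

    # Hypothetical allowlist of URL patterns that make up the functional application.
    ALLOWED_PATTERNS = [
        re.compile(r"^/$"),
        re.compile(r"^/login$"),
        re.compile(r"^/users/calendar\.php/[\w-]+/\d{8}$"),
        re.compile(r"^/static/[\w./-]+$"),
    ]

    @app.before_request
    def enforce_allowlist():
        if not any(p.match(request.path) for p in ALLOWED_PATTERNS):
            abort(403)  # deny by default: the URL is outside the allowed space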

Even advanced WAF architectures may contain security flaws that render them vulnerable to forced browsing attacks. However, there are approaches to configuring WAF security architecture that optimize the effectiveness of the WAF and minimize the frequency and success of common attacks.

A basic single-tier or two-tier web application architecture, in which the same host machine runs both the application’s database server and web server, is useful in the early stages of project development. However, it introduces a single point of failure, so it is less suitable for production applications. A multi-tier (N-tier) architecture avoids a single point of failure and provides compartmentalization by separating the application’s components by function into multiple tiers, each running on a different system.

While WAFs can be deployed anywhere in the data path, in most application architectures the WAF is best positioned behind the load balancing tier, closest to the applications it protects, to maximize performance, utilization, visibility, and reliability.

Does VMware NSX Advanced Load Balancer Prevent Forceful Browsing?

The VMware NSX Advanced Load Balancer platform provides web application security for online services against a wide range of malicious attacks, including forceful browsing, cross-site scripting (XSS), and SQL injection. VMware NSX Advanced Load Balancer’s WAF security detects and filters out threats that could degrade, compromise, or expose online applications to denial-of-service (DoS) attacks. WAF security examines HTTP traffic before it reaches the application server. It also protects against unauthorized transfer of data from the server, providing an effective forceful browsing solution.

For more on the actual implementation of load balancing, security applications, and web application firewalls, check out our Application Delivery How-To Videos.

Failover


Failover Definition

Failover is the ability to seamlessly and automatically switch to a reliable backup system. Failover should be achieved through redundancy or by moving into a standby operational mode when a primary system component fails, reducing or eliminating negative user impact.

A redundant or standby database server, system, hardware component, or network should be ready to replace any previously active version upon its abnormal termination or failure. Because failover is essential to disaster recovery, all standby computer server systems and other backup techniques must themselves be immune to failure.

Switchover is basically the same operation, but unlike failover it is not automatic and demands human intervention. Most computer systems are backed up by automatic failover solutions.

Diagram depicts optimal architecture of a highly available failover cluster for application server failover.
FAQs

What is Failover?

For servers, failover automation includes heartbeat cables that connect a pair of servers. The secondary server remains idle as long as it detects that the pulse, or heartbeat, of the primary continues.

However, any change in the pulse it receives from the primary failover server will cause the secondary server to initiate its instances and take over the operations of the primary. It will also send a message to the data center or technician, requesting that the primary server be brought back online.
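
A simplified sketch of the secondary server's side of this logic: it stands by while heartbeats keep arriving and promotes itself, raising an alert, once the heartbeat goes quiet for too long. The heartbeat transport, timeout value, and promotion and alerting functions are placeholders.

    import time

    HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before failover (assumed value)

    def receive_heartbeat():
        # Placeholder: wait briefly and return True if a heartbeat arrived over
        # the heartbeat link (serial cable, dedicated NIC, UDP message, ...).
        raise NotImplementedError

    def promote_to_primary():
        # Placeholder: start service instances and take over the primary's role.
        raise NotImplementedError

    def alert_operations():
        # Placeholder: notify the data center or technician to restore the primary.
        raise NotImplementedError

    last_seen = time.monotonic()
    while True:
        if receive_heartbeat():
            last_seen = time.monotonic()      # primary is alive; keep standing by
        elif time.monotonic() - last_seen > HEARTBEAT_TIMEOUT:
            promote_to_primary()              # pulse lost: take over operations
            alert_operations()                # ask for the primary to be brought back online
            break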

Some systems instead simply alert the data center or technician and request a manual change to the secondary server. This kind of setup is called an automated-with-manual-approval configuration.

Storage area networks (SAN) enable multiple paths of connectivity among and between data storage systems and servers. This means fewer single points of failure, redundant or standby computer components, and multiple paths to help find a functional path in the event of component failure.

Virtualization uses a pseudomachine, or virtual machine, with host software to simulate a computer environment. This frees failover from dependency on physical server hardware.

What is a Failover Cluster?

A set of computer servers that together provide continuous availability (CA), fault tolerance (FT), or high availability (HA) is called a failover cluster. Failover clusters may use physical hardware only, or they may also include virtual machines (VMs).

In a failover cluster, the failover process is triggered if one of the servers goes down. This prevents downtime by instantly sending the workload from the failed component to another node in the cluster.

Providing either CA or HA for services and applications is the primary goal of a failover cluster. CA clusters, also called fault tolerant (FT) clusters, eliminate downtime when a primary system fails, allowing end users to keep using services and applications without any timeouts.

HA clusters, in contrast, offer automatic recovery, minimal downtime, and no data loss despite a potential brief interruption in service. Most failover cluster solutions include failover cluster manager tools that allow users to configure the process.

More generally, a cluster is two or more servers, or nodes, which are typically connected both via software and physically with cables. Some failover implementations include additional clustering technology such as load balancing, parallel or concurrent processing, and storage solutions.

Active-Active vs Active-Standby Configurations

The most common high availability (HA) configurations are active-active and active-standby or active-passive. These implementation techniques both improve reliability, but each achieves failover in a different way.

An active-active high availability cluster is usually composed of at least two nodes actively running the same type of service at the same time. The active-active cluster achieves load balancing, preventing any one node from overloading by distributing workloads across all the nodes more evenly. This also improves response times and throughput, because more nodes are available. The individual settings and configurations of the twin nodes should be identical to ensure redundancy and seamless operation of the HA cluster.

Load balancers assign clients to nodes in a cluster based on an algorithm, not randomly. For example, a round robin algorithm evenly distributes clients to servers based on when they connect.
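
For illustration, a minimal round robin selection can be sketched in a few lines of Python; the node names are hypothetical.

    from itertools import cycle

    nodes = ["node-a", "node-b", "node-c"]  # hypothetical cluster members
    rotation = cycle(nodes)                 # endless round robin iterator

    def assign_next_node():
        # Each new connection is handed to the next node in turn.
        return next(rotation)

    for client in ["c1", "c2", "c3", "c4"]:
        print(client, "->", assign_next_node())  # c1->node-a, c2->node-b, c3->node-c, c4->node-a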

In contrast, although there must be at least two nodes in an active-passive cluster, not all of them are active. Using a two node example again, with the first node in active mode, the second will be on standby or passive. This second node is the failover server, ready to function as a backup should the primary, active server stop functioning for any reason. Meanwhile, clients will only be connecting to the active server unless something goes wrong.

In the active-standby cluster, both servers must be configured with the very same settings, just as in the active-active cluster. This way, should the failover server need to take over, clients will not be able to perceive a difference in service.

Although the standby node is always running in an active-standby configuration, its actual utilization is nearly zero. Utilization of both nodes in an active-active configuration approaches 50-50, although each node should be capable of handling the entire load. This means that if the combined load in an active-active configuration consistently exceeds what a single node can handle, the failure of one node will mean degraded performance.

With an active-active HA configuration, outage time during a failure is virtually zero because both paths are active. However, outage time has the potential to be greater with an active-passive configuration as the system needs time to switch from one node to the other.

What is a SQL Server Failover Cluster?

A SQL server failover cluster, also called a high-availability cluster, makes critical systems redundant. The SQL failover cluster eliminates any potential single point of failure by including shared data storage and multiple network connections via NAS (Network Attached Storage) or SANs.

The network connection called the heartbeat, discussed above, connects two servers. The heartbeat monitors each node in the SQL failover cluster environment constantly.

What is DHCP Failover?

A DHCP server relies on the standard Dynamic Host Configuration Protocol or DHCP to respond to client broadcast queries. This network server assigns and provides default gateways, IP addresses, and other network parameters to client devices automatically.

DHCP failover configuration involves using two or more DHCP servers to manage the same pool of addresses. This enables the DHCP servers to back each other up in case of network outages and to share the task of lease assignment for that pool at all times.

However, the dialogue between failover partners is insecure, in that it is neither authenticated nor encrypted. In most organizations, securing this dialogue would be unnecessarily costly, because DHCP servers typically sit within the company’s secure intranet.

On the other hand, if your DHCP failover peers communicate across insecure networks, security is far more important. Configure local firewalls to prevent unauthorized users and devices from accessing the failover port. You can also protect the failover partnership from accidental or deliberate disruption by third parties by using VPN tunneling between the DHCP failover peers.

What is DNS Failover?

The Domain Name System (DNS) is the protocol that helps translate between IP addresses and hostnames that humans can read. DNS failover helps network services or websites stay accessible during an outage.

DNS failover creates a DNS record that includes two or more IP addresses or failover links for a single server. This way, you can redirect traffic to a live, redundant server and away from a failing server.
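
The decision logic behind DNS failover can be sketched as follows: a monitor checks the primary address and, if it is unreachable, answers (or updates) the DNS record with the secondary address instead. The addresses and port are hypothetical, and real deployments update the record through their DNS provider's interface.

    import socket

    PRIMARY_IP = "203.0.113.10"    # hypothetical primary server
    SECONDARY_IP = "203.0.113.20"  # hypothetical redundant server
    CHECK_PORT = 443

    def is_healthy(ip, port=CHECK_PORT, timeout=2.0):
        # Basic health check: can a TCP connection be opened to the server?
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def address_to_serve():
        # Answer with the primary while it is up; fail over to the secondary otherwise.
        return PRIMARY_IP if is_healthy(PRIMARY_IP) else SECONDARY_IP

    print("Serving A record pointing to:", address_to_serve())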

In contrast, failover hosting involves hosting a separate copy of your site at a different datacenter. This way no data is lost should one copy fail.

What is Application Server Failover?

Application server failover is simply a failover strategy that protects multiple servers running applications. Ideally these applications should run on separate physical servers, but at a minimum they should have unique domain names. Application server load balancing is often part of a strategy that follows failover cluster best practices.

What is Failover Testing?

Failover testing is a method that validates failover capability in servers. In other words, it tests a system’s capacity to allocate sufficient resources toward recovery during a server failure.

Can the system move operations to backup systems and handle the necessary extra resources in the event of any kind of failure or abnormal termination? For example, failover and recovery testing will assess the system’s ability to power and manage multiple servers or an additional CPU when it reaches a performance threshold.
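
As a hedged sketch, a failover test can be automated by stopping the primary node and asserting that the service keeps answering through the backup within an agreed recovery window. The service URL, the node-stop command, and the threshold below are assumptions for illustration.

    import subprocess
    import time
    import requests

    SERVICE_URL = "https://service.example.test/health"  # hypothetical health endpoint
    RECOVERY_WINDOW = 30.0  # seconds allowed for failover (assumed requirement)

    def service_is_up():
        try:
            return requests.get(SERVICE_URL, timeout=3).status_code == 200
        except requests.RequestException:
            return False

    def test_failover():
        assert service_is_up(), "service must be healthy before the test"
        # Simulate primary failure (placeholder command for the test environment).
        subprocess.run(["./stop-primary-node.sh"], check=True)
        deadline = time.monotonic() + RECOVERY_WINDOW
        while time.monotonic() < deadline:
            if service_is_up():
                return  # the backup took over within the recovery window
            time.sleep(1)
        raise AssertionError("service did not recover within the failover window")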

This threshold is most likely to be breached during critical failures, highlighting the relationship between security, resilience, and failover testing.

Does VMware NSX Advanced Load Balancer offer a Cloud Failover Strategy?

A 3-node Controller cluster can help achieve high availability, provide node-level redundancy for the Controller, and optimize performance for CPU-intensive analytics functions. Find out more about Controller cluster high availability failover operation here.

Fault Tolerance


Fault Tolerance Definition

Fault Tolerance simply means a system’s ability to continue operating uninterrupted despite the failure of one or more of its components. This is true whether it is a computer system, a cloud cluster, a network, or something else. In other words, fault tolerance refers to how an operating system (OS) responds to and allows for software or hardware malfunctions and failures.

An OS’s ability to recover from and tolerate faults without failing can be handled by hardware, software, or a combined solution leveraging load balancers (see more below). Some computer systems use multiple duplicate fault tolerant systems to handle faults gracefully. This is called a fault tolerant network.

Diagram depicts a fault tolerant load balancer architecture from a web application to web servers.
FAQs

What is Fault Tolerance?

The goal of fault tolerant computer systems is to ensure business continuity and high availability by preventing disruptions arising from a single point of failure. Fault tolerance solutions therefore tend to focus most on mission-critical applications or systems.

Fault tolerant computing may include several levels of tolerance:

  • At the lowest level, the ability to respond to a power failure, for example.
  • A step up: during a system failure, the ability to use a backup system immediately.
  • Enhanced fault tolerance: a disk fails, and mirrored disks take over for it immediately. This provides functionality despite partial system failure, or graceful degradation, rather than an immediate breakdown and loss of function.
  • High level fault tolerant computing: multiple processors collaborate to scan data and output to detect errors, and then immediately correct them.

Fault tolerance software may be part of the OS interface, allowing the programmer to check critical data at specific points during a transaction.

Fault-tolerant systems ensure no break in service by using backup components that take the place of failed components automatically. These may include:

  • Hardware systems backed up by identical or equivalent systems. For example, a server mirrored by an identical fault tolerant server running in parallel, with all operations replicated to the backup, is fault tolerant. By eliminating single points of failure, hardware fault tolerance in the form of redundancy can make any component or system far safer and more reliable.
  • Software systems backed up by other instances of software. For example, if you replicate your customer database continuously, operations in the primary database can be automatically redirected to the second database if the first goes down (a minimal sketch of this redirection follows this list).
  • Redundant power sources can help avoid a system fault if alternative sources can take over automatically during power failures, ensuring no loss of service.
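
Below is a minimal sketch of that software-level redirection, assuming hypothetical connection strings and a placeholder connect_to() helper: the application tries the primary database first and falls back to the replica on failure.

    PRIMARY_DSN = "db://primary.example.test/customers"  # hypothetical addresses
    REPLICA_DSN = "db://replica.example.test/customers"

    def connect_to(dsn):
        # Placeholder for the real database driver's connect call.
        raise NotImplementedError

    def get_connection():
        # Prefer the primary; automatically redirect to the replica if it is down.
        try:
            return connect_to(PRIMARY_DSN)
        except ConnectionError:
            return connect_to(REPLICA_DSN)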

High Availability vs Fault Tolerance

Highly available systems are designed to minimize downtime and avoid loss of service. Availability is expressed as a percentage of total running time in terms of a system’s uptime, and 99.999 percent uptime is the ultimate goal of high availability.

Although both high availability and fault tolerance reference a system’s total uptime and functionality over time, there are important differences and both strategies are often necessary. For example, a totally mirrored system is fault-tolerant; if one mirror fails, the other kicks in and the system keeps working with no downtime at all. However, that’s an expensive and sometimes unwieldy solution.

On the other hand, a highly available system such as one served by a load balancer allows minimal downtime and related interruption in service without total redundancy when a failure occurs. A system with some critical parts mirrored and other, smaller components duplicated has a hybrid strategy.

In an organizational setting, there are several important concerns when creating high availability and fault tolerant systems:

Cost. Fault tolerant strategies can be expensive, because they demand the continuous maintenance and operation of redundant components. High availability usually comes as part of a larger system, for example as one of the benefits of a load balancing solution.

Downtime. The greatest difference between a fault-tolerant system and a highly available system is downtime, in that a highly available system has some minimal permitted level of service interruption. In contrast, a fault-tolerant system should work continuously with no downtime even when a component fails. Even a system with the five nines standard for high availability will experience approximately 5 minutes of downtime annually.

Scope. High availability systems tend to share resources designed to minimize downtime and co-manage failures. Fault tolerant systems require more, including software or hardware that can detect failures and change to redundant components instantly, and reliable power supply backups.

Certain systems require a fault-tolerant design as a basic matter, while for others high availability is enough. The right business continuity strategy may include both fault tolerance and high availability, intended to maintain critical functions throughout both minor failures and major disasters.

What are Fault Tolerance Requirements?

Depending on the fault tolerance issues that your organization copes with, there may be different fault tolerance requirements for your system. That is because fault-tolerant software and fault-tolerant hardware solutions both offer very high levels of availability, but in different ways.

Fault-tolerant servers use a minimal amount of system overhead to achieve high availability with an optimal level of performance. Fault-tolerant software may be able to run on servers you already have in place that meet industry standards.

What is Fault Tolerance Architecture?

There is more than one way to create a fault-tolerant server platform and thus prevent data loss and eliminate unplanned downtime. Fault tolerance in computer architecture simply reflects the decisions administrators and engineers use to ensure a system persists even after a failure. This is why there are various types of fault tolerance tools to consider.

At the drive controller level, a redundant array of inexpensive disks (RAID) is a common fault tolerance strategy that can be implemented. Other facility level forms of fault tolerance exist, including cold, hot, warm, and mirror sites.

Fault tolerance computing also deals with outages and disasters. For this reason a fault tolerance strategy may include some form of backup power, such as an uninterruptible power supply (UPS) or a generator, so systems can run independently from the grid should it fail.

Byzantine fault tolerance (BFT) is another issue for modern fault tolerant architecture. BFT systems are important to the aviation, blockchain, nuclear power, and space industries because these systems prevent downtime even if certain nodes in a system fail or are driven by malicious actors.

What is the Relationship Between Security and Fault Tolerance?

Fault tolerant design helps prevent security breaches by keeping your systems online and resilient by design. A naively designed system can be taken offline easily by an attack, causing your organization to lose data, business, and trust. Each firewall, for example, that is not fault tolerant is a security risk for your site and organization.

What is Fault Tolerance in Cloud Computing?

Conceptually, fault tolerance in cloud computing is mostly the same as it is in hosted environments. Cloud fault tolerance simply means your infrastructure is capable of supporting uninterrupted functionality of your applications despite failures of components.

In a cloud computing setting, that may be achieved through autoscaling across geographic zones or within the same data center. In most cases there is more than one way to achieve fault tolerant applications in the cloud. The overall system still demands monitoring of available resources and potential failures, as with any fault tolerance in distributed systems.

What Are the Characteristics of a Fault Tolerant Data Center?

To be called a fault tolerant data center, a facility must avoid any single point of failure. Therefore, it should have two parallel systems for power and cooling. However, total duplication is costly, gains are not always worth that cost, and infrastructure is not the only answer. Therefore, many data centers practice fault avoidance strategies as a mid-level measure.

Load Balancing Fault Tolerance Issues

Load balancing and failover solutions can work together in the application delivery context. These strategies provide quicker recovery from disasters through redundancy, ensuring availability, which is why load balancing is part of many fault tolerant systems.

Load balancing solutions remove single points of failure, enabling applications to run on multiple network nodes. Most load balancers also make various computing resources more resilient to slowdowns and other disruptions by optimizing distribution of workloads across the system components. Load balancing also helps deal with partial network failures, shifting workloads when individual components experience problems.

Does VMware NSX Advanced Load Balancer Offer a Fault Tolerance Solution?

The VMware NSX Advanced Load Balancer offers load balancing capabilities that can keep your systems online reliably. The VMware NSX Advanced Load Balancer aids fault tolerance by automatically instantiating virtual services when one fails, redistributing traffic, and handling workload moves or additions, reducing the chance of a single point of failure strangling your system.
