VLAN Configuration

VLAN Configuration Definition

A virtual LAN (VLAN) is a logical overlay network that groups a set of devices and isolates their traffic within a shared LAN or physical network at the same location. Like the underlying LAN, a VLAN typically operates at the Ethernet level, Layer 2 of the network: the broadcast domain, where network devices receive Ethernet broadcast packets.

Although computers on the LAN are located on a number of different LAN segments, Layer 2 VLAN configuration ensures they communicate as if they were attached to the same wire.

A single location can have many interconnected LANs, because once traffic crosses a router and engages Layer 3 functions, it is no longer on the same LAN.

VLANs are flexible because they are based on logical rather than physical connections. VLAN configurations partition one network into many virtual networks that can serve various use cases and meet many requirements.

However, this partitioning also means that communication among VLANs must travel through a router. A network switch assigns VLAN membership, the set of end-stations belonging to each VLAN, to distinguish among the VLANs. Before it can share the broadcast domain with other end-stations on the VLAN, an end-station must first be assigned VLAN membership.

A VLAN database is used to store the VLAN ID, MTU, name, and other VLAN data.


Image showing vlan configuration: 1 server communicating with 3 vlan devices.

VLAN Configuration FAQs

What is VLAN Configuration?

VLANs classify and prioritize traffic and create isolated subnets. These allow selected devices to operate together, whether or not they are located on the same physical LAN.

Enterprises manage and partition traffic with VLANs. An organization might separate data traffic from engineering, legal, and finance employees by adding VLANs for each department. This way, even if multiple applications have different latency and throughput requirements, they can execute on the same server and share a link.

A VLAN ID identifies VLANs on network switches. Each port on a switch has one or more assigned VLAN IDs, and a port falls back to a default VLAN if no other VLAN is assigned. Each VLAN ID maps to the switch ports configured with it, providing data-link access to all hosts on those ports.

Every Ethernet frame sent on a VLAN carries a VLAN tag in its header; the VLAN ID within the tag is a 12-bit field. IEEE defines VLAN tagging in the 802.1Q standard. Because the ID is 12 bits long, up to 4,096 VLANs can be defined per switching domain.
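The arithmetic behind that limit can be checked directly (note that 802.1Q reserves VID values 0 and 4095, which is why some references quote 4,094 usable VLANs):

```python
# The 802.1Q VLAN ID (VID) is a 12-bit field, so the number of
# distinct tag values per switching domain is 2**12.
VID_BITS = 12
total_ids = 2 ** VID_BITS
print(total_ids)   # 4096

# VID 0 (priority-tagged frame) and VID 4095 are reserved,
# leaving 4094 VLANs usable in practice.
usable_ids = total_ids - 2
print(usable_ids)  # 4094
```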

Attached hosts send Ethernet frames without VLAN tags. The switch inserts the VLAN tag: in a static VLAN, the VLAN ID of the ingress port; in a dynamic VLAN, the tag associated with the device's identity.

Switches forward only to ports the VLAN is associated with. Trunk ports or links between switches accept and route all traffic for any VLAN in use on both sides of the trunk. The VLAN tag is removed at the destination switch port, before transmission to the destination device.
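The ingress-tagging and forwarding behavior described above can be sketched as a simple lookup table. This is an illustrative sketch, not a real switch implementation; the port numbers, VLAN IDs, and default VLAN value are hypothetical:

```python
# Hypothetical static, port-based VLAN membership table:
# access port number -> assigned VLAN ID.
DEFAULT_VLAN = 1
PORT_VLAN = {1: 10, 2: 10, 3: 20}

def ingress_vlan(port):
    """VLAN ID the switch tags onto an untagged frame arriving on `port`.

    Ports with no explicit assignment fall back to the default VLAN."""
    return PORT_VLAN.get(port, DEFAULT_VLAN)

def egress_ports(vlan):
    """Ports the switch may forward a frame with this VLAN ID to."""
    return [port for port, v in PORT_VLAN.items() if v == vlan]

print(ingress_vlan(1))   # 10
print(ingress_vlan(9))   # 1  (unassigned port: default VLAN)
print(egress_ports(10))  # [1, 2]
```

A real switch would also exclude the ingress port from the egress set and handle trunk ports separately; the sketch only shows the membership lookup.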

VLANs can be static/port-based or dynamic/use-based.

Engineers create port-based VLANs by assigning switch ports to specific VLANs. Ports configured this way communicate only on their assigned VLANs. Port-based VLANs are not truly static, because it is possible to change their assigned access ports while in use, either manually or using automated tools.

Some VLAN use cases are straightforward, such as creating segregated access to devices in an office setting. Others are more complex, such as preventing the trading and retail departments of a bank from interacting or accessing each other's resources.

Benefits of VLAN Configuration

Network engineers typically configure VLANs for multiple reasons, including:

Enhanced performance. VLANs reduce the traffic that endpoints experience, improving performance for devices. They also reduce the broadcast load on hosts by breaking up broadcast domains, and limit network resource consumption to relevant traffic. It is also possible to define traffic-handling rules per VLAN, such as prioritizing certain kinds of traffic for specific business use cases.

Improved security. VLAN partitioning can also enable more control over which devices can access each other to improve security. For example, network access can be restricted to specific VLANs for IoT devices.

Reduced administrative burden. Administrators can reduce burdens by using VLANs to group endpoints for nontechnical purposes. For example, they may group devices by department on a single VLAN.

VLANs also have some disadvantages.

A single network segment may host hundreds or thousands of distinct organizations, and each may need hundreds of VLANs. However, there is a limit of 4,096 VLANs per switching domain. Various protocols address this limitation, including network virtualization overlays such as Virtual Extensible LAN (VXLAN) and 802.1Q tunneling (dot1q encapsulation in Cisco VLAN configuration). They enable more isolated segments to be defined by supporting larger or stacked tags.

Another challenge is identifying the correct VLAN to assign when devices connect through wireless access points or shared wall jacks.

Best Practice VLAN Configuration Recommendations

Learning how to configure VLANs can be complex and time-consuming, but the significant advantages VLANs offer to enterprise networks are often worthwhile.

Configuring the VLAN switches is the first step. Whenever the network configuration is altered, teams must also update the switches and their configurations, or add an additional VLAN.

Second, set up VLAN access control lists (VACLs) to control access to the VLANs wherever packets enter or exit, protecting network security. Next, apply the changes through each switch's command-line interface.

Packages that automate and simplify VLAN configuration management are available from various third-party equipment vendors, reducing the likelihood of error. Because they maintain a complete record of each set of configuration settings, these packages can rapidly restore the last working configuration after an error. They also make it simpler to add and remove VLANs, to check VLAN configuration, and to troubleshoot VLAN configurations.

The IEEE 802.1Q standard describes how to identify VLANs with tagging. A tagged Ethernet frame begins with the destination and source media access control (MAC) addresses, with the 32-bit 802.1Q tag, which contains the 12-bit VLAN identifier, immediately after them.
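Under that layout, the tag fields can be extracted from a raw frame with a few bit operations. The frame below is made up purely for illustration (zeroed MAC addresses, hypothetical priority and VLAN ID values):

```python
import struct

def parse_dot1q(frame: bytes):
    """Extract the 802.1Q fields from a tagged Ethernet frame.

    Layout per IEEE 802.1Q: destination MAC (6 B), source MAC (6 B),
    TPID 0x8100 (2 B), then the 16-bit TCI = PCP(3) | DEI(1) | VID(12)."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    assert tpid == 0x8100, "not an 802.1Q-tagged frame"
    pcp = tci >> 13          # priority code point
    dei = (tci >> 12) & 0x1  # drop eligible indicator
    vid = tci & 0x0FFF       # 12-bit VLAN identifier
    return pcp, dei, vid

# A made-up frame: zeroed MACs, TPID 0x8100, priority 5, VLAN ID 100.
tci = (5 << 13) | (0 << 12) | 100
frame = bytes(12) + struct.pack("!HH", 0x8100, tci)
print(parse_dot1q(frame))  # (5, 0, 100)
```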

Can you configure VLAN routing on a firewall? Yes, users can define VLANs for single firewalls and clusters of firewalls.

Does Avi Support VLAN Configuration?

Avi Vantage supports VLAN trunking on bare-metal servers and VLAN interface configuration on Linux server clouds. If the Avi Controller is deployed on a bare-metal server, the individual physical links of the server can be configured to support 802.1Q-tagged virtual LANs (VLANs). Each VLAN interface has its own IP address, and multiple VLAN interfaces per physical link are supported.

Learn more about how Avi supports VLANs.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Virtual Routing and Forwarding (VRF)

VRF Definition

Virtual routing and forwarding (VRF) is an IP-based computer network technology that enables multiple virtual routers (VRs) to co-exist simultaneously as instances, or virtual router instances (VRIs), within the same router. One or more physical or logical interfaces may be assigned to a VRF, but none of the VRFs share routes. Packets are forwarded only between interfaces on the same VRF.

VRFs work at Layer 3 of the OSI model. The independent routing instances allow users to deploy overlapping or identical IP (internet protocol) addresses without conflict. Because users may segment network paths without multiple routers, network functionality improves: one of the key benefits of virtual routing and forwarding.


Virtual Routing and Forwarding (VRF) Diagram



What is VRF?

Virtual routing and forwarding (VRF) IP technology allows users to configure multiple routing table instances to simultaneously co-exist within the same router. Overlapping IP addresses can be used without conflicting because the multiple routing instances are independent, and can select different outgoing interfaces.

VRFs are used for network isolation and virtualization at Layer 3 of the OSI model, much as VLANs serve at Layer 2. Typically, users implement VRFs primarily to separate network traffic and use network routers more efficiently. Virtual routing and forwarding can also dedicate VPN tunnels solely to a single network or client.
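The isolation described above amounts to keeping one routing table per VRF, so the same prefix can appear in several tables without conflict. A minimal sketch using Python's standard ipaddress module; the VRF names, prefixes, and next-hop addresses are all hypothetical:

```python
import ipaddress

# One independent routing table per VRF: the identical 10.0.0.0/24
# prefix co-exists in both tables without conflict.
vrfs = {
    "customer-a": {ipaddress.ip_network("10.0.0.0/24"): "203.0.113.1"},
    "customer-b": {ipaddress.ip_network("10.0.0.0/24"): "198.51.100.7"},
}

def lookup(vrf, dst):
    """Longest-prefix match confined to a single VRF's table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in vrfs[vrf] if addr in net]
    if not matches:
        return None  # no route in this VRF
    best = max(matches, key=lambda net: net.prefixlen)
    return vrfs[vrf][best]

# The same destination resolves to a different next hop per VRF:
print(lookup("customer-a", "10.0.0.5"))  # 203.0.113.1
print(lookup("customer-b", "10.0.0.5"))  # 198.51.100.7
```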


Virtual Routing and Forwarding Basics

There are two basic types of VRF: full VRF and VRF lite. Here are the basic differences.

Full VRF focuses on labeling Layer 3 traffic via MPLS, a similar idea to Layer 2 VLANs. In a service provider environment, the multiprotocol label switching (MPLS) cloud uses the Multiprotocol Border Gateway Protocol (MP-BGP). VRF isolates traffic from source to destination through that MPLS cloud. To separate overlapping routes and make use of common services, VRF incorporates Route Distinguishers (RDs) and Route Targets (RTs).
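A route distinguisher keeps overlapping customer prefixes unique by being prepended to each route, forming a VPNv4 route. A toy illustration of the idea; the RD values and prefixes are hypothetical:

```python
def vpnv4_route(rd: str, prefix: str) -> str:
    """Combine a route distinguisher with an IPv4 prefix.

    Even identical prefixes become distinct VPNv4 routes
    once different RDs are prepended."""
    return f"{rd}:{prefix}"

route_a = vpnv4_route("65000:100", "10.0.0.0/24")  # customer A
route_b = vpnv4_route("65000:200", "10.0.0.0/24")  # customer B
print(route_a)             # 65000:100:10.0.0.0/24
print(route_a != route_b)  # True: same prefix, distinct routes
```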

VRF lite, a subset of VRF, is essentially VRF without MPLS and MP-BGP. VRF lite is generally used in office LAN or data center environments to virtualize security zones and network elements. Full VRF is a highly scalable solution, whereas VRF lite does not scale well.


Advantages of Virtual Routing and Forwarding

There are several benefits of virtual routing and forwarding:

  • Enables the creation of multiple virtual routing instances on one physical device
  • Allows users to simultaneously manage multiple routing tables
  • Can be used for MP BGP and MPLS deployments
  • Multiple VPNs for customers can use overlapping IP addresses without conflict
  • Users may segment network paths without multiple routers, improving network functionality


VRF: Key Terms

Several key terms and comparisons come up frequently in the context of virtual routing and forwarding because they answer common questions. Here they are:


A virtual private network (VPN) is a network that provides private services over a public infrastructure. Sets of sites that communicate privately with one another over other private or public networks, including the internet, are virtual private networks (VPNs). The “private” in VPN does not automatically signal encryption or security; it merely means a separated pathway.

Virtual routing and forwarding or VRF configurations enable multiple VPN environments to simultaneously co-exist in a router on the same physical network or infrastructure. This allows an organization to have separated network services that reside in the same physical infrastructure invisible to each other—such as wireless, voice (VoIP), data, and video. VRFs can also be used for multiprotocol label switching or MPLS deployments.


Virtual routing and forwarding (VRF) instances support virtualization at Layer 3 of the OSI model. Virtual device contexts (VDCs) have a broader focus: virtualizing the device itself. A VDC presents the physical switch as multiple devices, each of which may contain its own independent, unique set of VRFs and VLANs.

VLANs are virtual networks functioning at Layer 2 of the OSI model. VLANs split Ethernet networks into multiple separated virtual networks to improve security and performance without constraining the physical layout of the network. In contrast, VRFs enable users to create multiple virtual routers in one physical piece of hardware.

Static routes

A static route is always associated with a VPN routing and forwarding (VRF) instance, whether the default VRF or one specified by the user. Configuring a static route without specifying a VRF places it in the default VRF; users can also customize static routes in VRF configuration mode.

Does Avi Offer VRF Routing Support?

Yes. The Avi application delivery platform can assign Avi Service Engine data interfaces to multiple VRFs. The Avi platform helps each VRF network achieve its target level of performance and increases network security.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Virtual Server

Virtual Server Definition

Unlike a dedicated server, a virtual server shares software and hardware resources with other operating systems (OSs). Virtual servers are common because server virtualization makes them cost-effective and gives administrators more efficient control over resources.

Traditionally, a physical server is dedicated to a specific task or application with its full processing power. Multiple physical servers require space, power, and money to maintain.

Diagram depicting virtual servers sharing software and hardware resources with other operating systems (OSs) to deliver applications.

What is a Virtual Server?

A virtual server mimics the functionality of a physical dedicated server. Multiple virtual servers may be implemented on a single bare-metal server, each with its own OS, independent provisioning, and software. A virtual machine server uses virtualization software to abstract the physical server's compute resources and create virtual environments.

Benefits of virtual servers include faster provisioning of applications and resources, improved disaster recovery and business continuity, and minimized or eliminated downtime. Virtualization also increases IT productivity, agility, efficiency, and responsiveness. Additional benefits of virtual servers include reduced operating costs and capital, and simplified data center management.

Virtual server environments also mimic dedicated server environments in terms of how they maintain passwords and security systems. Virtual server hosting is less expensive than data center maintenance, and server software installation provisioning may further reduce web hosting costs.

Resource hogging is the most frequent of the potential problems with virtual servers. It happens when too many virtual servers on one physical machine cause some of them to overuse resources, leading to performance issues. However, this resource issue is avoidable with appropriate implementation.

To achieve efficiency, administrators use special server virtualization software to divide one physical dedicated server into multiple virtual servers. Converting one physical server into multiple virtual servers makes better use of power and resources, enabling each physical server to efficiently run multiple OSs and applications.

What is the Difference Between a Physical Server vs Virtual Server?

Technically, a virtual server exists only as a partitioned space inside a physical server, and for users there is little difference. Practically, though, server virtualization brings a series of benefits, discussed below.

What is Server Virtualization?

Server virtualization uses virtualization software to partition or divide a server so that it looks and functions like multiple virtual servers. Each virtual server can then run its own OS and be used as needed. This way, the server as a whole can be optimized and used in many ways rather than being dedicated to just one application or task.

What are Server Virtualization Benefits and Challenges?

Benefits of server virtualization include:

  • Cost-effective. Partitioning physical servers increases the supply of virtual servers dramatically at almost zero marginal cost.
  • Resource isolation. Independent user environments ensure that things like software testing don’t affect all users.
  • Save energy and space. Fewer physical servers mean less power consumed and less space needed to house them.

Resource hogging is the most common server virtualization challenge. Too many virtual servers will crowd a physical server and hurt performance.

What is a Virtual Private Server?

A virtual private server (VPS) is a virtual server that appears to the user as a dedicated, private server, although each virtual server actually runs on a shared physical computer hosting multiple operating systems. A VPS is also sometimes called a virtual dedicated server (VDS). Both are types of virtual servers.

What is the Difference Between Virtual Server vs Cloud Hosting?

The primary difference between virtual servers and cloud hosting environments is that a virtual server is created for one user, while cloud hosting is designed for many users.

What is the Difference Between Virtual Desktop and Virtual Server?

Virtual servers and virtual desktops can achieve some of the same server virtualization goals for your computer network in practice, although they are not the same thing.
A virtual desktop is technology that allows different users to run different operating systems on one computer, work apart from the physical machine, or sever access from a connected device should it be lost or stolen.

A virtual server may still allow remote users to work and run different OSs, but it also has additional capabilities. For example, a virtual server can be used to test new software or applications without bringing down an entire server, and this is not the role of the virtual desktop.

A virtual desktop server is a form of virtual desktop infrastructure: a virtual server designed to create a virtual desktop environment and host multiple virtual desktops.

Does Avi Offer Load Balancing for Virtual Server Environments?

Yes. Avi uses software-defined principles to deliver advanced load balancing for virtual server environments. Avi Networks’ Software Load Balancer provides scalable application delivery across any infrastructure. Avi provides 100% software load balancing to ensure a fast, scalable and secure application experience for virtual server environments.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Virtual Load Balancer

Virtual Load Balancer Definition

A virtual load balancer distributes traffic across multiple network servers, providing more flexibility in balancing server workloads. Virtual load balancing aims to mimic software-driven infrastructure through virtualization: it runs the software of a physical load balancing appliance on a virtual machine.
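Whatever form the appliance takes, the core job is distributing incoming requests across a pool of backends. A minimal round-robin sketch of that distribution; the backend server names are hypothetical:

```python
import itertools

# Hypothetical backend pool behind the load balancer.
pool = ["app-1", "app-2", "app-3"]

# Round-robin: rotate through the pool endlessly, one server per request.
servers = itertools.cycle(pool)

assigned = [next(servers) for _ in range(5)]
print(assigned)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2']
```

Real load balancers layer health checks, session persistence, and weighted algorithms on top of this basic rotation.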

Diagram depicting Virtual load balancing from a pool of application servers (physical servers and virtual machines) to the virtual load balancers to the Application Clients.

What is a Virtual Load Balancer?

A virtual network load balancer promises to deliver software load balancing by taking the software of a physical appliance and running it on a virtual machine. Virtual load balancers, however, are a short-term solution. The architectural challenges of traditional hardware appliances remain, such as limited scalability and automation, and a lack of central management (including no separation of control plane and data plane) in data centers.

How Does Virtual Load Balancing Work?

Traditional application delivery controller companies build virtual load balancers using code from their legacy hardware load balancers; the code simply runs on a virtual machine. These virtual load balancers are therefore still monolithic load balancers with static capacity.

Virtual Load Balancer vs. Hardware Load Balancer?

The complexity and limitations of a virtual load balancer are similar to those of a hardware load balancer.

A hardware load balancer uses rack-mounted, on-premises physical hardware. Hardware load balancers are proven to handle high traffic volume well. But the hardware can be expensive and limit flexibility.

A virtual load balancer uses the same code as a physical appliance. It also tightly couples the data plane and control plane in the same virtual machine, which leads to the same inflexibility as the hardware load balancer.

For example, while an F5 virtual load balancer lowers the CapEx compared to hardware load balancers, virtual appliances are in reality hardware-defined software.

Virtual Load Balancer vs. Software Load Balancer?

Virtual load balancers seem similar to software load balancers, but the key difference is that virtual versions are not software-defined. That means virtual load balancers do not solve the issues of inelasticity, cost, and manual operations that plague traditional hardware-based load balancers.

Software load balancers, however, are an entirely different architecture designed for high performance and agility. Software load balancers also offer lower cost without being locked into any one vendor.

Does Avi Offer a Virtual Load Balancer?

No, Avi Networks does not offer a virtual load balancer. Avi offers a software-defined load balancing solution that uses a scale-out architecture separating the central control plane (Avi Controller) from the distributed data plane (Avi Service Engine). It delivers extensible application services, including load balancing, security, and container ingress, on one platform across any environment. Avi is 100% REST API-based, which makes it fully automatable and lets it integrate seamlessly with CI/CD pipelines for application delivery. With elastic autoscaling, Avi can scale based on application load. Built-in analytics provide actionable insights from performance monitoring, logs, and security events in a single dashboard (Avi App Insights) with end-to-end visibility.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.
