Multi-Tenant

<< Back to Technical Glossary

Multi-Tenant Definition

Multi-tenant architecture serves multiple customers using a single instance of software running on a server. Separate customers in a multi-tenant environment tap into the same data storage and hardware rather than each requiring a dedicated instance. Although every tenant’s data runs on the same server, it remains isolated and invisible to others.

Within the context of application delivery and load balancing, multi-tenancy has a similar definition. In this context, each tenant might represent a business unit or customer organization requiring access to an isolated group of resources (servers and applications). Each tenant may have different requirements based on its needs, such as security protocols, compliance requirements, and budget allocations. A multi-tenant load balancer can manage the requirements of each of these tenants within the same central management cluster.

Originally, multitenancy simply referred to a single software instance that serves multiple tenants. However, the term multi-tenant has broadened in meaning beyond software multitenancy thanks to modern cloud computing, and now also refers to shared cloud infrastructure.

In cloud computing, online users access data and applications that are hosted in various data centers by remote servers. Instead of locating applications and data on servers on the premises of a company or on smartphones, laptops, and other individual client devices, they are centralized in the cloud. The ease and convenience of accessing multiple apps and platforms from various devices has, in part, driven the explosion of cloud-based multi tenant applications.

This image depicts traditional multi-tenant deployment and the reasons why Avi is the best choice for per-app / per-tenant load balancing.

 

Multi-Tenant FAQs

What is Multitenant Architecture?

Multitenant architecture, multi tenant architecture, or multitenancy architecture in cloud computing refers to multiple cloud vendor customers using shared computing resources. However, although they share resources, the data of cloud customers is kept totally separate, and they aren’t aware of each other. Without multitenancy or multi-tenant architecture, cloud services including containers, IaaS, PaaS, serverless computing, and software-as-a-service (SaaS) would be far less practical.

Multi tenant architecture references a single instance of the software, such as one workable application, that runs on the multi-tenant cloud infrastructure provided by the cloud vendor, such as Azure, AWS, or GCP, to simultaneously serve the needs of multiple customers. Tenants are invisible to each other, and each customer’s data is stored separately within the multi-tenant SaaS architecture.

Some multi-tenant architecture examples would be HubSpot, GitHub, and Salesforce. In each case, every user shares the main multi-tenant database and software application, but each tenant’s data is isolated and invisible to others. Users can customize some features, such as notifications or themes, but core application code remains intact and cannot be changed by tenants.
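
The isolation described above can be sketched in a few lines (all names here are hypothetical): every row in the shared store carries a tenant identifier, and every query is scoped to a single tenant, so one tenant’s data never surfaces in another’s results.

```python
# Minimal sketch of row-level tenant isolation in a shared store.
# All names are hypothetical; real SaaS platforms enforce this in the
# data layer (e.g. a mandatory tenant_id predicate on every query).

class SharedStore:
    def __init__(self):
        self._rows = []  # one shared table for all tenants

    def insert(self, tenant_id, record):
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every read is scoped to the caller's tenant; other tenants'
        # rows are filtered out unconditionally.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = SharedStore()
store.insert("acme", {"user": "alice"})
store.insert("globex", {"user": "bob"})

# Each tenant sees only its own rows, despite the shared table.
print(store.query("acme"))  # [{'tenant_id': 'acme', 'user': 'alice'}]
```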

Within the realm of application delivery, a multi-tenant architecture is one that can handle different policies for each entity requiring access to each pool of resources. This means one central management control plane can govern application services for different tenants. Those tenants may access different applications, with distinct SLAs and security policies, but the ADC will handle them centrally, providing visibility across the tenant environment.

Single Tenant vs Multi-Tenant

There are several ways to think about the differences between single tenancy versus multitenancy. A classic way of thinking about single tenant architecture versus multi-tenant architecture is the analogy of a single family home versus an apartment building.

It is true that users of the multi-tenant architecture share infrastructure and amenities as you would in an apartment building or condominium complex, and that user accounts or “apartments” are customizable. However, the housing analogy implies privacy drawbacks that do not necessarily exist in a cloud environment.

A better analogy for understanding multitenancy might be how multiple customers use a bank. The many users of such a facility are mostly unaware of each other, and the shared facility affords far greater security than they could provide individually. Although assets may be held in a common location, they are kept completely separate. And while at an individual branch (or on an individual app or server) there may be an occasional “noisy neighbor” effect on a busy day, bank customers mostly don’t perceive each other.

Users of public cloud computing platforms access the same servers and other infrastructure while maintaining separate business logic and data. And while originally multi-tenant architecture referred to one instance of software that served multiple tenants, modern cloud computing has expanded multitenancy to include shared cloud infrastructure.

Within application delivery, a single tenant might represent an individual customer, a business unit, a function within an organization or team. Multi-tenancy then refers to a combination of those business units, teams or customers, each of which might have their own requirements, resources and cost centers.

Single Tenant vs Multi-Tenant Pros and Cons

To better compare single tenancy and multi-tenant platforms, consider their basic structures, benefits, and drawbacks:

Single Tenant SaaS

In single tenant SaaS architecture the client is the tenant. Each user has supporting infrastructure and a dedicated server in the single tenant environment. Users cannot share single tenant products, but they can customize them to their exact requirements.

A subdivision with one basic model home that can be customized is a metaphor for a single tenant SaaS environment. In this kind of neighborhood community, the basic floor plan and infrastructure are designed and built by the same engineer, but each household uses its own infrastructure and can modify it as needed. Similarly, each user in a single tenant architecture can customize their individual software instance to meet their business requirements.

Advantages of Single Tenant Architecture

Security. Single tenancy isolates each user’s data completely from other users. This structure protects against breaches and hacking, since customers can’t access the sensitive information of others.

Reliability. Single tenant environments are more reliable because the activities of one user cannot affect anyone else. For example, downtime during one client’s difficult integration impacts that client’s software alone; it won’t impact the software of any other users.

Easier Backup and Restoration. Isolated backups to a dedicated portion of the server for each user’s database make it easier to access historical data for backup and restoration. Because all user data is stored in account-specific locations, teams can more easily restore previous settings.

Individual Upgrades. Single tenants don’t need to wait for universal updates from the software provider. They can upgrade their services individually, on their own schedule as soon as a release is available, without disrupting workflow.

Easier Migration. Migrating to a self-hosted environment from a SaaS environment is easier because it is simpler to export and transfer data that is all stored in one space.

Drawbacks of Single Tenant Environments

Some drawbacks associated with single tenant environments include:

Cost. Typically, single tenancy costs more than multi-tenant cloud architecture. Each new user requires a new instance, and every one has an associated cost. There is also no cost-sharing for monitoring, deployment, or other services. Furthermore, more maintenance and customizations demand more time and compute resources.

Maintenance. Single tenant SaaS architecture, which demands constant upgrades and updates, generally requires more maintenance. This can consume extensive time, and the user must manage it.

Efficiency. Single tenant SaaS is often less efficient than multi-tenant SaaS because resources sit underused until each instance is fully onboarded. In practice, the ongoing need to update means either running an outdated version or permanently dedicating resources to maintenance.

What is Multi-Tenant Architecture?

As described above, in a multi-tenant SaaS architecture multiple users save and store data in a single instance of the software and its supporting database. Each user has some level of customization possible, but shares the same application and multi-tenant database. Based on this, there are several benefits to a multi-tenant cloud management platform.

Advantages of Multi-Tenant Architecture

Lower Costs. Multi-tenant architecture often costs less than a single tenant structure because it allows for the exchange of applications, resources, databases, and services. Additional users can use the same software, so scaling has fewer implications.

Efficient Resources. Multi-tenant software architecture shares all resources, offering optimum efficiency and the capacity to power multiple users at once, because it is a dynamic environment where users access resources simultaneously.

Lower Maintenance Costs and Fewer User Responsibilities. Typically, users don’t have to pay expensive maintenance costs and other fees to maintain the software, as those costs are covered by the SaaS subscription. The provider takes on patches, updates, and other software development, along with areas that can be moved to the cloud, such as hosting.

Common Data Centers. Customers use a common infrastructure so there is no need to create a new data center for each new customer.

Increased Computing Capacity. Multitenancy architecture allows for more computing or server capacity within the same infrastructure and data center.

Simplified Data Mining. All data can be accessed from within a single database schema by all customers, making it more accessible.

Streamlined Data Release and Installation. A multi-tenant package only requires installation on one server rather than individual releases of code and data to specific servers and client desktops.

These same advantages for SaaS also translate to application delivery whereby multiple business units (tenants) can share the central capabilities and costs of the ADC between each other, and scale them up or down as required. This prevents over-provisioning which historically has been a challenge for hardware-based ADCs that are not divisible or scalable.

Drawbacks of Multi-Tenant Cloud Architecture

Multi-tenant architecture has its own shortcomings.

Downtime. Because it relies on large, complex databases that require routine hardware and software downtime, multi-tenant architecture may experience more downtime. This can make an organization appear less reliable and cause issues with availability for customers.

Security and Compliance. Certain potential multi-tenant cloud security risks and compliance issues exist. For example, due to regulatory requirements, some companies may not be able to use shared infrastructure to store data, no matter how secure it is. Additionally, although it shouldn’t occur when infrastructure is configured properly by the cloud vendor and it is extremely rare, corrupted data or other security problems from one tenant could spread to other tenants on the same machine.

However, cloud vendors typically invest more than individual businesses can in their security. The right multi-tenant security model greatly mitigates these risks. A multi-tenant firewall provides a dedicated instance for each user, and multi-tenant monitoring software also offers added security. Ultimately, most multi-tenancy systems provide much more security than single tenant systems.

Noisy Neighbors. There may be more noise and in-app disturbances in multi-tenant environments. Shared databases inside a multi-tenant environment can mean hardware and software issues for one tenant impact others. This “noisy neighbor” effect can mean inadequate computing power and reduced performance for other users, or even an outage. However, if the cloud vendor has correctly set up their infrastructure, this should not occur.
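
One common mitigation, sketched below with hypothetical names, is per-tenant rate limiting: each tenant draws from its own token bucket, so a burst from one tenant is throttled without degrading service for the others.

```python
# Hypothetical per-tenant rate limiter: providers damp the "noisy
# neighbor" effect by capping each tenant's request rate so one
# tenant's burst cannot starve the rest.
import time

class TenantRateLimiter:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill = refill_per_sec
        self._buckets = {}  # tenant_id -> (tokens, last_timestamp)

    def allow(self, tenant_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self._buckets.get(tenant_id, (self.capacity, now))
        # Refill tokens in proportion to elapsed time, up to capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens >= 1:
            self._buckets[tenant_id] = (tokens - 1, now)
            return True
        self._buckets[tenant_id] = (tokens, now)
        return False

limiter = TenantRateLimiter(capacity=2, refill_per_sec=1)
# A burst from one tenant is throttled...
print([limiter.allow("noisy", now=0.0) for _ in range(3)])  # [True, True, False]
# ...while another tenant sharing the same limiter is unaffected.
print(limiter.allow("quiet", now=0.0))  # True
```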

Less Customization. Multi-tenant SaaS is less customizable than single tenant SaaS and users cannot totally control environmental quality because services and resources are shared with multiple customers.

How Is Multi-Tenancy Implemented?

Various technical principles enable multitenancy in different cloud computing settings.

Public Cloud Computing. Public cloud providers implement multitenancy so that the same tool meets each user’s specific needs in a slightly different way: a shared software instance is altered at runtime using stored tenant metadata so that it behaves appropriately for each user. Permissions isolate the tenants from each other, and each experiences and uses the software differently.
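
As a rough illustration (tenant names and settings are invented), the shared instance might look up stored tenant metadata on each request and merge it over platform defaults:

```python
# Sketch of metadata-driven multitenancy: one shared code path reads
# per-tenant settings at runtime, so the same instance behaves
# differently for each tenant. All tenant names and settings invented.

TENANT_METADATA = {
    "acme":   {"theme": "dark",  "locale": "en_US", "feature_x": True},
    "globex": {"theme": "light", "locale": "de_DE", "feature_x": False},
}

DEFAULTS = {"theme": "light", "locale": "en_US", "feature_x": False}

def render_settings(tenant_id):
    # The shared instance is "altered at runtime" only through stored
    # metadata; the application code itself is identical for everyone.
    return {**DEFAULTS, **TENANT_METADATA.get(tenant_id, {})}

print(render_settings("acme")["theme"])     # dark
print(render_settings("unknown")["theme"])  # light (falls back to defaults)
```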

Container Architecture/Multi-Tenant Kubernetes. Containers are self-contained, and can ensure consistent application performance regardless of hosting location. Each container runs as if it were the host machine’s only system, partitioned from other containers in its own user space environment. These characteristics make it easy to run containers belonging to multiple cloud customers on one host machine.

In Kubernetes multitenancy, multiple workloads or applications run side by side, sharing resources between tenants. The control plane and cluster are shared by the applications, workloads, or users.
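
One common pattern, assuming a namespace-per-tenant design, is to generate a Namespace plus a ResourceQuota for each tenant so that tenants share the cluster within fixed bounds. The sketch below builds those manifests as plain dicts rather than calling a real cluster:

```python
# Hypothetical provisioning sketch: one Kubernetes namespace per
# tenant, each capped by a ResourceQuota so tenants share the cluster
# fairly. Manifests are plain dicts; a real script would apply them
# with a Kubernetes client or `kubectl apply`.

def tenant_manifests(tenant, cpu_limit, mem_limit):
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"tenant-{tenant}"},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "tenant-quota",
                     "namespace": f"tenant-{tenant}"},
        "spec": {"hard": {"limits.cpu": cpu_limit,
                          "limits.memory": mem_limit}},
    }
    return [namespace, quota]

for m in tenant_manifests("acme", "4", "8Gi"):
    print(m["kind"], m["metadata"]["name"])
# Namespace tenant-acme
# ResourceQuota tenant-quota
```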

Serverless Computing/Function-as-a-Service (FaaS). In this model of cloud computing, applications are broken up into smaller portions called functions. Each function runs separately from other functions and only on demand. Serverless functions run on any available machine in the serverless provider’s infrastructure, not on dedicated servers. Serverless providers may be running code from multiple customers on one server simultaneously because users do not have their own discrete physical servers.
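
A toy sketch of this model (customer and function names invented): each function is registered independently and runs only when invoked, with nothing persisting between calls.

```python
# Toy FaaS sketch: the application is split into independent
# functions, each invoked on demand. The platform may schedule
# functions from different customers onto whatever machine has
# capacity; here a registry keyed by (customer, name) stands in for
# that scheduling. All names are invented.

FUNCTIONS = {}  # (customer, function_name) -> callable

def register(customer, name):
    def wrap(fn):
        FUNCTIONS[(customer, name)] = fn
        return fn
    return wrap

@register("acme", "resize")
def resize(event):
    return f"resized {event['image']}"

@register("globex", "bill")
def bill(event):
    return f"billed {event['amount']}"

def invoke(customer, name, event):
    # Each invocation runs independently, only when requested.
    return FUNCTIONS[(customer, name)](event)

print(invoke("acme", "resize", {"image": "logo.png"}))  # resized logo.png
```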

Private Cloud Computing. Similar to public cloud computing, multiple tenants or customers share architecture in private cloud computing. The difference is that the multiple tenants are teams within one private organizational cloud, not multiple organizations.

Does VMware NSX Advanced Load Balancer Support Multi-Tenant Solutions?

Yes – the Platform supports multi-tenancy. Within VMware NSX Advanced Load Balancer a tenant, such as a business unit, can manage an isolated group of resources. Each tenant has a full set of controls, monitoring, visibility and reporting across those resources.

In fact, the VMware NSX Advanced Load Balancer platform supports the following forms of tenancy:

  • Control plane isolation only — Policies and application configuration are isolated between each tenant. This means there are no shared policies or configurations between tenants. The applications are provisioned using a common set of Service Engine entities, so the engines are shared between tenants even though the policies implemented are unique.
  • Control + Data plane isolation — Policies and application configuration are isolated across tenants. Furthermore, the applications are provisioned on an isolated set of Service Engines, not shared between tenants. This enables full tenancy.
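
The difference between the two modes can be sketched as follows (tenant and Service Engine names are invented): in control-plane-only isolation tenants keep separate configurations but place applications on one shared Service Engine pool, while full isolation also gives each tenant its own pool.

```python
# Sketch of the two tenancy modes above. Tenant and Service Engine
# names are hypothetical; this only models who shares which pool.

SHARED_POOL = ["se-1", "se-2"]

def service_engines_for(tenant, mode, dedicated_pools=None):
    if mode == "control-plane-only":
        # Configs are isolated per tenant, but every tenant's apps are
        # placed on the same shared pool of Service Engines.
        return SHARED_POOL
    if mode == "control-and-data-plane":
        # Full tenancy: each tenant gets its own pool as well.
        return dedicated_pools[tenant]
    raise ValueError(mode)

pools = {"acme": ["se-acme-1"], "globex": ["se-globex-1"]}
print(service_engines_for("acme", "control-plane-only"))             # ['se-1', 'se-2']
print(service_engines_for("acme", "control-and-data-plane", pools))  # ['se-acme-1']
```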

This flexibility, combined with the Platform’s ability to assign users to single or multiple tenants, gives the Platform a high degree of configurability to meet a range of enterprise requirements and situations.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Find out more about the VMware NSX Advanced Load Balancer platform here.

Multi-site Load Balancing


Multi-site Load Balancing Definition

Multi-site Load Balancing distributes traffic across servers located in multiple sites or locations around the world to facilitate disaster recovery and business continuity.

Diagram depicting multi-site load balancing across multiple server and application architecture environments distributed across many different locations for application delivery and application services.
FAQs

What Is Multi-Site Load Balancing?

Multi-site load balancing, also known as global server load balancing (GSLB), distributes traffic across servers located in multiple sites or locations around the world. The servers can be on-premises or hosted in a public or private cloud.

Multi-site load balancing is important for quick disaster recovery and business continuity after a disaster in one location renders a server inoperable. GSLB multi-site load balancing automatically diverts requests away from the failed server to servers in other locations not affected by the failure.

How Does Multi-Site Load Balancing Work?

Multi-site load balancing preserves data and business continuity when there is a sudden server failure or service disruption. Multi-site load balancing redirects traffic to the nearest server not affected by the failure. The automatic application traffic management is based on predefined policies. This load balancing solution provides seamless failover and failback without the need for manual intervention, which results in a lower mean time to recovery (MTTR).
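
A greatly simplified sketch of this behavior, with invented site data: route to the nearest site that passes health checks, and fail over automatically when the nearest site goes down.

```python
# Toy GSLB-style site selection: requests go to the nearest healthy
# site; when a health check fails, traffic shifts automatically to the
# next-nearest healthy site. Site names and distances are invented.

SITES = [
    {"name": "us-west", "distance_km": 100,  "healthy": True},
    {"name": "us-east", "distance_km": 4000, "healthy": True},
    {"name": "eu-west", "distance_km": 8000, "healthy": True},
]

def pick_site(sites):
    healthy = [s for s in sites if s["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy sites")
    # Nearest healthy site wins; a real GSLB would also weigh load,
    # persistence, and policy.
    return min(healthy, key=lambda s: s["distance_km"])["name"]

print(pick_site(SITES))  # us-west

# A disaster takes the nearest site offline; failover needs no
# manual intervention.
SITES[0]["healthy"] = False
print(pick_site(SITES))  # us-east
```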

Does VMware NSX Advanced Load Balancer Offer Multi-Site Load Balancing?

Yes. VMware NSX Advanced Load Balancer delivers global server load balancing (GSLB), also known as multi-site load balancing, for enterprise customers as part of the software load balancing capabilities of VMware NSX Advanced Load Balancer. This capability includes:

    • Active / Standby data center traffic distribution
    • Active / Active data center traffic distribution
    • Geolocation database and location mapping
    • Data center persistence
    • Rich visibility and metrics for all transactions

VMware NSX Advanced Load Balancer’s software load balancer is intent-based. It delivers elasticity and intelligence across every cloud and can be managed from a single, centralized controller.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

For more information on multi-site load balancing see the following resources:

Managed Service Provider


Managed Service Provider Definition

A managed service provider is a third-party company that provides network, application and system management services to enterprises with a pay-as-you-go pricing model. The managed service provider allows enterprises without IT expertise to improve their day-to-day operations and avoid IT maintenance issues. Managed IT service providers have existed since the 1990s. They started with IT infrastructure management and developed into application service providers (ASPs); today, services also include mobile device management, cloud storage, remote monitoring and management, and security-as-a-service.

Diagram depicting a company acting as a managed service provider for other companies, managing their IT, security, application delivery, data storage, networking and other virtual services.
FAQs

What is a Managed Service Provider?

When an enterprise does not have the resources to employ a dedicated IT team to handle development, maintenance and break fix, those needs are outsourced to a managed IT service provider. The managed service provider works remotely and for a fixed cost. This allows small and medium-sized businesses to reduce their IT budget, be cost-effective and focus on the core business. Large corporations and enterprises, such as government agencies, will contract with a managed service provider when they have budget and hiring limitations.

How do Managed Service Providers Work?

Using a managed service provider does not mean an enterprise gives up all control and responsibility for IT operations. Using a managed service provider is a strategic method for enterprises to keep some operations in-house while outsourcing others. Together, the company and the managed service provider determine the best course.

Initially, the managed service provider reviews an enterprise’s processes to find ways to improve efficiency, lower risk and reduce costs.

Typically, the managed service provider handles the most time-consuming, complex and repetitive work. They also provide ongoing maintenance and support.

What do MSPs do?

The managed service provider business model offers more than just convenient and lower cost IT application management. Cloud managed service providers help companies think strategically about cloud computing to avoid pitfalls and fully appreciate the benefits. Not everything can or should migrate to the cloud, and a managed service provider can provide guidance while helping an enterprise grow its business utilizing the cloud.

A cloud managed service provider handles operations that customers don’t see, such as IT, human resources, vendor management and procurement. Costs are predictable with a fixed monthly fee. Enterprises save money by outsourcing IT employees and avoiding unplanned maintenance and repairs.

The expertise of the managed IT service provider gives small and medium-sized businesses top IT knowledge that they wouldn’t possess on their own. This helps organizations reduce risk and liability when it comes to compliance with government regulations.

Other benefits include:

• Improved Security: Backup and disaster recovery plans for any possible incident. Network monitoring around the clock.

• Comprehensive Reporting: View of entire infrastructure in real time, which helps enterprises track all activity.

What are Examples of Managed Service Provider (MSP) Companies?

There are specialized versions of managed service providers in areas of security, business continuity and data storage solutions. Managed security service providers, and managed IT service providers can also focus on specific industries like legal, financial services, healthcare and government agencies. The managed services market is rich with specialist providers of different sizes, focus and cost.

What Does VMware NSX Advanced Load Balancer Offer for a Managed Service Provider?

VMware NSX Advanced Load Balancer is purpose-built for the multi-cloud and mobile era using a unique analytics-driven, 100% software approach. VMware NSX Advanced Load Balancer is the first platform to leverage the power of software-defined principles to achieve unprecedented agility, insights, and efficiency in application delivery. Managed service providers can use VMware NSX Advanced Load Balancer to deliver, secure and manage their applications on behalf of customers.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

For more information on MSPs see the following resources:

Multi-cloud


Multi-Cloud Definition

Multi-cloud (also multicloud or multi cloud) is the use of multiple cloud computing and storage services in a single network architecture.

This refers to the distribution of cloud assets, software, applications, and more across several cloud environments. With a typical multi-cloud architecture utilizing two or more public clouds as well as private clouds, a multi-cloud environment aims to eliminate the reliance on any single cloud provider or instance.

Multi-cloud diagram showing an enterprise or company using multiple cloud service providers for data storage and application delivery.
FAQs

What is Multi-Cloud?

Multi-cloud is the use of two or more cloud computing services from any number of different cloud vendors. A multi-cloud environment could be all-private, all-public or a combination of both. Companies use multi-cloud environments to distribute computing resources and minimize the risk of downtime and data loss. They can also increase the computing power and storage available to a business. Innovations in the cloud in recent years have resulted in a move from single-user private clouds to multi-tenant public clouds and hybrid clouds — a heterogeneous environment that leverages different infrastructure environments like the private and public cloud.

Why Use a Multi-Cloud Strategy?

A multi-cloud strategy allows companies to select different cloud services from different providers because some are better for certain tasks than others. For example, some cloud platforms specialize in large data transfers or have integrated machine learning capabilities. Organizations implement a multi-cloud environment for the following reasons:

• Choice: The additional choice of multiple cloud environments gives you flexibility and the ability to avoid vendor lock-in.

• Disaster Avoidance: Outages happen; sometimes it is due to a disaster; other times it is due to human error. Having multiple cloud environments ensures that you always have compute resources and data storage available so you can avoid downtime.

• Compliance: Many multi-cloud environments can help enterprises achieve their goals for governance, risk management and compliance regulations.

What is Multi-Cloud Management?

Multi-cloud management involves workload or application management in multi-cloud computing as information moves from one cloud platform to another. This requires an organization to possess an expertise in multiple cloud providers and complex cloud management.

How Secure is Multi-Cloud?

Multi-cloud security has the specific challenge of protecting data in a consistent way across a variety of cloud providers. When a company uses a multi-cloud approach, third-party partners handle different aspects of security. That is why it is important in cloud deployment to clearly define and distribute security responsibilities among the parties.

What are the Benefits of Multi-Cloud?

A multi-cloud platform combines the best services that each platform offers. This allows companies to customize an infrastructure that is specific to their business goals. A multi-cloud architecture also provides lower risk. If one web service host fails, a business can continue to operate with other platforms in a multi-cloud environment versus storing all data in one place.

What are Examples of Cloud Providers?

There are many public cloud providers including:

    • AWS
    • Google Cloud Platform
    • IBM Cloud
    • Microsoft Azure
    • OpenStack (private cloud)
    • Rackspace
    • VMware Cloud

How does Avi Networks Enable Multi-Cloud?

Avi is purpose-built for the cloud and mobile era using a unique analytics-driven, 100% software approach. Avi is the first platform to leverage the power of software-defined principles to achieve unprecedented agility, insights, and efficiency in application delivery.

For more information on multi-cloud see the following resources:

Microservices


Microservices Definition

Microservices is an architectural design for building a distributed application using containers. They get their name because each function of the application operates as an independent service. This architecture allows for each service to scale or update without disrupting other services in the application. A microservices framework creates a massively scalable and distributed system, which avoids the bottlenecks of a central database and improves business capabilities, such as enabling continuous delivery/deployment applications and modernizing the technology stack.

Diagram depicting a company using a microservices architecture for application services and application delivery across container and other environments. Diagram also shows this in comparison to legacy monolithic architectures.
FAQs

What Is Microservices Architecture?

Microservices architecture treats each function of an application as an independent service that can be altered, updated or taken down without affecting the rest of the application.

Applications were traditionally built as monolithic pieces of software. Adding new features required reconfiguring and updating everything from processes and communications to security within the application. Traditional monolithic applications have long lifecycles, are updated infrequently, and changes usually affect the entire application. This costly and cumbersome process delays advancements and updates in enterprise application development.

Microservices architecture was designed to solve this problem. All services are created individually and deployed separately from one another. This architectural style allows for scaling services based on specific business needs. Services can also be rapidly changed without affecting other parts of the application. Continuous delivery is one of the many advantages of microservices.

Microservices architecture has the following attributes:

• Application is broken into modular, loosely coupled components
• Application can be distributed across clouds and data centers
• Adding new features only requires those individual microservices to be updated
• Network services must be software-defined and run as a fabric for each microservice to connect to

When to Use Microservices?

Ultimately, any size company can benefit from the use of a microservices architecture if they have applications that need frequent updates, experience dynamic traffic patterns, or require near real-time communication.

Who Uses Microservices?

Social media companies like Facebook and Twitter, retailers like Amazon, media providers like Netflix, ride-sharing services like Uber and Lyft, and many of the world’s largest financial services companies all use microservices. The trend has seen enterprises moving from a monolithic architecture to microservices applications, setting new standards for container technology and proving the benefits of using this architectural design.

How Are Microservices Deployed?

Deployment of microservices requires the following:

• Ability to scale simultaneously among many applications, even when each service has different amounts of traffic
• Quickly building microservices which are independently deployable from others
• Failure in one microservice must not affect any of the other services

Docker is a standard way to deploy microservices using the following steps:

• Package the microservice as a container image
• Deploy each service instance as a container
• Scaling is done based on changing the number of container instances
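
The scaling step above amounts to changing a desired instance count per service. A toy reconciler, with hypothetical service names and no real Docker calls, might compute the actions needed to close the gap:

```python
# Toy reconciler for the deployment steps above: each microservice is
# packaged as a container image and scaled by changing its desired
# instance count. No real Docker calls are made; a real system would
# start or stop containers to carry out the returned actions.

def reconcile(desired, running):
    """Return the start/stop actions needed to match desired counts."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    return actions

desired = {"cart": 3, "payments": 2}
running = {"cart": 1, "payments": 3}
print(reconcile(desired, running))
# [('start', 'cart', 2), ('stop', 'payments', 1)]
```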

Using an orchestration system like Kubernetes with Docker in deployment allows a cluster of containers to be managed as a single system. It also lets enterprises run containers across multiple hosts while providing service discovery and replication control. Large scale deployments often rely on Kubernetes.

How Do Microservices Scale?

Microservice architectures allow organizations to divide applications into separate domains managed by individual groups. This separation of responsibilities is key for building highly scaled applications: it fosters independent work on individual services without impacting developers in other groups working on the same application. It is also one of the key reasons businesses turn to the public cloud to deliver microservices applications, since on-prem infrastructure is typically optimized for legacy monolithic applications, although that is not necessarily the case; a new generation of technology vendors has set out to provide solutions for both.

Does VMware NSX Advanced Load Balancer Provide Microservices?

VMware NSX Advanced Load Balancer provides a centrally orchestrated, container ingress with dynamic load balancing, service discovery, security, micro-segmentation for both north/south and east/west traffic, and analytics for container-based applications running in OpenShift and Kubernetes environments.

VMware NSX Advanced Load Balancer provides a container application networking platform with two major components:

Controller: A cluster of up to three nodes that provides the control, management, and analytics plane for microservices. The Controller communicates with a container management platform such as OpenShift or Kubernetes, deploys and manages Service Engines, configures services on all Service Engines, and aggregates telemetry data from them to form an application map.

Service Engines: A service proxy deployed on every Kubernetes node that provides the application services in the data plane and reports real-time telemetry data to the Controller.

For more on the actual implementation of load balancing, security applications, and web application firewalls, check out our Application Delivery How-To Videos.

Microsegmentation

<< Back to Technical Glossary

Microsegmentation Definition

Microsegmentation is a security technique that enables security policies to be assigned to data center applications, down to the workload level. One major benefit of microsegmentation is that it integrates security directly into a virtualized workload without requiring a hardware-based firewall. This means that security policies can be synchronized with a virtual network. Policies can be synchronized with a virtual machine (VM), operating system (OS), or other virtual security targets. As a result, security models may be deployed deep inside a data center, using a virtualized, software-only approach.

Forrester Research established the concept of the “Zero Trust model” of virtualized network security. This model is a replacement to the “trust, but verify” model which was previously common. “Zero Trust” implements methods to localize and isolate threats, of which microsegmentation is an important component. Under “Zero Trust,” rules and policies can be assigned to workloads, VMs, or network connections. This means that only necessary actions and connections are enabled in a workload or application, blocking anything else.

Under the microsegmentation network security approach, security architects logically divide the data center into distinct security segments. These segments are defined down to the individual workload level. Architects may then define security controls and deliver services for each unique segment. Because security policies are applied to separate workloads, microsegmentation software can significantly bolster a company’s resistance to attack or intrusion.

Diagram depicts the structure of microsegmentation in connecting application clients to application environments with layers in between.
Microsegmentation FAQs

What is Microsegmentation?

Microsegmentation is a security technique used in data centers and cloud network environments. This fine-grained approach integrates security directly into a virtualized workload, removing the need for a hardware firewall. Security policies for access control may be synchronized with a virtual network, and virtual machines (VMs), operating systems (OSes), or other virtual security targets can benefit from this policy approach. Through network segmentation, the attack surface and corresponding liability are minimized.

The “Zero Trust model” of virtualized network security was established by Forrester Research, as an alternative to “trust but verify.” This approach abolishes the idea of a trusted network inside a defined corporate perimeter for software defined networks and cloud deployments. “Zero Trust” mandates that enterprises create secure microperimeters of control around sensitive data assets. Encryption and authorized user identity are two common forms of microperimeters used in this model.

Under the microsegmentation network security approach, security architects divide the data center into distinct segments at the workload or application level. Architects may then define security controls and deliver services for each unique segment. Security policies applied to separate workloads through microsegmentation software significantly changes the attack surface.

This approach removes the need for multiple physical or hardware firewalls. It reduces the manual effort involved in configuring internal firewalls for east-west traffic control, as well as the effort required to maintain those configurations over time. Those configurations often add cost and complexity for the enterprise.

How Does Microsegmentation Work?

Network segmentation isn’t new. Enterprises have employed firewalls, virtual local area networks (VLAN) and access control lists (ACL) for their network segmentation security models.

Network segmentation breaks an Ethernet network into subnetworks (or subnets) which allow network traffic to be organized and contained. This approach boosts network performance and can introduce simple security in traditional static networks. The rise of software-defined networks and network virtualization has paved the way for microsegmentation.
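Traditional subnet-level segmentation can be illustrated with Python's standard `ipaddress` module: a host either falls inside a segment's subnet or it does not. The segment names and RFC 1918 addresses below are example values.

```python
import ipaddress

# Example segments: each subnet contains one class of traffic.
segments = {
    "web": ipaddress.ip_network("10.0.1.0/24"),
    "db":  ipaddress.ip_network("10.0.2.0/24"),
}

def segment_of(host: str):
    """Return the name of the segment whose subnet contains this host, if any."""
    addr = ipaddress.ip_address(host)
    for name, net in segments.items():
        if addr in net:
            return name
    return None
```

Note the limitation this illustrates: the policy boundary is the subnet, not the workload, which is exactly what microsegmentation refines.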

Software-defined networking (SDN) and software-defined data center (SDDC) technologies have changed the data center landscape. They have catalyzed the ability for the policies to be applied to individual workloads to further reduce the attack surface.

Network microsegmentation uses virtualization technology to create increasingly fine-grained secure zones in data centers and cloud deployments. These zones isolate each individual workload or application and secure it separately. The idea is to significantly reduce the surface exposed to malicious activity and to restrict unwanted lateral (east-west) traffic once a perimeter is penetrated.

Identity-driven microsegmentation and encrypted microsegmentation both restrict lateral access within networks, limiting permissions at the workload or packet level. Since policies are tied to logical segments, any migration of a workload or packet moves its security policies along with it. This eliminates manual configuration processes that can lead to security flaws.
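The persistence property can be sketched by keying policy on a workload's logical label rather than its IP address, so a migration that changes the address leaves enforcement unchanged. The labels and addresses below are invented for illustration.

```python
# Policy keyed by logical workload label, not network location.
policy = {("web", "db"): "allow"}

# Workload inventory: label -> current IP (changes when a workload migrates).
inventory = {"web": "10.0.1.5", "db": "10.0.2.9"}

def allowed(src_label: str, dst_label: str) -> bool:
    """Look up policy by label; the result is independent of where
    the workloads currently run."""
    return policy.get((src_label, dst_label)) == "allow"

before = allowed("web", "db")
inventory["db"] = "172.16.8.20"   # the workload migrates to a new host/subnet
after = allowed("web", "db")      # the policy follows the label, unchanged
```

An IP-keyed rule would have broken at the migration step; the label-keyed rule does not need to be reconfigured.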

Why is Microsegmentation Important?

Microsegmentation is built for today’s security environments — and tomorrow’s. Cyberthreats are prolific and continuously adapting — hackers’ creativity reveals no shortage of tactics to do damage to enterprises of all kinds. Many argue that “Trust but verify” is no longer a valid security model. Moat and castle strategies ignore threats that compromise assets inside the castle.

Traditional firewalls can remain in place to provide security perimeter (north-south) defenses, while microsegmentation restricts unwanted communication between workloads (east-west traffic). This zero-trust security model, which is on the rise, addresses network attacks in which attackers penetrate the perimeter and wait before deploying their ultimate disruption.

Many consider it necessary to assume your enterprise has already been compromised; you simply don’t know it yet. Those proponents argue that “Trust but verify” leaves enterprise leaders flatfooted and focused on crisis management instead of proactive network security. Zero Trust provides the proactive, architectural approach to align with mission priorities.

Microsegmentation does not eliminate the threat of cyberattacks, but reduces bad actors’ access to a small segment of the organization’s data. That can restrict the impact of a problematic incident.

For example, identity driven microsegmentation allows communities of interest to be provisioned with access quickly and efficiently. Administrators can then monitor and secure access control, to restrict permissions once the workload is complete. By enforcing segmentation with encryption at the workload or application level, security architects can hide sensitive packets.

If adversaries are able to infiltrate a microsegment, the damage would be contained to that small area. Intruders would be unable to move laterally and attack other segments. That interrupts the process by which an attack is escalated and could mean the difference between a manageable incident and an enterprise-wide catastrophe.

How to Implement Microsegmentation

Microsegmentation is widely regarded as a best-practice solution for securing data center and cloud assets, and as one of the first practices to implement in a “zero trust” security model. Microsegmentation policies dictate which applications can and cannot communicate with each other. Well-designed policies ensure that any unauthorized communication attempt is not only blocked but also triggers an alert that an intruder may be present.

Enterprises that successfully make the microsegmentation switch typically take a phased approach, starting with a few “quick wins” on priority projects and gradually building out a more robust program. The buy-in this approach generates creates momentum that carries policy roll-out across the whole enterprise.

Start by focusing on projects that are manageable, fairly easy to complete, and able to deliver tangible results. Common areas security architects prioritize are regulatory compliance and DevOps. Security architects can then create secure repositories for sensitive data types, such as protected health or medical data.

These examples represent organizational needs for which microsegmentation is well-suited. They highlight the stakeholders who work outside the IT security team. Their buy-in is often required for an effective network microsegmentation strategy to work. Many organizations begin by convening all stakeholders to identify priorities and establish an implementation hierarchy.

Enterprise leaders may look to security to reach out to stakeholders from all business and IT units. The security team may be expected to take responsibility to ensure all stakeholders understand how all the application and business pieces work together. Thoughtful planning and mapping out the security model in advance will save hours of trial and error in implementation.

Creating effective and practical microsegmentation requires a thoughtful approach to implementation. Be clear on the essential process-level visibility. Equip all stakeholders with the information they need to identify logical groupings of applications for segmentation. Establish platform-agnostic policies so each packet or workload may be fully secure in heterogeneous environments, where they may migrate.

Label assets clearly to ensure that they can be closely-monitored in dynamic and auto-scale environments. Develop customizable hierarchies which reflect the importance of empowering different stakeholders. Give them the needed tools to organize and create rules that meet their workload or application’s unique needs. Develop a platform to automate policies and segments. This means that, as infrastructure and workload demand scales, newly deployed workloads can be appropriately allocated.

Microsegmentation Implementation Phases

Microsegmentation implementation can generally be broken down into six phases:

  • Find and identify all the applications running in the data center. Ensure you understand the level and bandwidth of access control required.
  • Define which applications need to be able to communicate with each other.
  • Develop a hierarchy of logical groups for the creation of security policies. Use a careful definition to avoid creating too many discrete groupings or creating groups so broad that policies will lack precision.
  • Once the logical groupings are defined, policies can be created, tested and refined for each group.
  • Deploy policies across the workloads and applications prioritized for this implementation.
  • The solution should enable monitoring of every port and all east-west traffic for anomalies.
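Phases two through four above can be sketched in Python: group-level communication needs are expanded into workload-pair allow rules, with everything else implicitly denied. The group and workload names are invented for illustration.

```python
# Phase 3: a hierarchy of logical groups and their member workloads.
groups = {
    "web": ["web-1", "web-2"],
    "app": ["app-1"],
    "db":  ["db-1"],
}

# Phase 2: which groups need to communicate (web never talks to db directly).
needs = [("web", "app"), ("app", "db")]

def build_rules(groups: dict, needs: list) -> set:
    """Phase 4: expand group-level needs into workload-pair allow rules;
    any pair not generated here is implicitly denied."""
    rules = set()
    for src_group, dst_group in needs:
        for src in groups[src_group]:
            for dst in groups[dst_group]:
                rules.add((src, dst))
    return rules

rules = build_rules(groups, needs)
```

This also shows why group definition matters: too many discrete groups multiply the rule count, while overly broad groups permit pairs that should be denied.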

What are the Benefits and Challenges of Microsegmentation?

Traditional firewalls can remain in place to maintain familiar perimeter (north-south) defenses, while microsegmentation significantly limits unwanted communication between workloads (east-west) within the enterprise. Microsegmentation allows direct east-west communication between approved systems, eliminating the need to hairpin traffic; without hairpinning, network architects can maintain a simpler and higher-performing design.

Microsegmentation gives companies greater control over the east-west traffic, or lateral communication, that occurs between servers and increasingly bypasses perimeter-focused security tools. Security architects bulkhead sensitive areas of the network away from less-valuable and less-hardened areas, leaning on segmentation to thwart attackers from moving laterally and escalating privileges across networks. Dark Reading estimates that 75% to 80% of enterprise traffic flows east-west, or server-to-server, between applications in today’s hybrid cloud world. Segmentation rules applied to the workload or application reduce the risk of an attacker moving from one compromised device to another.

Network virtualization and microsegmentation have the potential to boost network security because of the notion of persistence. In a physical network environment, networks are tied to specific hardware boxes. Security for this model is often implemented by a hardware-based firewall, which gates access by IP addresses or other security policies. If the physical environment changes, these policies become ineffective.

In a virtual environment, architects can create secure policies assigned to virtual connections. Those policies and connections can move with an application if the network is reconfigured – making the security policy persistent. Many users like the fact that software-defined networking (SDN) supports moving workloads around a network quickly. In SDN microsegmentation the security policy gets assigned to the workload level. This means that the access control can persist no matter where the workload is moved.

Operational efficiency presents both a benefit and a challenge for the enterprise implementing microsegmentation. The traditional hardware or “trust but verify” approach demands a number of control tools which can get unwieldy. Things like access control lists, routing rules and firewall policies which can introduce a lot of management overhead.

These policies can be difficult to scale in rapidly changing environments. Microsegmentation is typically done in software, which makes it easier to define fine grained segments. On a software defined network, IT can work to centralize network microsegmentation policy and reduce the number of firewall rules needed.

On the other hand, consolidating firewall rules and access control lists and translating them into a new policy framework can be challenging. One important starting point is mapping the connections between workloads, applications, and environments to establish the proper policies. This requires institutional resources and buy-in.

Complexity and consistency are important to consider, and may present unexpected challenges for administrators implementing microsegmentation. Microsegmentation essentially distributes security policies and rules to workloads.

Those policies and rules must follow consistent guidelines, and account for all workloads — even idle or powered-down virtual machines. Without guidelines or best practices, it’s possible for policies to shift between workloads or locations. Without a policy that closely monitors for complexity, an idle workload may come online in lockdown without the ability to communicate properly.

What types of Technology Insight for Microsegmentation are Required to Deploy Properly?

Research firm Gartner says microsegmentation is the future of modern data center and cloud security, but underscores the importance of getting the supporting technology right. Choosing the wrong one can be analogous to building the wrong foundation for a building and trying to adapt afterward. In addition, if not conducted properly, microsegmentation can create new problems, such as oversegmentation.

Gartner defines four different architectural models for microsegmentation. Native microsegmentation uses the inherent or included capabilities offered within the virtualization platform, IaaS, operating system, hypervisor, or infrastructure. The third-party model is based primarily upon virtual firewalls offered by third-party firewall vendors. The overlay model uses some form of agent or software within each host, rather than moderating communications the way that firewalls do. The hybrid model provides a combination of native and third-party controls.

In order to deploy this properly, stakeholders need to understand their network. Factors include what types of architecture will best suit their needs. They also need to know how that architectural model will connect with the network dependencies they must manage. Each stage of microsegmentation deployment requires an understanding of the enterprise’s network and dependencies within that network. Selecting the vendor, deploying and testing the policies requires an understanding of the architectural model, access requirements, and how they fit together.

What are some Microsegmentation Vendors?

When comparing products for microsegmentation in virtualized data centers, there are a number of different vendors to consider. Research firm Gartner defines four different architectural models for microsegmentation. The vendors in the microsegmentation market are grouped by their offerings along these architectural models.

Native microsegmentation uses the inherent or included capabilities offered within the virtualization platform, IaaS, operating system, hypervisor, or infrastructure; vendors include AWS, Microsoft, and VMware. The third-party model is based primarily upon virtual firewalls offered by third-party firewall vendors, including those well known for firewall solutions such as Cisco, Check Point, Fortinet, Juniper, Palo Alto Networks, SonicWall, Sophos, and Huawei.

The overlay model for microsegmentation uses some form of agent or software within each host, rather than moderating communications the way that firewalls do. Vendors include CloudPassage, Drawbridge Networks, Guardicore, Illumio, Juniper, ShieldX, vArmour, and Unisys. The hybrid model of microsegmentation provides a combination of native and third-party controls.

Does VMware NSX Advanced Load Balancer, now Part of VMware, offer Solutions for Microsegmentation?

VMware NSX Advanced Load Balancer integrates into NSX with enhanced load balancing and WAF capabilities that work across multiple clouds and environments. NSX (via NCP) can apply microsegmentation to container pods with predefined tag-based rules, and provides container clusters with full network traceability and visibility.

For Kubernetes microsegmentation, NSX (via NCP) can apply Kubernetes network policy per namespace. NSX has built-in operational tools for Kubernetes. Predefined tag rules enable you to define firewall policies in advance of deployment. These tag rules are based on business logic rather than using less efficient methods such as static IP addresses to craft security policy. With this method, security groups defined in NSX are microsegmented to protect sensitive applications and data down to the pod and container level.
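The general idea of tag-based rules, as opposed to static IP addresses, can be sketched as follows. This is a toy Python illustration of the concept, not NSX's actual policy model or API; the pod names, tags, and rule are invented.

```python
# Each pod carries business-logic tags rather than relying on its IP address.
pods = {
    "pod-a": {"tier": "web", "env": "prod"},
    "pod-b": {"tier": "db",  "env": "prod"},
    "pod-c": {"tier": "db",  "env": "dev"},
}

# A rule matches on tags, so it applies to any pod with those tags,
# wherever it is scheduled and whatever IP address it receives.
rule = {"src": {"tier": "web"}, "dst": {"tier": "db", "env": "prod"}}

def matches(tags: dict, selector: dict) -> bool:
    """True if every key/value in the selector is present in the pod's tags."""
    return all(tags.get(key) == value for key, value in selector.items())

def rule_allows(src_pod: str, dst_pod: str) -> bool:
    """Evaluate the tag-based rule for a source/destination pod pair."""
    return matches(pods[src_pod], rule["src"]) and matches(pods[dst_pod], rule["dst"])
```

Because the rule is written against business logic (tier, environment) before deployment, newly scheduled pods are covered as soon as they carry the right tags.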

For more on the actual implementation of load balancing, security applications, and web application firewalls, check out our Application Delivery How-To Videos.