<< Back to Technical Glossary

Istio Definition

Istio is an open source service mesh solution organizations use to run microservices-based, distributed applications anywhere. Istio aggregates telemetry data, enforces access policies, and manages traffic flows—without changing application code.

Istio networking eases deployment complexity by transparently layering onto distributed applications to speed modernization. The Istio platform enables organizations to readily connect, secure, control, and monitor microservices architecture.

Image shows an Istio mesh of ingress and mesh traffic moving through the Service A and Service B proxies, while the control plane performs policy checks.

Istio FAQs

What is Istio?

What is Istio service mesh? Istio is an open source service mesh that forms a transparent layer atop distributed applications. Istio is a complete platform, including APIs, that is capable of integrating with most policy, telemetry, or logging systems.

Service mesh refers to the network of microservices that includes both applications and interactions between them. As a service mesh increases in complexity and size, it can become more challenging to manage and even comprehend.

Requirements for service meshes can include discovery, failure recovery, load balancing, metrics, and monitoring. More complex operational requirements might include A/B testing, access control, canary rollouts, end-to-end authentication, and rate limiting. Istio service mesh solutions are part of a language-independent, transparent, modernized service networking layer that provides these functions.

What is Istio in Kubernetes—and is it different from any other Istio deployment? The key to understanding Istio Kubernetes compatibility and the Istio architecture is to understand how Envoy and Kubernetes function together. It’s not a question of Istio vs Kubernetes—they often work in tandem to ensure a containerized, microservices-based environment operates smoothly.

For example, service mesh solutions such as Istio are made up of both a data plane and a control plane. An extended version of Envoy manages all inbound and outbound traffic and serves as the data plane for the Istio service mesh.

In contrast, Kubernetes is an open source platform that automates and orchestrates many of the manual processes involved in scaling and deploying containerized applications. Although Istio is platform agnostic, developers often use Istio and Kubernetes together.

In this way, Kubernetes, Envoy, and Istio are all related tools organizations use to manage distributed systems.

Istio Features

Here are a few of the Istio advantages and popular features most users cite:

Visibility. Straightforward rules configuration and traffic management simplify configuration of service-level properties such as timeouts, retries, and circuit breakers. Istio also makes critical tasks such as canary rollouts, A/B testing, and staged rollouts easier. Improved visibility into traffic and out-of-the-box failure recovery and fault tolerance features allow for more reliable, robust networks under various conditions.

Security. Istio security capabilities empower developers to work on security at the application level and consistently enforce policies by default across diverse runtimes and protocols with few or no changes to the application. Istio provides the secure, foundational communication channel, and manages authorization, authentication, and encryption of service to service communication at scale.

Observability. Istio offers monitoring, tracing, and logging capabilities, with customizable dashboards that offer insight into how service performance affects other processes. In older releases, a separate Mixer component collected telemetry data from the Envoy proxies and provided policy controls; Mixer was removed in Istio 1.5, and telemetry and policy are now handled by the proxies and control plane directly, offering Istio operators granular control over the mesh, the infrastructure backends, and their interactions.

Istio Architecture. The Istio architecture consists of three components:

  • The open source Envoy Proxy that manages security and connections. Envoy proxies are typically deployed to support the microservices applications within Kubernetes clusters as sidecars;
  • The Istio data plane, which consists of all Envoy proxies running beside cluster applications;
  • The Istio control plane, which manages the Envoy proxies in the data plane.


Istio Load Balancer. Istio uses a round-robin load balancing policy by default; random and least-connection policies are also supported.
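As a sketch, the default policy can be overridden per destination with a DestinationRule; the `reviews` service name below is a hypothetical example, and `LEAST_CONN` replaces the round-robin default:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination   # hypothetical name
spec:
  host: reviews               # hypothetical in-mesh service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN      # overrides the ROUND_ROBIN default
```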

Istio Fault Injection. To improve resiliency and traffic management, users can inject faults such as delays or aborts into traffic and test how their applications recover.
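For example, a VirtualService can inject a fixed delay into a fraction of requests; the `ratings` service name below is a hypothetical example:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-fault     # hypothetical name
spec:
  hosts:
  - ratings               # hypothetical in-mesh service
  http:
  - fault:
      delay:
        percentage:
          value: 10.0     # delay 10% of requests
        fixedDelay: 5s    # by five seconds each
    route:
    - destination:
        host: ratings
```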

Istio Autoscaling. Istio components running in Kubernetes can be autoscaled using standard Kubernetes horizontal pod autoscaling policies.

Istio Service Entry. A service entry adds an entry to Istio’s internal service registry, describing service properties such as protocols, ports, DNS name, and VIPs.
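A minimal ServiceEntry sketch, adding a hypothetical external HTTPS API (`api.example.com`) to Istio’s service registry so mesh services can reach it:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api        # hypothetical name
spec:
  hosts:
  - api.example.com         # hypothetical external host
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL   # the service lives outside the mesh
```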

Envoy Proxy. To extend Istio performance and capabilities, the system uses Envoy proxy and many of its built-in features, including: TLS termination, HTTP/2 and gRPC proxies, staged rollouts, rich metrics, and more. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. Envoy health checks take the place of a specific Istio health check, allowing users to automatically perform active checks of all cluster services based on health check and discovery data. Note that the sidecar proxy does add a small amount of latency to each request.

Pilot. Pilot converts high-level routing rules that control traffic behavior into Envoy-specific configurations. Istio does not implement its own service discovery; instead, Pilot abstracts the platform’s discovery mechanism (such as Kubernetes) and exposes it to the Envoy sidecars, along with resiliency features (retries, timeouts, circuit breakers) and traffic management for routing (canary rollouts, A/B tests). In current Istio releases, Pilot’s functionality is bundled with other control-plane components into the single istiod binary.

Istio Benefits

The key benefits of Istio are:

Security. Istio security best practices allow users to create secure networks of distributed services with service-to-service authentication, load balancing, monitoring, and other options, without changing service code. Istio can also be deployed behind a web application firewall (WAF).

Support. Whether deployed behind a WAF or in the standard sidecar proxy configuration, Istio offers service support throughout the environment using its control plane functionality:

  • Automatic load balancing for WebSocket, gRPC, HTTP, and TCP traffic
  • Traffic control with routing rules, failovers, retries, and fault injection
  • A configuration API supporting rate limits, access controls, and quotas
  • Automatic metrics, logs, and traces including cluster egress and ingress
  • Secure service-to-service in-cluster communication


Safe, secure, reliable communications. The Istio service mesh is more consistent and efficient, allowing users to avoid directly implementing desired security and connectivity behavior in each application.

Communications abstracted away from the application layer. Istio abstracts the control plane and data plane away from the applications and physical infrastructure. This makes securing, managing, and observing distributed applications simpler.

Traffic management. Istio enables traffic splitting in support of blue/green deployments, A/B testing, and canary deployments of applications.
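A canary split can be sketched with weighted routes in a VirtualService; the `my-service` host and the `v1`/`v2` subsets (which would be defined in a companion DestinationRule) are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-canary   # hypothetical name
spec:
  hosts:
  - my-service              # hypothetical in-mesh service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1          # stable version keeps 90% of traffic
      weight: 90
    - destination:
        host: my-service
        subset: v2          # canary version receives 10%
      weight: 10
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) completes the canary rollout without redeploying the application.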

API gateway vs Istio gateways. Istio ingress gateways are cloud- and Kubernetes-native, unlike most standard API gateway options. With Istio, ingress control is part of load balancing and service deployment.

Portability. Using the Istio system, a single service can operate in multiple environments or clouds for redundancy.


How to Use Istio

Because it is an open source platform, Istio can be licensed from a commercial provider, but that isn’t necessary. It can also be downloaded from community-led repositories on GitHub.

Once the platform is sourced, the user deploys the Istio control plane in Kubernetes clusters. Envoy Proxy allows users to configure Istio and operate the data plane, enforcing both North-South and East-West traffic policies.

Istio operators deploy and install Istio in Kubernetes using either Helm charts or YAML files. The Istio control plane is used to set policies, manage configurations, and perform updates.

A virtual service in Istio defines traffic routing rules and applies them. A routing rule defines matching traffic criteria for each specific protocol. The Istio virtual service responds with matched traffic to the destination service or subset defined by the registry.
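A hedged sketch of such matching rules, routing requests that carry a hypothetical `end-user: jason` header to subset `v2` of a hypothetical `reviews` service, and all other traffic to `v1`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route   # hypothetical name
spec:
  hosts:
  - reviews             # hypothetical in-mesh service
  http:
  - match:              # matching criteria evaluated first
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:              # default route for unmatched traffic
    - destination:
        host: reviews
        subset: v1
```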

Istio Alternatives

There are various alternatives to Istio or, specifically, tools that enhance or replace it. These include NGINX, HashiCorp Consul, HAProxy, and of course Avi Networks and VMware.

Istio vs NGINX. NGINX is web server software that functions as a load balancer, Hypertext Transfer Protocol (HTTP) cache, reverse proxy, and mail proxy. NGINX can run as an ingress controller, but there is also an Istio ingress controller capability.

Consul vs Istio. HashiCorp Consul is a service networking solution for managing secure network connectivity between services and across multi-cloud and on-prem environments. Consul offers traffic management, service mesh, service discovery, and automated network infrastructure updates to devices.

HAProxy vs Istio. HAProxy is another ingress controller but it cannot run with Envoy, unlike Istio or Consul. HAProxy is also not suited for serving static files or running dynamic apps.

Envoy vs Istio. Although “what is the difference between Istio and Envoy?” is a common question, Istio vs Envoy is a false distinction: Istio was designed to use Envoy as its data plane. In fact, this relates back to the discussion of the difference between Istio and Kubernetes.

Read on to learn about how Istio, VMware, and Avi Networks compare and why Avi Networks/VMware together with Tanzu offer the best alternative to Istio.

IP Spoofing


IP Spoofing Definition

Spoofing is a type of cyberattack in which the attacker uses a device or network to trick other computer networks into believing they are a legitimate entity, in order to take over devices as zombies for malicious use, gain access to sensitive data, or launch Denial-of-Service (DoS) attacks. IP spoofing is the most common type of spoofing.

Sometimes called Internet Protocol (IP) spoofing or IP address spoofing, IP spoofing refers to impersonating another computer system by creating IP packets with false source IP addresses. IP spoofing detection can often be difficult. This is because IP spoofing allows cybercriminals to engage in malicious activity such as infecting a device with malware, stealing data, or crashing a server, without detection.

Attackers often engage in IP spoofing to target devices with man-in-the-middle attacks and distributed denial of service (DDoS) attacks, as well as their surrounding infrastructures. The goal of DoS attacks and IP spoofing attacks is to flood a target with traffic and overwhelm it, while preventing mitigation efforts by hiding the identity of the attack source.

Attackers spoofing an IP address can:

  • Prevent security teams and authorities from identifying them and tying them to the attack;
  • Stop targeted devices from alerting users to attacks, making them unwitting participants; and
  • Avoid security devices, scripts, and services that blocklist IP addresses that are malicious traffic sources.

Image depicts attacker computer using IP spoofing to attack victim.

IP Spoofing FAQs

What is IP Spoofing?

What is IP spoofing and how can it be prevented? When users transmit data over the internet, it is first broken into multiple units called packets. The packets travel independently and at the end, the receiving system reassembles them. Packets contain IP headers with routing information including the source IP address and the destination IP address. The packet is similar to a package in transit with its return address represented by the source IP address.

In IP address spoofing, a hacker modifies the source address in the packet header with basic IP spoofing tools so the receiving system thinks the packet is from a trusted source, such as a device on a legitimate enterprise network, and accepts it. There is no trace of tampering, because IP spoofing works at the network level.
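As an illustration of how little the network layer verifies, the sketch below builds a raw IPv4 header with a forged source address using only the Python standard library. It only constructs the header and its checksum; actually transmitting such a packet would require a privileged raw socket, and the addresses used are documentation-range examples, not real hosts.

```python
import struct
import socket

def ipv4_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) + header[i + 1]
    while total > 0xFFFF:                  # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(spoofed_src: str, dst: str) -> bytes:
    """Build a 20-byte IPv4 header whose source address field is forged."""
    version_ihl = (4 << 4) | 5             # IPv4, 5 x 32-bit words (20 bytes)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20,                # version/IHL, TOS, total length
        0, 0,                              # identification, flags/fragment
        64, socket.IPPROTO_TCP, 0,         # TTL, protocol, checksum placeholder
        socket.inet_aton(spoofed_src),     # forged source address: nothing
        socket.inet_aton(dst),             # at this layer validates it
    )
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:]

header = build_ipv4_header("203.0.113.7", "198.51.100.9")
print(len(header), hex(ipv4_checksum(header)))  # a valid header re-checksums to 0
```

Nothing in the header itself proves the source field is genuine, which is why detection has to happen elsewhere, for example via the packet filtering described below.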

To engage in IP spoofing, hackers need only a trusted IP address and the ability to intercept packets and replace authentic IP headers with fraudulent versions. Traditional “castle and moat” network defense structures are highly vulnerable to IP spoofing and other attacks that prey on trusted relationships.

Although identity theft, online fraud, and cybercriminals attacking corporate servers and websites are the most common examples of IP spoofing, it also has legitimate applications. For example, before a website goes live, organizations may use IP spoofing tests to ensure the site can handle volume without being overwhelmed. This kind of IP spoofing is not illegal.

Types of IP Spoofing

Among the most common IP spoofing techniques are:

Distributed Denial of Service (DDoS) attacks

In a DDoS attack, hackers overwhelm computer servers with packets of data using spoofed IP addresses. This enables them to hide their identity while slowing down or crashing a network or site with massive amounts of traffic.

Masking botnet devices

A botnet is a network of devices controlled by a hacker from a single location using IP spoofing software. Cybercriminals obtain access to computers by IP spoofing and masking botnets. Each bot in the network has a spoofed IP address, so IP spoofing allows the attacker to mask the botnet without being traced, maximizing their rewards by prolonging the duration of an attack.

Man-in-the-middle attacks

A ‘man-in-the-middle’ attack is another malicious IP spoofing technique. This method interrupts two devices as they communicate, alters the packets, and transmits them without either sender or receiver knowing. By spoofing an IP address and accessing personal accounts, attackers can direct users to fake websites, steal information, and more, making man-in-the-middle attacks highly lucrative.

MAC spoofing vs IP spoofing

MAC spoofing attacks take place when malicious clients use MAC addresses that do not belong to them to generate traffic. The goal is the ability to gain access or get past access control based on MAC information.

IP spoofing attacks are similar to MAC spoofing attacks, but use a spoofed IP address instead. The goal is to harm both the initial target and innocent bystanders by prompting the target destination to reply to as many spoofed source IP addresses as it can. The attacker never sees these replies, since the spoofed source addresses belong to other hosts.

IP spoofing vs VPN

A VPN is itself a kind of IP spoofing service. It encrypts the user’s internet connection to protect the sensitive data being sent and received. Although the traditional use case for a VPN is to protect users from anyone who wants to spy on their IP addresses, a VPN can also be used to spoof location.

What is AWS IP spoofing protection?

Amazon EC2 instances are protected by host-based, AWS-controlled firewall infrastructure that will not allow them to send spoofed network traffic with a source MAC or IP address other than their own.

Why is IP Spoofing Important?

IP spoofing is important to prepare for principally because it is difficult to detect. Victims often learn of it too late, as it happens before malicious actors initiate communication with them or attempt to access the target network.

Some of the main reasons IP spoofing can be difficult to detect and prevent include:

Easier to conceal than phishing. A successful spoof merely redirects the transaction or communication to a spoofed IP address and away from a legitimate receiver—there are no signs as there are with phishing.

Evades security. IP spoofing can bypass perimeter security and firewalls to disrupt the system and flood the network.

Extended concealment. A spoofed IP enables hackers to gain access as trusted users and hide inside vulnerable systems for extended periods of time.

More remote connections. The shift to remote work ensures that greater numbers of devices and users are connecting to the network, increasing the risk of IP spoofing greatly.

How to Detect IP Spoofing

How is IP address spoofing detected? For end users, detecting IP spoofing is difficult. There are no external signs of tampering because these attacks are carried out on Layer 3 or the network layer of the Open Systems Interconnection communications model. This allows spoofed connection requests to externally resemble legitimate connection requests.

However, organizations can perform traffic analysis with network monitoring tools at network endpoints. Packet filtering systems, frequently contained in firewalls and routers, are the primary way to do this. Packet filtering systems detect fraudulent packets and refer to access control lists (ACLs) to detect inconsistencies between the desired IP addresses on the list and the packet’s IP address.

Packet filtering includes two types: ingress and egress filtering.

  • Ingress filtering examines the source IP headers of incoming packets to confirm they match a permitted source address; the system rejects packets that don’t match or that exhibit other suspicious behavior. This filtering process relies on an ACL of source IP addresses the system permits.
  • Egress filtering, intended to prevent IP spoofing attacks launched by insiders, scans outgoing packets for source IP addresses that don’t match the company network.
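The two filtering rules above can be sketched in a few lines of Python; the ACL networks and the 10.0.0.0/8 company network are hypothetical examples:

```python
import ipaddress

# Hypothetical ACL of permitted source networks for ingress filtering.
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def permit_ingress(src_ip: str) -> bool:
    """Accept an incoming packet only if its source address matches the ACL."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

def permit_egress(src_ip: str, company_net: str = "10.0.0.0/8") -> bool:
    """Drop outgoing packets whose source is not inside the company network."""
    return ipaddress.ip_address(src_ip) in ipaddress.ip_network(company_net)

print(permit_ingress("203.0.113.50"))  # True: source is on the allow list
print(permit_ingress("192.0.2.1"))     # False: likely spoofed
print(permit_egress("10.1.2.3"))       # True: legitimate internal source
print(permit_egress("203.0.113.50"))   # False: an insider spoofing an outside address
```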

How to Prevent IP Spoofing

IP spoofing attacks are difficult to spot because they are designed to conceal the attacker’s identity. Server-side teams have the task of doing what they can to prevent IP spoofing. IP spoofing protection for IT specialists includes:

Monitoring. Monitor networks for unusual activity.
Packet filtering. Detect inconsistencies with packet filtering—for example, outgoing packets with source IP addresses that don’t match the network.
Verification. Deploy robust verification methods, including on the network.
Authentication. Authenticate all IP addresses with a network attack blocker.
Firewalls and IP spoofing. Place some or all computing resources behind a firewall that inspects and validates source IP addresses.

IP spoofing protection for end users is more hit and miss, because technically speaking, end-users can’t prevent IP spoofing. However, end users can minimize risk by engaging in best practices for cyber hygiene that ensure optimal online security:

  • Use strong authentication and verification methods for all remote access. Do not authenticate users or devices based solely on IP address.
  • Ensure secure system passwords and change default usernames and passwords to strong versions that contain at least 12 characters and a mix of numbers, upper- and lower-case letters, and symbols.
  • Be cautious on public Wi-Fi networks and avoid sharing sensitive information or conducting banking, shopping, or other financial transactions over unsecured public Wi-Fi. Use a VPN to stay safer if you do need to use public hotspots.
  • Use antivirus software and other security software that monitors suspicious network activity.
  • Use encryption protocols to protect all traffic to and from the server.
  • Visit HTTPS sites that encrypt data with an up-to-date SSL certificate, so users are less vulnerable to attacks. Sites with URLs that start with HTTP instead of HTTPS are not secure; look for the padlock icon in the URL address bar.
  • Update and patch network software.
  • Watch for phishing attempts and use comprehensive antivirus protection to guard against viruses, hackers, malware, and other online threats. It’s also essential to keep your software up-to-date to ensure it has the latest security features.
  • Perform ongoing network monitoring.

Other Types of Network Spoofing

There are various types of spoofing; some of them happen on IP-based networks, but most do not change the IP addresses of packets, so they are not IP address spoofing. Some other types of spoofing that still involve IP addresses include:

An address resolution protocol (ARP) spoofing attack occurs when an attacker sends false ARP messages rather than spoofed packets. In this case, the attack happens over a local area network (LAN) at the data link layer and links the media access control (MAC) address of the attacker to the legitimate IP address of a server or computer on the network.

In a domain name system (DNS) spoofing attack, the attacker alters DNS records rather than packets to divert internet traffic toward fake servers and away from legitimate sites.

Other types of spoofing may not affect IP addresses at all, or at least not directly:

Caller ID spoofing alters a caller ID display to make a phone call appear to originate from a different location.

Email spoofing alters email header fields to show a different sender and is often used in phishing attacks.

Global positioning system (GPS) spoofing allows the user of a device to trick it into displaying a different location using navigation information from a third-party application.

Short Message Service (SMS) or text message spoofing allows senders to obscure their real phone numbers. Legitimate organizations may use this method to replace difficult-to-remember phone numbers with alphanumeric IDs, but attackers may also use this technology to include malware downloads or links to phishing sites in texts.

URL spoofing uses URLs that are nearly identical to real ones to lure targets to enter sensitive information.

Examples of IP Spoofing

Attackers use spoofed IP addresses to launch DDoS attacks and overwhelm computer servers with massive packet volumes. Large botnets containing tens of thousands of computers are often used to send geographically dispersed packets, and each can spoof multiple source IP addresses simultaneously. This makes for automated attacks that are difficult to trace.

Examples of DDoS IP spoofing, man-in-the-middle attacks, and botnets include the following:

GitHub. In 2018, attackers spoofed the GitHub code hosting platform’s IP address in what was believed at the time to be the largest DDoS attack ever. Attackers sent queries to memcached servers that speed up database-driven sites, and those servers amplified the data returned from the requests by a factor of tens of thousands, causing an outage.

In 2015, Europol cracked down on a continent-wide man-in-the-middle attack. The hackers used IP spoofing to intercept payment requests between customers and businesses and accessed organizations’ corporate email accounts. They ultimately tricked customers into sending money to the attackers’ bank accounts.

In 2011, a botnet called GameOver Zeus began infecting computers worldwide (eventually around 1 million) with malware designed to steal banking credentials. It helped its operators steal over $100 million, and it took a massive investigation and three years to shut down, in 2014.

In 1994, hacker Kevin Mitnick launched an IP spoofing attack against the computer of rival hacker Tsutomu Shimomura and flooded it with SYN requests from routable but inactive spoofed IP addresses. The computer’s memory filled with half-open connections it could not complete—a technique called SYN flooding.

IP spoofing may also be used to test websites before or while they go live, and to test how systems respond to various attacks and security threats.

Does the VMware NSX Advanced Load Balancer Protect Against IP Spoofing?

For most applications, Vantage is the last line of defense, directly exposed to untrusted public networks. Vantage Service Engines (SEs) protect application traffic by detecting and mitigating a wide range of Layer 4-7 network attacks including various common denial of service (DoS) attacks and distributed DoS (DDoS) attacks.

Learn more here.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Ingress Load Balancer for Kubernetes


Ingress Load Balancer Kubernetes Definition

Within Kubernetes or K8s, a collection of routing rules that control how Kubernetes cluster services are accessed by external users is called ingress. Managing ingress in Kubernetes can take one of several approaches.

An application can be exposed to external users via a Kubernetes ingress resource; a Kubernetes NodePort service which exposes the application on a port across each node; or using an ingress load balancer for Kubernetes that points to a service in your cluster.

An external load balancer routes external traffic to a Kubernetes service in your cluster and is associated with a specific IP address. Its precise implementation is controlled by which service types the cloud provider supports. Kubernetes deployments on bare metal may require custom load balancer implementations.
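A minimal sketch of such a service, assuming a hypothetical `my-app` deployment; on a supported cloud provider, applying this manifest provisions an external load balancer with its own IP address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app         # hypothetical name
spec:
  type: LoadBalancer   # asks the cloud provider for an external LB
  selector:
    app: my-app        # pods labeled app=my-app receive the traffic
  ports:
  - port: 80           # port exposed on the load balancer
    targetPort: 8080   # port the application container listens on
    protocol: TCP
```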

However, properly supported ingress load balancing for Kubernetes is the simplest, most secure way to route traffic.


This image depicts an ingress load balancer for kubernetes designed to connect the end users to the kubernetes clusters.

Ingress Load Balancer Kubernetes FAQs

What is Ingress Load Balancing for Kubernetes?

Kubernetes ingress is an API object that manages external access to Kubernetes cluster services, typically HTTP and HTTPS requests. An ingress object may provide multiple services, including SSL termination, load balancing, and name-based virtual hosting.

Although a load balancer for Kubernetes routes traffic, such as http traffic and https traffic, routing is only the most obvious example of ingress load balancing. There are additional important ingress requirements for Kubernetes services. For example:

  • authentication
  • content-based routing, such as routing based on http request headers, http method, or other specific requests
  • support for multiple protocols
  • resilience features, such as timeouts and rate limiting


Support for some or all of these capabilities is important for all but the simplest cloud applications. Critically, it may be necessary to manage many of these requirements at the service level—inside Kubernetes.

A load balancer, network load balancer, or load balancer service is the standard method for exposing a service to the internet. All traffic forwards to the service at a single Kubernetes ingress load balancer IP address.

Ingress is a complex yet powerful way to expose services, and there are many types of Ingress controllers, such as Contour, Google Cloud Load Balancer, Istio, and Nginx, as well as plugins for Ingress controllers. Ingress is most useful for exposing multiple services using the same L7 protocol (typically HTTP) under the same external IP address.

Kubernetes Ingress vs Load Balancer

The way that Kubernetes ingress and Kubernetes load balancer service interact can be confusing. However, the function of the K8s service is the most important aspect of the Kubernetes load balancer strategy.

A Kubernetes application load balancer is a type of service, while Kubernetes ingress is a collection of rules rather than a service. Kubernetes ingress sits in front of multiple services and acts as the entry point for an entire cluster of pods, allowing multiple services to be exposed under a single IP address. Those services then all use the same L7 protocol (such as HTTP).

A Kubernetes cluster’s ingress controller monitors ingress resources and updates the server-side configuration according to the ingress rules. The default ingress controller will spin up an HTTP(S) load balancer, which lets you do both path-based and subdomain-based routing to backend services.
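A sketch of path-based routing with a standard Ingress resource; the host and the `api-service`/`web-service` backends are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: app.example.com      # hypothetical host
    http:
      paths:
      - path: /api             # requests to /api/* go to the API backend
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /                # everything else goes to the web frontend
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Both backends share one external IP address, which is the key difference from giving each service its own LoadBalancer.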

A Kubernetes load balancer is the default way to expose a service to the internet. On GKE for example, this will spin up a network load balancer that will give you a single IP address that will forward all traffic to your service. The Kubernetes load balancer service operates at the L4 level, meaning it will direct all the traffic to the service and support a range of traffic types, including TCP, UDP and gRPC. This is different from ingress which operates at the L7 level. Each service exposed by a Kubernetes load balancer will be assigned its own IP address (rather than sharing them like ingress) and require a dedicated load balancer.

A Kubernetes network load balancer points to load balancers that reside external to the cluster—also called Kubernetes external load balancers. Providers such as AWS and Google have native capability for working with externally routable pods.

Different Kubernetes providers (such as Amazon EKS, GKE, or bare metal) support different features of both ingress and Kubernetes load balancing resources. This means files and configurations may not port between controllers and platforms.

In contrast, with ingress users define a set of rules for the controller to use—and these ingress rules may also be followed by a load balancer service. Ingress rules will not function unless they are mapped to an ingress controller for processing. The NGINX and Amazon ALB controllers are among the most widely-used ingress controllers.

Does the VMware NSX Advanced Load Balancer Offer Kubernetes Ingress Services?

Yes. The VMware NSX Advanced Load Balancer provides container ingress, on-demand application scaling, L4-L7 load balancing, a web application firewall (WAF), real-time application analytics, management of Kubernetes objects, and global server load balancing (GSLB)—all from a single platform. Find out more about the operational simplicity, observability, and cloud-native automation VMware NSX Advanced Load Balancer’s integrated solution for Kubernetes Ingress Services delivers here.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos or watch the Kubernetes Ingress and Load Balancer How To Video here:

Infrastructure as a Service (IaaS)


Infrastructure as a Service Definition

Infrastructure as a Service (IaaS) hosts infrastructure on the public cloud and private cloud instead of in a traditional on-premises data center. The infrastructure is delivered to customers on demand while being fully managed by the service provider.

Diagram depicts infrastructure as a service (IaaS) where service providers host physical infrastructure - such as servers and networks - in multi-cloud environments and provide those services to end users on behalf of the service users (business).

What is Infrastructure as a Service?

Infrastructure as a Service (IaaS) is a cloud computing service where enterprises rent or lease servers for compute and storage in the cloud. Users can run any operating system or applications on the rented servers without the maintenance and operating costs of those servers. Other advantages of Infrastructure as a Service include giving customers access to servers in geographic locations close to their end users. IaaS automatically scales, both up and down, depending on demand and provides guaranteed service-level agreement (SLA) both in terms of uptime and performance. It eliminates the need to manually provision and manage physical servers in data centers.

What are the Benefits of Infrastructure-as-a-Service?

Infrastructure as a Service (IaaS) can be more efficient for an enterprise than owning and managing its own infrastructure. New applications can be tested with an IaaS provider instead of acquiring the infrastructure for the test.

Other advantages of infrastructure-as-a-service include:

  • Continuity and disaster recovery – Cloud service in different locations allows access to applications and data during a disaster or outage.
  • Faster scaling – Quickly scale up and down resources according to application demand in all categories of cloud computing.
  • Core focus – IaaS allows enterprises to focus more on core business activities instead of IT infrastructure and computing resources.

How to Implement Infrastructure as a Service?

The implementation can be in a public, private, or hybrid cloud setting. Customers use a graphical interface to change the infrastructure as needed. The infrastructure can also be accessed programmatically through an API, so new servers can be brought online automatically when needed.
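Programmatic provisioning of this kind can be sketched as follows. The client class and method names below are hypothetical stand-ins for a real provider SDK (such as AWS boto3 or the GCP client libraries), which differ in detail; only the pattern is the point.

```python
# Sketch of API-driven IaaS provisioning. FakeIaaSClient is a
# hypothetical stand-in for a provider SDK, not a real API.

class FakeIaaSClient:
    """Simulates a cloud provider's server-provisioning API."""
    def __init__(self):
        self.servers = []

    def create_server(self, size):
        server_id = f"srv-{len(self.servers) + 1}"
        self.servers.append({"id": server_id, "size": size})
        return server_id

def scale_to_demand(client, current, needed, size="medium"):
    """Bring new servers online until capacity meets demand."""
    created = []
    while current + len(created) < needed:
        created.append(client.create_server(size))
    return created

client = FakeIaaSClient()
new_servers = scale_to_demand(client, current=2, needed=5)
print(new_servers)  # three new servers provisioned to cover the gap
```

An automation pipeline would call a function like `scale_to_demand` from a monitoring hook rather than waiting for an operator to click through a console.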

Enterprises use IaaS to do the following more efficiently:

  • Test and development – Test and development environments are fast and easy to set up with IaaS. This allows for bringing applications to market quicker.
  • Backup and recovery – IaaS simplifies storage management and recovery. It handles unpredictable demand and growing storage needs without the enterprise having to dedicate staff to manage them.
  • Big data analysis – IaaS provides the processing power to economically mine large data sets.

How Does Infrastructure as a Service Work?

IaaS is one of the three primary cloud service layers, alongside Platform as a Service (PaaS) and Software as a Service (SaaS). Customers use dashboards and APIs to directly access their servers and storage, and gain far greater scalability than with on-premises hardware.

IaaS users enjoy many advantages, such as access to the same infrastructure technology services as a traditional data center without having to invest as many resources. It is a flexible cloud computing model that allows automated deployment of servers, processing power, storage, and networking.

Does the VMware NSX Advanced Load Balancer Work With Infrastructure as a Service?

Yes. Many companies that use IaaS to host their applications can also use the VMware NSX Advanced Load Balancer to deliver the applications. The VMware NSX Advanced Load Balancer SaaS is the cloud-hosted option to deliver application services including distributed load balancing, web application firewall, global server load balancing (GSLB), network and application performance management across a multi-cloud environment. It helps ensure fast time-to-value, operational simplicity, and deployment flexibility in a highly secure manner.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

For more information see the following Infrastructure-as-a-Service resources:

Istio Service Mesh

<< Back to Technical Glossary

Istio Service Mesh Definition

Istio service mesh is an open source platform for networking microservices applications. It provides operational control of, and performance insights into, a network of containerized applications. Istio provides services such as load balancing, authentication, and monitoring.

Diagram depicts an Istio service mesh providing operational control and performance insights for a network of containerized microservices applications.



What is Istio Service Mesh?

Istio service mesh makes it easier to manage a distributed microservice architecture. Generally, a service mesh provides network services to microservice applications. Without a service mesh, each microservice must implement cross-cutting networking concerns such as retries, encryption, and monitoring on its own. The Istio modern service mesh provides security, observability, and traffic management for the microservices within a particular cluster.

Istio service mesh deploys the following:

• Control plane — Manages overall network infrastructure, providing fine-grained traffic management control, observability and metrics for enforcing policy decisions.

• Data plane — Controls all network communication between microservices using Envoy proxies deployed as sidecars. Envoy is an open source service proxy designed for cloud-native applications.
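The sidecar idea behind the data plane can be illustrated with a minimal sketch. The class below is not an Istio or Envoy API; it only shows the pattern: a proxy wraps each service call and adds cross-cutting behavior (here, request metrics) without touching the service code.

```python
# Illustrative sidecar pattern: the proxy intercepts calls to the
# service and records telemetry transparently. Names are made up
# for illustration; they are not Istio/Envoy APIs.

class Sidecar:
    def __init__(self, service, name):
        self.service = service
        self.name = name
        self.metrics = {"requests": 0, "errors": 0}

    def handle(self, request):
        self.metrics["requests"] += 1
        try:
            return self.service(request)
        except Exception:
            self.metrics["errors"] += 1
            raise

def reviews_service(request):
    """The application code, unaware of the proxy in front of it."""
    return {"status": 200, "body": f"reviews for {request}"}

proxy = Sidecar(reviews_service, "reviews-sidecar")
response = proxy.handle("product-42")
print(proxy.metrics)  # telemetry collected without changing the service
```

In Istio, Envoy plays the role of `Sidecar` for every pod, and the control plane aggregates the metrics each proxy collects.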

How Does Istio Service Mesh Work?

The Istio service mesh layers transparently onto the application, providing platform-independent communication between microservices. It makes requests quick, reliable, efficient, and secure by relying on Envoy proxies. A single control plane configures and monitors the underlying data plane.

Istio modern service mesh can add capabilities such as load balancing and authentication to a network of deployed services without requiring changes to service code.

The Istio service mesh control plane has historically included the following Istio components (newer Istio releases consolidate most of these functions into a single istiod binary):

• Pilot — Configures and programs the sidecar proxies.

• Mixer — Makes policy decisions and provides automatic metrics and logs for all traffic routed within a cluster.

• Ingress — Handles incoming requests from outside a cluster.

• CA — The certificate authority, which issues and rotates the certificates used for secure service-to-service communication.

The Istio service mesh control plane also handles the following:
• Automatic load balancing for HTTP and TCP traffic.
• Control of traffic behavior.
• Service-to-service communication in a cluster with secure authentication.
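Traffic behavior control includes weighted routing, the mechanism behind canary rollouts. Istio expresses this declaratively in a VirtualService resource; the sketch below shows the underlying idea as a plain weighted routing table, with illustrative service names.

```python
import random

# Hedged sketch of weighted traffic splitting (a 90/10 canary).
# Istio configures this declaratively; the selection logic below
# only illustrates what the proxy does with the weights.

routes = [("reviews-v1", 90), ("reviews-v2", 10)]

def pick_version(routes, rng=random.random):
    """Choose a destination version in proportion to its weight."""
    total = sum(weight for _, weight in routes)
    point = rng() * total
    for version, weight in routes:
        point -= weight
        if point < 0:
            return version
    return routes[-1][0]

# Over many requests, ~90% land on v1 and ~10% on the v2 canary.
sample = [pick_version(routes) for _ in range(1000)]
print(sample.count("reviews-v2"))
```

Shifting the weights (say, 50/50, then 0/100) promotes the canary gradually without redeploying either version.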

What Are the Advantages of an Istio Service Mesh?

• Traffic management — Controls the flow of traffic and application program interface (API) calls between services, making API calls more reliable. This includes circuit breaking, fault injection, traffic splitting, timeouts, and request mirroring for disaster recovery.

• Observability — Provides insights on performance. A dashboard offers visibility to quickly identify issues. This includes application mapping, app logging, and tracing.

• Policy enforcement — Ensures policies are enforced and allows policy changes without changing application code.

• Security — Secure service communications allow for consistent enforcement of policies across all protocols. These include authentication, authorization, rate limiting, and a distributed web application firewall for both ingress and egress.
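Of the traffic-management features above, circuit breaking is the least self-explanatory, so here is a minimal sketch of the pattern. Istio configures circuit breaking declaratively in a DestinationRule; the class below only illustrates the behavior itself and is not an Istio API.

```python
# Illustrative circuit breaker: after repeated upstream failures,
# stop sending traffic and fail fast instead of piling on load.

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise

breaker = CircuitBreaker(max_failures=3)

def flaky():
    raise ValueError("upstream unavailable")

for _ in range(3):
    try:
        breaker.call(flaky)
    except ValueError:
        pass

print(breaker.open)  # circuit is now open; further calls fail fast
```

Failing fast protects the rest of the mesh: callers get an immediate error they can handle, and the struggling service gets breathing room to recover.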

When to Use an Istio Service Mesh

Istio service mesh is useful when an organization adopts containerized applications on Kubernetes and microservices architectures. Istio makes microservice deployments easier to manage by providing a single solution for security, connectivity, and monitoring of microservices.

Istio service mesh also suits organizations that need to manage a distributed cluster and require flexibility in traffic management. Istio does not address every use case, however. Enterprises looking to provide secure connectivity across Kubernetes clusters and ingress services to Kubernetes clusters must look to solutions such as the VMware NSX Advanced Load Balancer that support those services.

Does the VMware NSX Advanced Load Balancer Offer an Istio Service Mesh?

The VMware NSX Advanced Load Balancer delivers multi-cloud application services such as load balancing, monitoring, and security for containerized applications with microservices architecture through dynamic service discovery, application maps, and micro-segmentation. VMware NSX Advanced Load Balancer’s Universal Service Mesh is optimized for North-South (inbound and outbound) and East-West (usually within the datacenter) traffic management, including local and global load balancing. VMware NSX Advanced Load Balancer integrates with OpenShift and Kubernetes for container orchestration and security, and is fully integrated with Istio to provide a universal service mesh.

For more information see the following Istio service mesh resources:

Intent-based Application Services

<< Back to Technical Glossary

Intent-based Application Services Definition

Intent-based application services focus on outcomes instead of inputs, delivering a pool of software services to applications to ensure a fast, scalable and secure application experience. Application services include load balancing, application performance monitoring, web application firewalls, application acceleration, autoscaling, micro‑segmentation, and service discovery needed to optimally deploy, deliver, run, and monitor applications.

Diagram depicting intent-based application services which focus on outcomes instead of inputs. Application services include load balancing, application performance monitoring, application acceleration, autoscaling, micro‑segmentation, service proxy and service discovery needed to optimally deploy, run and improve applications.

What Are Intent-based Application Services?

Intent-based application services refer to functions like load balancing, web application firewall and container ingress that focus on outcomes instead of inputs. They work in a declarative, or outcome-based mode.

Declarative, or outcome-based, systems are the opposite of imperative, or input-based, ones. For example, an intent-based load balancer automatically achieves the outcome of adapting to changes in traffic by auto-scaling up or down without manual intervention. An input-based load balancer requires manual reconfiguration whenever the application, infrastructure, or traffic changes.
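The distinction can be made concrete with a small sketch. In the declarative model, the operator states only the desired outcome and a reconciler computes the steps; in the imperative model, the operator would issue each add/remove step manually. The function and action names below are illustrative, not any product's API.

```python
# Declarative (intent-based) reconciliation sketch: the caller
# states the desired server count; the system derives the actions.

def reconcile(current_servers, desired_count):
    """Compare actual state to intent and return the needed actions."""
    actions = []
    if len(current_servers) < desired_count:
        actions += ["add_server"] * (desired_count - len(current_servers))
    elif len(current_servers) > desired_count:
        actions += ["remove_server"] * (len(current_servers) - desired_count)
    return actions  # empty list means state already matches intent

print(reconcile(["s1", "s2"], 4))  # → ['add_server', 'add_server']
```

Run in a loop against live state, a reconciler like this keeps the system converged on the stated intent as traffic and infrastructure change, which is exactly the manual work an imperative load balancer pushes back onto operators.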

What Are Intent-based Application Services Used For?

Intent-based application services include, but are not limited to, load balancing, security, analytics, web application firewall, and container ingress. A software load balancer is the most common application. To be intent-based (declarative) rather than input-driven (imperative), load balancers should be designed around the following principles:

• Multi-Cloud: An intent-based load balancer should be able to provide services across all environments, including bare metal, VMs, or containers, whether deployed in on-prem data centers or cloud environments (public clouds like AWS, Azure, and GCP, or private clouds like OpenStack).

• Intelligence: Visibility and analytics are critical for ensuring security and to drive automation. An intent-based load balancer uses artificial intelligence to provide insights into applications, infrastructure and traffic. The value of automation is best manifested when combined with intelligence.

• Automation: The load balancer must integrate seamlessly with other clouds, platforms, and tools. If you have to write custom automation scripts for each application or each infrastructure environment, you do not have an intent-based load balancer. Load balancers with a software-defined architecture are designed specifically to drive automation that is not possible with traditional appliances, and they provide turnkey automation in any environment.

What Are the Benefits of Using Intent-based Application Services?

The benefit of using intent-based application services, such as L4-L7 load balancers, is not having to manually configure and deploy pairs of load balancers for each application. They often help accelerate the transition to intent-based networking as part of an organization's digital transformation.

The first appliance load balancers were similar to early telephone switchboards: every function required human interaction. Virtual editions of these hardware appliances have not undergone the generational change required to adopt modern architectures, so they still require manual, error-prone input when infrastructure, applications, or traffic change.

Does VMware NSX Advanced Load Balancer Offer Intent-based Application Services?

The VMware NSX Advanced Load Balancer delivers intent-based application services by automating intelligence and elasticity across any cloud. Its declarative model allows you to focus on desired business outcomes by specifying intent, freeing you from the repetitive and error-prone manual inputs that characterize the imperative model of the past. The VMware NSX Advanced Load Balancer does the heavy lifting, delivering application services with built-in analytics and full automation so you can focus on innovation. The application services delivered include software load balancing, web application security, and an ingress gateway for container-based microservices applications.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

For more information on intent-based application services see the following resources:

IP Address Management (IPAM) Definition

IP address management (IPAM) is a means of planning, tracking, and managing the Internet Protocol address space used in a network. IPAM integrates DNS and DHCP so that each is aware of changes in the other (for instance DNS knowing of the IP address taken by a client via DHCP, and updating itself accordingly). Additional functionality, such as controlling reservations in DHCP as well as other data aggregation and reporting capability, is also common.

IPAM tools are increasingly important as new IPv6 networks are deployed with larger address pools, different subnetting techniques, and more complex 128-bit hexadecimal addresses, which are not as easily human-readable as IPv4 addresses. IPv6 networking, mobile computing, and multihoming all require more dynamic address management.
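The core bookkeeping an IPAM tool performs, tracking which addresses in a subnet are allocated, can be sketched with Python's standard-library `ipaddress` module, which handles both IPv4 and 128-bit IPv6 address space. This is a minimal illustration, not a substitute for a real IPAM system with DNS/DHCP integration.

```python
import ipaddress

# Minimal IPAM-style allocation over an IPv6 /64 using the
# standard-library ipaddress module. Real IPAM tools add DNS/DHCP
# integration, reservations, and reporting on top of this.

subnet = ipaddress.ip_network("2001:db8::/64")
allocated = set()

def allocate(subnet, allocated):
    """Hand out the next free host address in the subnet."""
    for host in subnet.hosts():
        if host not in allocated:
            allocated.add(host)
            return host
    raise RuntimeError("subnet exhausted")

first = allocate(subnet, allocated)
second = allocate(subnet, allocated)
print(first, second)  # sequential host addresses from the pool
```

The same code works unchanged for an IPv4 network such as `10.0.0.0/24`, which is one reason address-family-agnostic tooling matters as networks mix IPv4 and IPv6.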