Private Cloud Architecture


Private Cloud Architecture Definition

Private cloud architecture pools the resources in a data center into a single logical structure. Virtualized and containerized hardware and software components enable organizations to better utilize their private cloud infrastructure and increase efficiency.

Image portraying private cloud architecture, diagramming how a cloud provider's computational and memory resources work with a hypervisor and virtual machines.

Private Cloud Architecture FAQs

What is Private Cloud Architecture?

Private cloud architecture is a single-tenant environment dedicated to computing for one organization or entity. It is designed to offer benefits similar to those of public cloud services, such as scalability, flexibility, and self-service capabilities, but with added control and security.

In a private cloud architecture, all underlying servers, storage, and networking resources exclusively serve the organization. The organization or a third-party service provider owns and manages that underlying infrastructure.

This contrasts with public cloud services, where the infrastructure is shared among multiple users and is always managed by the cloud service provider.

How Does a Private Cloud Architecture Work?

Private cloud architecture typically relies on virtualization technologies such as hypervisors, which enable users to create virtual machines (VMs) or containers atop physical hardware. Administrators can manage and provision virtualized resources dynamically, accommodating rapid, on-demand scalability and resource allocation.
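
As a rough illustration of how pooled capacity is carved into virtual machines on demand, the minimal Python sketch below models a pool of hypervisor hosts and a self-service provisioning request. The class and method names are illustrative and do not correspond to any particular hypervisor's API.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        """A physical server contributing capacity to the pool."""
        name: str
        cpus: int
        ram_gb: int
        used_cpus: int = 0
        used_ram_gb: int = 0

        def can_fit(self, cpus: int, ram_gb: int) -> bool:
            return (self.cpus - self.used_cpus >= cpus
                    and self.ram_gb - self.used_ram_gb >= ram_gb)

    @dataclass
    class PrivateCloudPool:
        """All hosts pooled into one logical structure."""
        hosts: list = field(default_factory=list)

        def provision_vm(self, name: str, cpus: int, ram_gb: int) -> str:
            # Place the VM on the first host with enough free capacity.
            for host in self.hosts:
                if host.can_fit(cpus, ram_gb):
                    host.used_cpus += cpus
                    host.used_ram_gb += ram_gb
                    return f"VM '{name}' placed on {host.name}"
            return f"VM '{name}' rejected: pool capacity exhausted"

    pool = PrivateCloudPool([Host("host-01", 32, 256), Host("host-02", 32, 256)])
    print(pool.provision_vm("app-server-1", cpus=8, ram_gb=64))
    print(pool.provision_vm("db-server-1", cpus=16, ram_gb=128))

A real private cloud delegates this placement logic to the hypervisor management layer, but the principle of drawing from one logical pool is the same.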

Organizations can deploy private clouds on-premises within their own data centers or off-premises in a dedicated facility managed by a service provider. Organizations can also deploy private clouds using a hybrid cloud strategy—a combination of on-premises and off-premises resources.

Key characteristics of private cloud architecture include:

Single-tenancy. The cloud infrastructure is dedicated to a single organization, ensuring data isolation and enhanced security.

Control and customization. The organization has greater control over the infrastructure, allowing for tailored configurations, security policies, and compliance measures.

Resource pooling and self-service. Virtualized resources are pooled together, enabling self-service provisioning for users within the organization. This means users can request and deploy computing resources on-demand without requiring manual intervention from IT staff.

Scalability and elasticity. The private cloud scales resources up or down in response to demand, so organizations can allocate resources efficiently and cost-effectively.

Enhanced security and privacy. Private clouds provide a higher level of security since they are isolated from other organizations. This can be particularly important for industries with strict regulatory requirements or sensitive data.

Benefits of Private Cloud Network Architecture

Private cloud architecture offers several benefits for organizations. Here are some of the key advantages:

Enhanced control and security. Private clouds offer greater security compared to public clouds. Dedicated infrastructure minimizes the risk of data breaches and unauthorized access. It also allows organizations to implement custom security measures, firewalls, encryption, and access controls for sensitive data and regulatory requirements. A private cloud enables customized configurations, security policies, and network settings, which is especially valuable for industries with strict compliance needs or unique IT environments.

Improved performance. Since the resources of private clouds are exclusive to one organizational user, they deliver better performance and reduced latency. Dedicated infrastructure ensures consistent, predictable performance suitable for applications with high computational or network demands.

Customized, tailored solutions. Private cloud architecture enables organizations to design and deploy tailored, cost-efficient solutions. They can choose the hardware, storage, and networking components that best suit their workload needs and optimize performance.

Scalability and flexibility. Private clouds are elastic, allowing organizations to scale resources on demand. This flexibility enables efficient resource allocation, ensuring that computing power and storage are available as needed without overprovisioning or underutilization.

Compliance and data sovereignty. Industries such as healthcare, finance, and government have strict compliance and data sovereignty regulations. Private clouds enable organizations to maintain compliance with industry-specific regulations and ensure data sovereignty without relying on external cloud service providers.

Greater reliability. Organizations have dedicated resources and can implement redundancy measures, such as backup and disaster recovery solutions, tailored to their needs. This minimizes the risk of downtime and data loss for private clouds.

Cost optimization. While private clouds require upfront investment in infrastructure, they can offer long-term cost savings, especially for organizations with predictable workloads or a need for a high degree of compliance or control over infrastructure.

Use Cases for Private Cloud Environments

Here are some common use cases for private cloud environments:

Government agencies. Government agencies often deal with sensitive data and have strict compliance and data sovereignty requirements. Private cloud architectures enable them to maintain control over their infrastructure and data while meeting regulatory standards.

Healthcare organizations. Healthcare providers handle vast amounts of sensitive data such as patient information, and private clouds offer the security and control necessary to ensure compliance with healthcare regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).

Financial institutions. Banks, insurance companies, and financial organizations handle critical financial data and must meet stringent regulatory requirements; private clouds help them meet compliance standards while controlling their data and IT infrastructure.

Educational and research institutions. Academic institutions and research organizations demand large-scale computational resources, data storage capabilities, e-learning platforms, and security for sensitive student data to comply with regulations such as HIPAA and the Family Educational Rights and Privacy Act of 1974 (FERPA).

Manufacturing and industrial sector. Manufacturing companies often have unique IT needs, such as resource-intensive applications or large-scale IoT (Internet of Things) deployments. Private clouds offer dedicated infrastructure for managing production systems, controlling data flow, and optimizing manufacturing processes.

These are just a few examples of many possible applications for private cloud architectures.

Designing Private and Hybrid Cloud Architecture: Best Practices

Designing and building a private cloud architecture involves careful planning and consideration of various factors. Here are some best practices and private cloud architecture principles to consider:

Define clear objectives. Clearly define the specific organizational objectives and requirements, such as scalability, security, compliance, and performance. This should guide the design process and inform decisions.

Assess workload requirements. Evaluate workload requirements, including computing power, storage, and networking. Identify the characteristics of each workload, such as resource usage, performance needs, data sensitivity, and interdependencies. This assessment will help determine the infrastructure and resources required for a private or hybrid cloud architecture.

Plan for scalability and elasticity. Design the architecture to accommodate future growth and changing demands. Incorporate scalability and elasticity features to ensure that resources can be easily added or removed as needed. This flexibility will optimize resource utilization and cost-efficiency.

Consider data management. Data management is crucial for both private and hybrid clouds. Determine how the system and users will store, protect, and access data. Implement appropriate backup and disaster recovery mechanisms to ensure data availability and integrity. Consider data classification and encryption to meet security and compliance requirements.

Security and compliance. Emphasize security and compliance. Implement robust security measures such as access controls, encryption, firewalls, and intrusion detection systems. Consider industry-specific compliance requirements such as HIPAA, GDPR, or PCI DSS. Regularly update and patch infrastructure to address security vulnerabilities.

Automation and orchestration. Streamline provisioning, deployment, and resource management with automation and orchestration tools. Implement monitoring and management tools to track performance, usage, and security. This enables self-service, reduces manual effort, and ensures consistent resource configuration.

Network design and connectivity. Careful network design can ensure optimal connectivity and performance. Plan for redundancy, load balancing, and fault tolerance. Consider network segmentation to isolate sensitive workloads and implement appropriate network security controls (a small segmentation sketch follows this list).

Testing and evaluation. Test cloud architecture regularly to identify performance, security, or scalability issues. Conduct load testing, vulnerability assessments, and disaster recovery drills. Continuously evaluate architecture and optimize its performance to meet evolving requirements.

Collaboration and documentation. Document architecture design, configurations, and operational procedures to foster successful collaboration and facilitate knowledge transfer among different teams, including IT, security, operations, and compliance.

Plan for integration if needed. If designing a hybrid cloud architecture, plan for integration between private and public cloud environments. Ensure seamless connectivity, data transfer, and identity management, and implement hybrid cloud management tools.
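
One practice above, network segmentation, can be checked programmatically: model each segment as a CIDR block and confirm which segment a given host address falls into. The short sketch below uses Python's standard ipaddress module; the segment names and subnet ranges are illustrative assumptions, not a recommended IP plan.

    import ipaddress

    # Illustrative segments; real designs come from the organization's IP plan.
    SEGMENTS = {
        "dmz":        ipaddress.ip_network("10.10.0.0/24"),
        "app-tier":   ipaddress.ip_network("10.20.0.0/22"),
        "restricted": ipaddress.ip_network("10.30.0.0/24"),  # sensitive workloads only
    }

    def segment_for(address: str) -> str:
        """Return the segment a host address belongs to, or 'unassigned'."""
        ip = ipaddress.ip_address(address)
        for name, network in SEGMENTS.items():
            if ip in network:
                return name
        return "unassigned"

    # A sensitive database server should resolve to the restricted segment.
    print(segment_for("10.30.0.15"))   # restricted
    print(segment_for("10.20.3.7"))    # app-tier
    print(segment_for("192.168.1.5"))  # unassigned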

Types of Cloud Computing Architectures

Having discussed what private cloud computing architecture is and how it works, how does it compare to other types of cloud computing architectures?

Virtual Private Cloud Architecture

The virtual private cloud (VPC) model combines the benefits of both private and public clouds. A VPC provides a logically isolated and customizable virtual network within a public cloud infrastructure.

A standard private cloud architecture allows an organization to own and manage its own dedicated physical infrastructure, including servers, storage, and networking equipment. It offers complete control over the infrastructure and allows for customization and security measures tailored to the organization’s needs. A standard private cloud can be deployed on-premises or in a dedicated off-premises facility.

In contrast, a virtual private cloud (VPC) architecture is built within a public cloud provider’s infrastructure, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP). It still offers some of the characteristics of a private cloud, such as isolation, control, and security, but does not require users to manage physical infrastructure.
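
As a concrete illustration of how little infrastructure work a VPC requires, the sketch below provisions a small VPC and subnet on AWS with boto3. It assumes AWS credentials and a default region are already configured, and the CIDR blocks and name tag are placeholder values.

    import boto3

    # Assumes credentials and region are configured (environment or ~/.aws/config).
    ec2 = boto3.client("ec2")

    # Create a logically isolated network inside the public cloud.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "example-vpc"}])

    # Carve out a subnet for one application tier.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("VPC:", vpc_id, "subnet:", subnet["Subnet"]["SubnetId"])

A production VPC would also define route tables, gateways, and security groups, but even those remain virtual constructs managed through the provider's API rather than physical equipment.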

Here are some key differences and advantages of virtual private cloud architecture compared to standard private cloud architecture:

Ownership. In private cloud architecture, the organization owns and manages the physical infrastructure, while in a VPC architecture, a public cloud provider owns and manages the infrastructure, and the organization only has control over the virtual networking components within that infrastructure.

Elasticity and scalability. Because VPC architecture leverages the scalability and elasticity features of a public cloud provider, organizations can scale resources up or down based on demand. Dynamic scaling is therefore often more readily available than in a standard private cloud, which is limited by its own physical capacity.

Rapid provisioning. Virtual private cloud architectures enable rapid provisioning and configuration of resources such as virtual networks, subnets, and security groups by using underlying infrastructures that are already in place.

Global availability. Public cloud providers typically offer global availability and multiple data center regions, and with that, geographical redundancy, low latency, and improved performance around the world.

Integration with public cloud services. Virtual private cloud architectures can integrate with other public cloud services from the same cloud provider, enabling organizations to use AI/ML, analytics, databases, serverless computing, storage, and other services within the VPC environment.

Pay-as-you-go model. VPC architectures follow the public cloud pay-as-you-go pricing model, so organizations pay only for the resources they use, which can be more cost-effective than maintaining private cloud infrastructure.

Security and compliance. Virtual private cloud architectures provide strong security and isolation via access control mechanisms, virtual network segmentation, and security groups.

Managed Private Clouds

Managed private clouds are private cloud environments that are fully managed by a third-party service provider, who handles infrastructure setup, maintenance, security, and monitoring. This offers the benefits of a private cloud while offloading the burden of managing the underlying infrastructure to an expert provider.

Hosted Private Clouds

Hosted private clouds are private cloud environments where the physical infrastructure is owned and operated by a third-party service provider who builds and maintains infrastructure in their data centers and offers it as a service. This frees organizations from needing their own data center infrastructure, but allows them to keep control over their own virtual infrastructure.

Managed Private Clouds vs Hosted Private Clouds—What’s the Difference?

The key difference between hosted private clouds and managed private clouds is the level of responsibility and control. The organization with a hosted private cloud retains control over the virtual infrastructure, but leaves management of the underlying physical infrastructure to the service provider or host. In a managed private cloud, the service provider takes care of both the physical and virtual infrastructure, providing a fully managed cloud environment for the organization, which handles neither set of tasks.

Infrastructure as a Service (IaaS)

IaaS is a cloud computing model where organizations can provision and manage virtualized computing resources, such as virtual machines, storage, and networking, on-demand. It offers flexibility and scalability by providing infrastructure resources as a service. IaaS can be delivered in public, private, or hybrid cloud environments. In the context of private clouds, organizations can build their own IaaS environment within their private cloud architecture, enabling self-service provisioning and management of infrastructure resources.

Platform as a Service (PaaS)

PaaS is a cloud computing model where a provider offers a platform and development environment as a service. It allows organizations to develop applications and manage them without any need to support the underlying infrastructure. PaaS provides a pre-configured computing platform with frameworks, libraries, and other tools, freeing engineers to work on development and deployment. While PaaS can be built on top of private cloud infrastructure, it is not limited to private clouds and can be offered in public or hybrid cloud environments as well.

Infrastructure as a Service (IaaS) vs Platform as a Service (PaaS)—What’s the Difference?

IaaS provides the fundamental building blocks of computing infrastructure in the cloud. PaaS takes the abstraction level a step further by providing a complete platform for developing, deploying, and managing applications.

The difference between them is abstraction and application. IaaS provides virtualized infrastructure resources such as servers, storage, and networking that organizations manage, while PaaS goes a step further by providing a complete application development and deployment platform, abstracting away the underlying infrastructure complexity.

Multi-tenant Architecture in a Private Cloud

Multi-tenant architecture is an approach that shares a single instance of a software application or infrastructure between multiple tenants or organizations. Implementing multi-tenancy in the context of private cloud infrastructure allows multiple departments or business units within an organization to share resources while maintaining isolation and security, enabling efficient resource utilization and cost-sharing.
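
A common way to implement that isolation is to attach a tenant identifier to every resource and enforce it on every access, so departments share one deployment while their data stays separated. The sketch below is a minimal, illustrative model of the pattern rather than any specific product's API.

    class TenantStore:
        """One shared store; every record is scoped to the tenant that owns it."""

        def __init__(self):
            self._records = {}  # (tenant_id, key) -> value

        def put(self, tenant_id: str, key: str, value):
            self._records[(tenant_id, key)] = value

        def get(self, tenant_id: str, key: str):
            # A tenant can only read keys written under its own tenant_id.
            try:
                return self._records[(tenant_id, key)]
            except KeyError:
                raise PermissionError(f"{key!r} is not visible to tenant {tenant_id!r}")

    store = TenantStore()
    store.put("finance", "q3-forecast", {"revenue": 1.2e6})
    print(store.get("finance", "q3-forecast"))          # allowed
    try:
        store.get("engineering", "q3-forecast")         # different tenant
    except PermissionError as err:
        print("blocked:", err)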

Private Cloud vs Public Cloud vs Hybrid Cloud Architecture—What’s the Difference?

Private cloud architecture is typically hosted on-premises or in a dedicated off-premises data center and is dedicated to a single organization. It offers exclusive access and control over the infrastructure, allowing organizations to customize the environment according to their specific needs.

Compared with public cloud, private cloud infrastructures provide enhanced security, compliance, and customization options, making this approach suitable for organizations with strict data privacy requirements or specialized IT environments. They can be managed by the organization itself or outsourced to a managed service provider.

Public cloud architectures are cloud services provided by third-party vendors over the internet. Resources such as computing power, storage, and applications are shared among multiple organizations or tenants.

Public cloud providers own and manage the infrastructure, providing scalability, elasticity, and a wide range of services based on a pay-as-you-go model. Public clouds are cost-effective, require no upfront investment, and offer global availability, but they may have limited customization options and are subject to the provider’s security measures and compliance certifications.

Hybrid cloud architectures combine private and public clouds, allowing organizations to make use of resources across private and public cloud environments. Hybrid clouds provide flexibility, allowing organizations to optimize resource allocation, scale workloads, and take advantage of cost-effective public cloud services when needed.

For example, organizations may use the private cloud for sensitive workloads, data, and applications that require high security or compliance, while utilizing the public cloud for scalable, on-demand resources and services. However, managing integration, data transfer, and security between the two environments can be complex.

Examples of Private Cloud Architecture

There are several private cloud architecture solutions available on the market, offered by various vendors. Here are some examples of well-known private cloud solutions:

VMware Private Cloud Architecture. VMware vSphere is a virtualization platform that enables organizations to create and manage private cloud environments. It offers a suite of virtualization technologies for compute, storage, and networking, along with management tools for resource provisioning, monitoring, and automation.

Azure Private Cloud Architecture. Azure Stack is Microsoft’s hybrid cloud solution that allows organizations to extend Azure services to private data centers. It enables organizations to build and manage private clouds using Azure-consistent services and APIs, providing a consistent development and management experience across public and private cloud environments.

OpenStack Private Cloud Architecture. OpenStack is an open-source cloud computing platform that allows organizations to build and manage private clouds using a comprehensive set of services, including compute, storage, networking, and identity management.

Amazon Private Cloud Architecture. Amazon Web Services (AWS) offers the private cloud architecture solution Amazon Virtual Private Cloud (Amazon VPC) that enables organizations to create a logically isolated virtual network within the AWS cloud.

Does Avi Offer a Private Cloud Architecture Solution?

Avi’s application delivery controller (ADC) solution is designed to improve the performance, security, and availability of applications in a range of environments. Avi’s ADC provides load balancing, traffic management, SSL/TLS termination, application security, and other services, enhancing application delivery in on-premises data centers and across cloud environments. Learn more about Avi’s multi-cloud load balancing capabilities and more here.

Private Cloud Hosting


Private Cloud Hosting Definition

Private cloud hosting refers to a private cloud computing environment dedicated to a single organization or user. This is also referred to as hosted private cloud services, dedicated private cloud hosting, and isolated access. Private cloud hosting solutions typically live in data centers, either on-premises or hosted elsewhere, but they are always used only by the single organization to which they are dedicated.

Because hosted private cloud solutions, unlike public cloud offerings, provide resources such as servers, storage, and networking infrastructure that are used exclusively by one customer, they deliver a higher level of control, security, and privacy.

Image shows private cloud (single-tenant environment) vs. public cloud (where multiple customers share resources).

Private Cloud Hosting FAQs

What is Private Cloud Hosting?

Private cloud hosting provides a dedicated and isolated infrastructure for a single customer—a single-tenant environment. Unlike public cloud environments where multiple customers share resources, private cloud hosting ensures one organization uses all resources exclusively.

Private cloud hosting and the various phrases that refer to it—hosted private cloud services, dedicated private cloud hosting, isolated access—all essentially highlight the dedicated and isolated nature of the hosting environment. However, private cloud hosting remains flexible and can be deployed in different locations, including data centers operated by a service provider or on-premises within the customer’s own infrastructure.

Private cloud offers several advantages, such as enhanced security, increased control, and customization options. However, an organization that hosts its infrastructure in a private cloud may bear higher costs compared to its peers that use public cloud services, including the expenses of dedicated infrastructure and maintenance.

How Does Private Cloud Hosting Work?

Virtualization and the use of virtual machines (VMs) is foundational to private cloud infrastructure.

Virtualization technology. Private clouds are typically built on virtualization technology, such as hypervisors. Hypervisors create and manage virtual machines—isolated, independent instances of operating systems running on a physical server.

Resource allocation. Within a private cloud environment, the physical resources of the underlying infrastructure, such as CPU, memory, and storage, are divided into virtualized resources allocated based on organizational requirements. Each VM operates as if it were a separate physical server.

Isolation and multi-tenancy. VMs help isolate the private cloud hosting environment. Each VM is encapsulated and operates independently, ensuring that its allocated resources are isolated. This allows for multi-tenancy within a private cloud, enabling multiple VMs to coexist securely within the same infrastructure.

Scalability and elasticity. Virtual machines offer scalability and elasticity within a private cloud environment. As workload increases, organizations can provision additional VMs to meet demand. VMs can be easily created, scaled up or down, and retired as needed, allowing for efficient resource allocation and dynamic adjustment of computing capacity.

Management and orchestration. Private cloud management platforms provide tools and interfaces to manage and orchestrate virtual machines within private cloud infrastructures. These platforms enable administrators to deploy, monitor, migrate, and manage VMs across private cloud environments.

Flexibility and consolidation. Virtual machines decouple software from its underlying hardware, offering organizations the flexibility of running different operating systems and applications on a single physical server, consolidating workloads, and optimizing resource utilization. This also facilitates workload mobility, enabling VMs to be migrated or replicated across physical servers within the private cloud infrastructure.

Private cloud hosting systems also leverage the efficiency and flexibility of containerization technology to enable the deployment and management of applications in isolated and portable container environments. Private cloud hosting systems use containerization in similar ways to virtualization:

Resource optimization. Containers allow for efficient utilization of resources because they are lightweight and share the host operating system kernel. In a private cloud environment, containerization enables multiple containers to run on the same physical server, optimizing resource allocation and maximizing the use of computing capacity.

Horizontal scalability. Containers can easily scale horizontally, so users can add containers to handle increased demand. Private cloud hosting systems can adapt to changing workloads using orchestration platforms like Kubernetes to automatically scale containers based on predefined rules or metrics (a simplified scaling rule is sketched after this list).

Isolation and security. Containers isolate applications, ensuring that each container operates independently without interference. This isolation improves security by containing any potential security breaches within a single container, limiting its impact on other parts of the infrastructure.

Application portability. Containers are highly portable, allowing applications to run consistently across different private cloud environments or even hybrid cloud setups. This enables easier migration of applications between different private cloud instances or between private and public clouds, providing flexibility and avoiding vendor lock-in.

DevOps and continuous integration/continuous deployment (CI/CD). Containerization aligns well with DevOps best practices and CI/CD workflows. Private cloud hosting systems can integrate containerization with CI/CD tools and processes, facilitating the rapid development, testing, and deployment of applications within the private cloud environment.

Container orchestration. Private cloud hosting systems can use container orchestration platforms like Kubernetes to manage and automate container deployment, scaling, and monitoring. These platforms provide service discovery, load balancing, automated rollout/rollbacks, and self-healing capabilities, making it easier to manage containerized applications in the private cloud.
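
The horizontal scalability described above can be reduced to a simple rule: derive the desired number of container replicas from a measured utilization and a target utilization, which is similar in spirit to how Kubernetes' Horizontal Pod Autoscaler behaves. The thresholds and metric values below are illustrative.

    import math

    def desired_replicas(current_replicas: int, current_cpu: float,
                         target_cpu: float = 0.6, min_r: int = 2, max_r: int = 20) -> int:
        """Scale replicas so average CPU utilization approaches the target."""
        if current_cpu <= 0:
            return min_r
        raw = current_replicas * (current_cpu / target_cpu)
        return max(min_r, min(max_r, math.ceil(raw)))

    # Load rises: 90% utilization across 4 replicas -> scale out to 6.
    print(desired_replicas(current_replicas=4, current_cpu=0.9))
    # Load falls: 20% utilization across 6 replicas -> scale in to 2.
    print(desired_replicas(current_replicas=6, current_cpu=0.2))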

What Are the Types of Private Cloud?

On-premises private cloud. An on-premises or internal private cloud is built and operated within the organization’s own data center or infrastructure. The organization completely controls its hardware, software, and networking. This type of private cloud solution offers the highest level of control and customization but requires an upfront investment and ongoing maintenance.

Bare metal servers. Private cloud hosting can exist within a bare metal environment. The private cloud infrastructure is built directly on physical private cloud servers, sometimes called bare metal servers or dedicated servers, without any virtualization layer.

Traditionally, as discussed above, private clouds are associated with virtualization technologies and virtual machines (VMs). However, the emergence of bare metal cloud solutions makes a private cloud environment that runs directly on dedicated hardware possible, bypassing the need for virtualization.

Bare metal private cloud hosting offers several benefits, including enhanced performance, improved security, and greater control over the infrastructure. It can be particularly suitable for workloads that require direct access to the hardware or have specific performance requirements that virtualization may not fully meet.

What is a managed private cloud?

Managed private cloud hosting is private cloud infrastructure that is fully or partially managed by a third-party service provider. In a managed private cloud, sometimes called a hosted private cloud, the service provider takes responsibility for the setup, configuration, management, maintenance, upgrades, support, monitoring, compliance, security, backup, and disaster recovery for the underlying infrastructure and associated services. This includes managing the physical servers, storage, networking components, and virtualization technologies.

Hosted private cloud solutions can assist in scaling resources up or down based on changing demands. These services are often governed by service-level agreements (SLAs) that define the level of service, performance guarantees, uptime commitments, response times, and support availability. 

Managed private cloud hosting allows organizations to offload day-to-day management and maintenance tasks so their IT teams can focus on value-added core business activities. Managed private cloud services offer a middle ground: provider expertise, operational efficiency, and improved reliability, combined with the benefits of a private cloud environment, such as control, security, and customization.

What is virtual private cloud hosting?

Private cloud hosting and virtual private cloud (VPC) hosting are related ideas, but they involve different types of underlying infrastructure and levels of user control. Who manages the cloud environment, in particular, is a central difference between the two.

Private cloud hosting is a single-tenant approach that offers networking, computing, and storage resources to a provisioned application or organization. Virtual private cloud hosting is a multi-tenant model that offers an isolated environment inside a public cloud.

The basics of virtual private cloud hosting are as follows:

  • Infrastructure. Virtual private cloud hosting refers to a logically isolated section of public cloud infrastructure, hosted in the cloud provider's data centers. It is a subset of the provider's overall public cloud environment and is often used as part of a hybrid cloud strategy.
  • Isolation. A VPC provides logical, if not physical, isolation within the public cloud infrastructure. The resources allocated to a specific virtual private cloud are separate from other VPCs in the same environment.
  • Control. The organization controls the configuration and management of their virtual private cloud hosting environment, including network settings, security policies, and resource allocation.
  • Scalability. VPC hosting offers similar scalability to public cloud services, so users can leverage the provider’s infrastructure to scale resources up or down as needed.
  • Cost. Virtual private cloud hosting typically follows a pay-as-you-go model, offering cost savings for some users.

 

What is the Difference Between Public and Private Cloud Hosting?

The main difference between public and private cloud hosting rests in who owns and controls the underlying infrastructure and who can access and use the cloud resources:

Public Cloud Services

  • Infrastructure. Public cloud infrastructure is owned and managed by a cloud service provider such as Amazon Web Services or Google Cloud.
  • Shared resources. Multiple customers or organizations share one pool of computing resources, including servers, storage, and networking infrastructure.
  • Accessibility. Cloud services are accessible to the public over the internet. Customers can provision and utilize resources on a pay-as-you-go basis.
  • Scalability. Public clouds can scale resources up or down based on demand.
  • Cost. Users pay for the resources they consume.

 

Services such as Amazon EC2 and Google Compute Engine are public cloud offerings.

Private Cloud Hosting Providers

  • Infrastructure. Cloud infrastructure is dedicated to a single organization and can be owned by the organization (on-premises) or a third-party provider (hosted private cloud).
  • Isolated resources. Resources such as servers, storage, and networking are exclusively used by one organization, ensuring better control and security.
  • Accessibility. Private clouds can be accessed over the internet or through a private network connection, providing a higher level of privacy and control.
  • Customization. Organizations have greater flexibility to customize the cloud environment according to their specific needs and requirements.
  • Cost. Private cloud hosting typically involves higher costs, as the organization bears the expenses of dedicated infrastructure and maintenance for a self-hosted private cloud.

VMware, Oracle, and Cisco are all examples of private cloud hosting companies.

What Are the Benefits of Private Cloud Hosting?

Private cloud hosting offers several benefits, particularly for enterprises:

Enhanced security. Private cloud hosting offers more robust security compared to public cloud services. Dedicated resources offer greater organizational control over infrastructure and allow the implementation of robust security measures tailored to meet specific needs. This is particularly beneficial for complying with strict regulations and handling sensitive data.

Increased customization. Complete control over the cloud environment allows enterprises to customize infrastructure, networking configurations, and security policies according to their specific requirements. This level of control enables organizations to optimize the cloud environment to meet their unique business needs and maximize operational efficiency.

High performance and reliability. Isolated resources ensure more consistently high performance and minimal latency. Private cloud hosting offers backup solutions, redundant infrastructure, and disaster recovery for enhanced reliability.

Compliance. A private cloud host more effectively addresses compliance requirements. Organizations with specific industry, contractual, or privacy obligations for data stored on-site and online can keep that data within their private cloud infrastructure while complying with the relevant policies and regulations.

Scalability and flexibility. Private and public cloud environments can offer comparable scalability and flexibility. Private cloud hosting solutions can be tailored to match the growth trajectory and changing requirements of organizations. Enterprises can scale resources up or down based on their needs to accommodate increased workloads or fluctuating demands.

Enhanced privacy. Private cloud hosting offers more robust privacy, as organizations have isolated space for their resources, reducing the risk of data breaches or unauthorized access. This can be critical for businesses that handle sensitive information or intellectual property.

Cost optimization. Private cloud hosting typically involves higher upfront costs and ongoing maintenance expenses compared to public cloud services, yet it may still offer cost optimization benefits in the long run, especially for enterprises. By leveraging the flexibility and control private cloud hosting delivers, enterprises can optimize resource utilization, streamline operations, and potentially reduce overall costs.

Does Avi Offer Private Cloud Hosting?

Avi Networks hosts infrastructure as a service (IaaS) on both public and private cloud networks rather than traditional data centers. VMware is a global leader in cloud infrastructure, and VMware private cloud hosting services empower organizations to pool all their servers into one resource, distributed among VMs across the enterprise to run application workloads.

There are three major types of VMware private cloud hosting supported: VPC, hosted private cloud, and managed private cloud. The best private cloud hosting option for any one user depends heavily on the specific compliance requirements and goals of that organization.

Learn more about how to host a private cloud with Avi and VMware here.

Packet Switching


Packet Switching Definition

Packet Switching transmits data across digital networks by breaking it down into blocks or packets for more efficient transfer using various network devices. Each time one device sends a file to another, it breaks the file down into packets so that it can determine the most efficient route for sending the data across the network at that time. The network devices can then route the packets to the destination where the receiving device reassembles them for use.

Diagram depicts a general networking architecture using packet switching to transmit data across digital networks.
FAQs

What is Packet Switching?

Packet switching is the transfer of small pieces of data across various networks. These data chunks or “packets” allow for faster, more efficient data transfer.

Often, when a user sends a file across a network, it gets transferred in smaller data packets, not in one piece. For example, a 3MB file will be divided into packets, each with a packet header that includes the origin IP address, the destination IP address, the number of packets in the entire data file, and the sequence number.
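
To make those header fields concrete, the sketch below splits a payload into fixed-size packets and attaches the origin address, destination address, total packet count, and sequence number to each one. The field names and packet size are illustrative choices, not a real protocol format.

    def packetize(payload: bytes, src: str, dst: str, size: int = 1024):
        """Split a payload into packets that each carry routing and reassembly info."""
        chunks = [payload[i:i + size] for i in range(0, len(payload), size)]
        return [
            {
                "src": src,            # origin IP address
                "dst": dst,            # destination IP address
                "total": len(chunks),  # number of packets in the entire file
                "seq": seq,            # sequence number for reassembly
                "data": chunk,
            }
            for seq, chunk in enumerate(chunks)
        ]

    packets = packetize(b"x" * 3_000_000, src="192.0.2.10", dst="198.51.100.7")
    print(len(packets), "packets; receiver expects", packets[0]["total"])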

Types of Packet Switching

There are two major types of packet switching:

Connectionless Packet Switching. This classic type of packet switching includes multiple packets, each individually routed. This means each packet contains complete routing information—but it also means different paths of transmission and out-of-order delivery are possible, depending on the fluctuating loads on the network’s nodes (adapters, switches and routers) at the moment. This kind of packet switching is sometimes called datagram switching.

Each packet in connectionless packet switching includes the following information in its header section:

  • Source address
  • Destination address
  • Total number of packets
  • Sequence number (Seq#) for reassembly

Once the packets reach their destination via various routes, the receiving devices rearrange them to form the original message.
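
Reassembly needs only the sequence numbers: sort whatever packets arrived, check that none are missing, and concatenate their data. The sketch below uses the same illustrative packet fields as the example earlier in this entry.

    import random

    def reassemble(received):
        """Rebuild the original payload from packets that may arrive out of order."""
        ordered = sorted(received, key=lambda p: p["seq"])
        if len(ordered) != ordered[0]["total"]:
            raise ValueError("packets missing; request retransmission")
        return b"".join(p["data"] for p in ordered)

    # Three packets arriving in a random order still yield the original message.
    arrived = [
        {"seq": 0, "total": 3, "data": b"the mes"},
        {"seq": 1, "total": 3, "data": b"sa"},
        {"seq": 2, "total": 3, "data": b"ge."},
    ]
    random.shuffle(arrived)
    print(reassemble(arrived))  # b'the message.'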

Connection-Oriented Packet Switching. In connection-oriented packet switching, also called virtual circuit switching, data packets are first assembled and then numbered. They then travel sequentially across a predefined route. Full address information is not needed in every packet, because all packets follow the same established path in order.

What is Packet Loss?

Occasionally, packets bounce from router to router many times before reaching their destination IP address. Packets that bounce around the network too many times may get lost, and enough of them can congest the network and degrade performance.

The hop count addresses this problem by setting a maximum number of hops per packet. "Bouncing" refers to a router's inability to deliver a packet to its final destination IP address, so it forwards the packet to another router instead. If a packet reaches its maximum hop count, the maximum number of hops it is permitted before reaching its destination, the router handling it deletes it. This causes packet loss.
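
The hop-count rule can be illustrated in a few lines: each router decrements a per-packet hop budget, and the router that sees it reach zero discards the packet instead of forwarding it. The values below are illustrative.

    def forward(packet: dict) -> bool:
        """Simulate one router hop; return False if the packet is dropped."""
        packet["hops_left"] -= 1
        if packet["hops_left"] <= 0:
            print(f"packet {packet['seq']} dropped: hop limit exceeded")
            return False
        return True

    packet = {"seq": 7, "hops_left": 3}   # hop budget of 3 for this packet
    hop = 0
    while forward(packet):
        hop += 1
        print(f"packet {packet['seq']} forwarded (hop {hop})")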

Circuit Switching vs Packet Switching

Packet switching and circuit switching are the primary models for facilitating enterprise network connections. Each mode has its place, depending on the facts and user needs.

Circuit switching is most often used for voice and video calling systems—communications systems that require that users establish a dedicated circuit or channel before they can connect. A circuit-switched channel remains reserved for the entire session, even though it carries traffic only while the users are actually communicating.

Circuit switching connections might allocate one or two channels for communications. Those with one channel are called half duplex. Those with two channels are full duplex.

Circuit switching is different from packet switching because it creates a physical path between the destination and source. There is no physical path in packet switching, which instead sends packets over a variety of routes.

Advantages of Packet Switching over Circuit Switching

Advantages of Packet Switching over Circuit Switching:

Efficiency. Packet switching does not reserve a circuit when it is not in use, so no bandwidth is wasted on idle connections. A constantly reserved circuit wastes capacity, so network efficiency tends to increase with the use of packet switching.

Speed. Packets travel over the best routes available at the moment, keeping transmission speed high and latency low.

Improved fault tolerance. During partial outages or other network problem times, packets can be rerouted and follow different paths. Using a circuit switching network, a single outage can down the designated pathway for the communications.

Budget. Comparatively cost-effective and simple to implement. Packet switching typically also bills based only on duration of connectivity, whereas circuit switching bills on both duration of connection and distance.

Digital. Packet switching works well for data communication, transmitting digital data directly to its destination. Data transmissions are generally high quality in a packet switched network because such a network employs error detection and checks data distribution with the goal of error free transmissions.

Disadvantages of Packet Switching over Circuit Switching:

Reliability. The packet switching process is reliable in that the destination can identify any missing packets. However, circuit switched networks deliver packets in order along the same route and are therefore less likely to experience missing packets in the first place.

Complexity. Packet switching protocols are complex, so switching nodes demand more processing power and a large amount of RAM.

File size. Packet switching is more useful for small messages, while circuit switching is best for larger transmissions. This is due to multiple rerouting delays, the risk of multiple lost packets, and other issues.

Cell Switching vs Packet Switching

Cell switching, or cell relay, uses virtual circuits and shares features with circuit switching. The primary difference is that in packet switching the packets are of variable length, while in cell switching the cells are a fixed length of 53 bytes, including a 5-byte header.
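
A quick worked example of the fixed cell size: a 53-byte cell with a 5-byte header leaves 48 bytes of payload, so the number of cells a message needs is its size divided by 48, rounded up. The message size below is illustrative.

    import math

    CELL_BYTES, HEADER_BYTES = 53, 5
    PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES  # 48 data bytes per cell

    def cells_needed(message_bytes: int) -> int:
        return math.ceil(message_bytes / PAYLOAD_BYTES)

    # A 10 KB message needs 214 cells (10240 / 48, rounded up),
    # which is 214 * 53 = 11,342 bytes on the wire including headers.
    n = cells_needed(10 * 1024)
    print(n, "cells,", n * CELL_BYTES, "bytes transmitted")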

Advantages of cell switching include dynamic bandwidth, high performance, scalability, multimedia support, and the ability to use a common LAN/WAN architecture. Cell switching achieves high performance using hardware switches. There is no need to reserve network resources for a connection, since the technology uses virtual rather than physical circuits. And after establishing a virtual circuit, higher network throughput is possible thanks to minimized switching time.

What is a Packet Switched Network?

A packet switched network follows networking protocols that divide messages into packets before sending them. Packet-switching technologies are part of the basis for most modern Wide Area Network (WAN) protocols, including Frame Relay, X.25, and TCP/IP.

Compare this to standard telephone landline service, which is based on circuit switching technology. Circuit switching networks are ideal for most real-time data transmission, while packet switching networks are both effective and more efficient for data that can tolerate some transmission delay, such as website data and email messages.

For more on the actual implementation of load balancing, security applications and web application firewalls, check out our Application Delivery How-To Videos.

PCI DSS


PCI DSS Definition

PCI DSS stands for Payment Card Industry Data Security Standard. This global information security standard is designed to enhance control over credit card data to prevent fraud.
All businesses, regardless of size, must follow PCI DSS requirements if they accept credit card payments from the five major brands: American Express, Discover, MasterCard, Visa, and JCB (Japan Credit Bureau). PCI DSS compliance is required for any organization that processes, stores, or transmits cardholder and payment data.

Diagram depicts the main pillars of PCI DSS, the payment card industry data security standard designed to enhance control over credit card data to prevent fraud.
FAQs

What is PCI DSS?

In practice, PCI DSS basics come down to keeping consumer data safe online. Despite the sophistication of modern malware, data breaches, and cyberattacks, small businesses are held to the same basic scope of PCI DSS requirements as larger organizations.

The PCI DSS regulations are a group of operational and technical requirements designed to protect cardholder data. They are effectively the broader rules surrounding payment processing. The overall goal of PCI is to ensure that anyone processing, accepting, storing, or transmitting credit card data maintains a secure environment.

What Are the PCI DSS 12 Requirements?

The latest version of the requirements is PCI DSS 3.2. This version replaced version 3.1 in October 2016.
The PCI DSS compliance checklist includes 12 requirements for card industry security standards. Those 12 PCI DSS standards are spread across 6 compliance groups for PCI DSS, or 6 goals. The 12 PCI DSS compliance requirements and 6 PCI DSS compliance goals themselves are:

Goal 1: Build and maintain a secure network and systems.
Requirement 1: Protect cardholder data by installing and maintaining a firewall configuration.
Requirement 2: Configure settings and passwords rather than defaulting to vendor-supplied security parameters such as system passwords.

Goal 2: Protect cardholder data.
Requirement 3: Protect stored cardholder data.
Requirement 4: Encrypt transmission of cardholder data across open, public networks.

Goal 3: Maintain a vulnerability management program.
Requirement 5: Use anti-virus or anti-malware software and keep it regularly updated.
Requirement 6: Develop and maintain applications and systems that are secure by routinely updating and patching them.

Goal 4: Implement strong access control measures.
Requirement 7: Limit access to cardholder data based on “need to know” business justification.
Requirement 8: Authenticate each person with access so they have a unique identity in the system.
Requirement 9: Restrict physical access to cardholder data and other sensitive data in the workplace.

Goal 5: Regularly monitor and test networks.
Requirement 10: Monitor and track all access to cardholder data and network resources by implementing log management.
Requirement 11: Regularly conduct penetration tests and vulnerability scans to test security processes and systems.

Goal 6: Maintain an information security policy.
Requirement 12: This policy should address information security for all personnel and include risk assessments and documentation.

What Do PCI DSS Levels Mean?

PCI DSS applies to every company or organization that stores, transmits, or accepts cardholder data, regardless of the number of transactions or the size of the business. This means that if just one donor or customer uses a credit or debit card, PCI DSS security requirements apply.

PCI DSS definitions include four PCI DSS compliance levels for validation of businesses. These levels are based on total transaction volume across a period of 12 months.

PCI DSS Level 1 businesses process more than 6 million transactions a year. Level 2 organizations process fewer transactions annually—between 1 million and 6 million. Level 3 companies process 20,000 to 1 million transactions, and Level 4 businesses process fewer than 20,000 transactions.
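
Those volume bands reduce to a simple lookup, sketched below. This is only an illustration of the thresholds described above; actual level assignment varies by card brand and can be raised, for example, after a breach.

    def pci_dss_level(annual_transactions: int) -> int:
        """Map annual card transaction volume to a PCI DSS merchant level."""
        if annual_transactions > 6_000_000:
            return 1
        if annual_transactions >= 1_000_000:
            return 2
        if annual_transactions >= 20_000:
            return 3
        return 4

    print(pci_dss_level(8_000_000))  # Level 1
    print(pci_dss_level(50_000))     # Level 3
    print(pci_dss_level(5_000))      # Level 4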

What is PCI DSS Outsourcing?

PCI DSS outsourcing refers to the practice of delegating some PCI DSS compliance tasks to a third party. For example, some small businesses might choose to use a PCI-compliant third-party service provider such as PayPal.

The goal is to limit their exposure, or the scope of the compliance rules that apply to them. In the PayPal example, a small business might avoid accessing and storing the credit card data of shoppers who use PayPal on its commerce website.

However, use of a third-party provider or other outsourcer does not eliminate PCI DSS requirements. In fact, in January 2019, the PCI Security Standards Council (PCI SSC) updated its recommendations to include several new principles. One of them, "Monitoring Compliance of Third-Party Service Providers," directs businesses to track whether their providers remain compliant.

Why PCI DSS is Important

It is critical for businesses to remain PCI compliant because data theft and data breaches are extremely common. These issues impact all payment parties negatively in myriad ways. To protect both consumers and your own businesses from damages resulting from a data breach, ongoing compliance—not just spot checks or point-in-time validation—is essential.

Consumers expect PCI DSS compliant applications from businesses because they keep sensitive information safer. Furthermore, to access the services of any major credit card company, it is essential to follow PCI DSS best practices.

What is a PCI DSS Audit?

There is a difference between a PCI assessment and a PCI DSS audit. Merchants can perform PCI DSS assessments themselves, to simply take stock of their current practices. These are typically voluntary, and conducted in-house to bring a business up to compliance.

On the other hand, only a qualified security assessor (QSA) can perform a PCI DSS audit. The PCI Security Standards Council provides names of qualified security assessors to the public.

Many larger Level 1 businesses will conduct a PCI DSS audit voluntarily. However, sometimes credit card companies mandate these audits for organizations of any size that have experienced data breaches. When they do, the audit is mandatory for any businesses wanting to continue to use credit card services.

To achieve PCI DSS certification after an audit, the QSA must collect all of the evidence and then explain in a report how the business complies with all PCI DSS requirements. Typically the report includes a discussion of security configurations and processes. Recertification assessments usually happen yearly.

How Does Avi Help Your Business Achieve PCI DSS Compliance?

Avi Networks enables you to secure your business's web applications to achieve compliance—across a host of regulatory landscapes. The 2018 and 2019 Verizon Data Breach Investigations Reports found that web application attacks rank #1 and security breaches are on the rise. Achieve more with a comprehensive range of security features, including web app security:

  • Compliance with PCI, HIPAA, and GDPR
  • Detailed, point-and-click audit trails and logs of application accesses and traffic flows
  • Enable more precise security policies with app insights on rule matches and traffic flows
  • Automatic scale-out, highly performant architecture enables elastic scale
  • Gain central control over security policies with point-and-click simplicity

 

For more on the actual implementation of load balancing, security applications and web application firewalls, check out our Application Delivery How-To Videos.

Platform as a Service


Platform as a Service Definition

Platform as a Service (PaaS) refers to a cloud computing model that provides a platform for customers to develop, run, and manage applications without building and maintaining the cloud infrastructure required to develop and launch an app.

PaaS can be delivered in three different formats. The first is as a cloud service from a provider. In this configuration, the customer controls software deployment with minimal configuration options. The Platform as a Service provider supplies the networking, servers, storage, operating system (OS), middleware (e.g. Java runtime, .NET runtime, integration, etc.), database, and other services to host the consumer's application. The second PaaS configuration runs as a private service (software or appliance) behind a firewall. The third runs as software deployed on public infrastructure as a service, such as AWS.

Platform as a Service benefits include more efficient application development. Platform as a Service solutions permit the customer to focus on the application itself. With PaaS, the customer manages applications and data. The provider (in public PaaS) or IT department (in private PaaS) manages runtime, middleware, operating system, virtualization, servers, storage and networking. Development tools offered by the provider are customized according to the needs of the customer. The user can choose to maintain the software, or have the provider maintain it.

Diagram depicts the general structure of a platform as a service (PaaS) in regards to end users and applications, service provider (platform) and the hosting thereof.
FAQs

What is a Platform as a Service (PaaS)?

Platform as a Service (PaaS) refers to a cloud computing configuration that helps enterprises operate with an efficient cloud-based strategy. PaaS provides a platform for customers to develop, run, and manage applications without building and maintaining the cloud infrastructure required to develop and launch applications. PaaS permits more efficient application development since the organization can focus on the application itself.

Users of PaaS may also choose to subscribe to iPaaS, integration platform as a service. This model is a set of automated tools that link applications deployed in different environments. Large business-to-business (B2B) enterprises looking to integrate on-premises applications and data with cloud applications and data, in environments such as hybrid cloud, often use iPaaS to meet this need. Companies that provide iPaaS include MuleSoft, SnapLogic, Dell Boomi, and Informatica. Microsoft and Oracle offer iPaaS as well.

How Does Platform as a Service Work?

Platform-as-a-service (PaaS) is a type of cloud computing model in which a service provider delivers a platform to customers. The platform enables the organization to develop, run, and manage business applications without the need to build and maintain the infrastructure such software development processes require.

PaaS is offered via a service provider’s hosted cloud infrastructure. Users typically access PaaS offerings via a web browser. Customers pay for PaaS on a per-use basis. Some providers will charge a flat monthly fee for access to the platform and applications hosted on the platform.

PaaS can be delivered through public, private, or hybrid clouds. With a public cloud PaaS, the customer controls software deployment while the cloud provider delivers all the major IT components needed for running applications. These components can include servers, storage systems, networks, operating systems, and databases. With a private cloud offering, PaaS is delivered as software or an appliance behind a customer’s firewall, typically in its on-premises data center. Hybrid cloud PaaS offers a mix of the two types of cloud service.

Rather than replace an organization’s entire IT infrastructure for software development, PaaS provides key services such as application hosting or Java development. Some PaaS offerings include application design, development, testing, and deployment. PaaS services can also include web service integration, development team collaboration, database integration, and information security. PaaS includes multiple underlying cloud infrastructure components, including servers, networking equipment, operating systems, storage services, middleware, and databases.

All of these technology offerings are owned, operated, configured, and maintained by the service providers. Customers can avoid having to lay out investments in these foundational IT components that they might not be able to use to the fullest extent possible. PaaS also includes widely used resources such as development tools, programming languages, libraries, containers, database management systems, and other tools.

What are the Benefits of a Platform as a Service?

Platform as a Service hardware and software provide benefits like streamlining development tools, reducing infrastructure cost, working on multiple operating systems, and supporting various programming languages.

Platform as a Service benefits outlined by research firm Gartner include:

API development and management – Companies can use PaaS solutions to manage application programming interfaces as well as microservices. This includes security, development, creating new APIs, and end-to-end API management.

Business analytics/intelligence – Some PaaS solutions include tools which empower enterprises to analyze their data for business insights and patterns of behavior. These tools give the organization the required information to make better decisions and more accurately predict things like market demand for products.

Business process management (BPM) – Organizations can access a BPM platform delivered as a service through PaaS solutions. BPM suites integrate IT components needed for process management, including data, business rules, and service-level agreements.

Communications – PaaS can deliver communications platforms. This allows developers to add communication features to applications, such as voice, video, and messaging.

Databases – A PaaS provider can deliver database services to an organization, such as setup and maintenance. Database PaaS is an on-demand, secure, and scalable self-service database model. According to analyst firm Forrester Research, both provisioning and administration of databases can be automated.

Master data management (MDM) – Master data management software tracks the most essential company-wide data points, providing a single point of reference for data. From this point of reference, the software provides insights related to company operations, clients, and goals. Such data might include reference data, such as information about customer transactions, and analytical data to support decision making. Users can then apply that data as they see fit, keep records of data history, and make projections based on findings. Operations teams, working together with IT teams, can identify the essential metrics across the entire business, pinpoint areas of concern, gauge the success of individual departments, increase productivity, and maximize ROI.

What are some Platform as a Service examples?

Platform as a Service (PaaS), as the name suggests, provides a computing platform that typically includes an operating system, a programming language execution environment, a database, and a web server. PaaS is popular among developers because they can concentrate on developing their apps and leave management and execution to the service provider. Many cloud service providers also offer the flexibility to increase or decrease CPU power depending on traffic load, giving developers cost-effective, low-effort management.

Examples of PaaS components include runtimes such as the Java runtime, databases such as MySQL or Oracle, and web servers such as Tomcat. A full PaaS example is Google App Engine, on which one can develop applications and let them execute on Google’s platform. Other commonly cited examples include AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, and Apache Stratos.
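To make the division of responsibility concrete, the sketch below shows the kind of minimal application code a developer hands to a PaaS. It is an illustrative example only, using nothing but the Python standard library; it is not tied to any specific provider, and the port and handler names are arbitrary. The platform, not the developer, supplies the operating system, web server, runtime, and scaling around code like this.

# A minimal, hypothetical sketch of application code a PaaS might host.
# Everything below the application itself (OS, web server, runtime, scaling)
# is the platform's responsibility, not the developer's.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A trivial WSGI application: respond to every request with plain text.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from an application the platform runs for you\n"]

if __name__ == "__main__":
    # Locally, a development server runs the app; on a PaaS, the provider's
    # own web server and process manager load the `app` callable instead.
    make_server("", 8000, app).serve_forever()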

Does Avi Networks, now Part of VMware, offer Platform as a Service?

Yes. Avi (now rebranded to VMware NSX Advanced Load Balancer) is purpose-built for the cloud and mobile era using a unique analytics-driven, 100% software approach. It is the first platform to leverage the power of software-defined principles to achieve unprecedented agility, insights, and efficiency in running applications. Users can access services including distributed load balancing, web application firewall, global server load balancing (GSLB), network and application performance management across a multi-cloud environment. It helps ensure fast time-to-value, operational simplicity, and deployment flexibility in a highly secure manner.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Perfect Forward Secrecy (PFS)

<< Back to Technical Glossary

Perfect Forward Secrecy Definition

Perfect Forward Secrecy (PFS), also called forward secrecy (FS), refers to an encryption system that changes the keys used to encrypt and decrypt information frequently and automatically. This ongoing process ensures that even if the most recent key is hacked, a minimal amount of sensitive data is exposed.

Web pages, calling apps, and messaging apps all use encryption tools with perfect forward secrecy that switch their keys as often as each call or message in a conversation, or every reload of an encrypted web page. This way, the loss or theft of one decryption key does not compromise any additional sensitive information—including additional keys.

You can determine whether forward secrecy is present by inspecting the key agreement phase of session initiation. An application or website’s encryption system provides perfect forward secrecy if the session key is negotiated through an ephemeral exchange and is never transmitted or exposed during the session.

Diagram depicts a perfect forward secrecy (PFS) encryption system used to encrypt and decrypt information frequently and automatically.
FAQs

What is Perfect Forward Secrecy?

Perfect forward secrecy helps protect session keys against being compromised even when the server’s private key may be vulnerable. A feature of specific key agreement protocols, an encryption system with forward secrecy generates a unique session key for every user-initiated session. In this way, should any single session key be compromised, the rest of the data on the system remains protected. Only the data guarded by the compromised key is vulnerable.

The Heartbleed bug in OpenSSL, one of the most widely used SSL/TLS libraries, showed why this matters: a leaked long-term private key could be used to decrypt previously recorded traffic on servers that lacked forward secrecy. With forward secrecy in place, eavesdroppers and similar attackers cannot retrieve and decrypt recorded sessions and communications, even after a compromise of passwords or secret long-term keys.

Compare Backwards vs Forwards Secrecy

Perfect forward secrecy guards past sessions against future compromises of long-term keys, which could otherwise expose sensitive data such as passwords or additional secret keys.

Backward secrecy, sometimes called future secrecy or post-compromise security, helps a protocol “self-heal” after a compromise: once fresh keys are negotiated, an attacker who captured earlier keys loses access to future sessions. The two terms are easy to confuse, but forward secrecy protects past sessions against a future key compromise, while backward secrecy protects future sessions after a compromise has already occurred. For example, the Signal protocol uses the self-healing Double Ratchet Algorithm to achieve backward secrecy.

How Does Perfect Forward Secrecy Work?

Encryption perfect forward secrecy enables entirely private, short-term key exchanges between a client and the server.

Traditionally, web servers secure communication sessions with a long-term key pair. Whenever a client wants to talk to the server, the client generates a pre-master secret and encrypts it with the server’s public key. Both parties then derive the keys that encrypt the rest of the conversation from this pre-master secret.

Only someone who holds the server’s long-term private key can decrypt what the client and server discuss. A network team that supports the server might, for example, use that key to monitor communications while tracking down bugs.

Without perfect forward secrecy, an attacker who obtains that private key can spy on the server’s communications unobserved, because every client’s pre-master secret is encrypted under the same long-term key.

If the server secures communications with perfect forward secrecy, every time a new client starts a conversation with the server, the two negotiate a unique, ephemeral shared secret that is never sent over the wire and lasts only for that one session. A hacker who compromises a single session key is limited to what was shared during that one conversation.

Consider this hypothetical example of a basic instant messaging protocol using perfect forward secrecy:

  • Step One: X and Y each generate a pair of asymmetric, long-term, public keys and private keys. They use an already-authenticated channel to verify the public-key fingerprints, or verify them in person. The verification process establishes to a high degree of certainty that the public key’s claimed owner is also its actual owner.
  • Step Two: X and Y securely agree on an ephemeral key for the session using a key exchange algorithm such as Diffie-Hellman. They authenticate each other with the keys from Step One during this process.
  • Step Three: X uses the session key negotiated in Step Two to encrypt a message with a symmetric cipher and sends that encrypted version to Y.
  • Step Four: Y decrypts the message with the key from Step Two.

Step One never repeats. Instead, the process repeats starting from Step Two for each new message sent. Depending on the conversation, X and Y’s roles as sender or recipient may switch. It is this generation of new session keys for each message that achieves forward secrecy.

Even if a session key from Step Two is compromised at some point, that key is only good for one message. A compromise of the long-term keys from Step One would still leave past messages intact, although it might enable an attacker to impersonate X or Y going forward, leaving future messages vulnerable.
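The exchange above can be sketched in a few lines of Python. This is a minimal illustration, not a production protocol: it assumes the third-party cryptography package is available, uses X25519 for the ephemeral key agreement and Fernet for the symmetric cipher, and omits the long-term identity keys and authentication from Step One.

# Minimal sketch of Steps Two through Four using ephemeral X25519 key agreement.
# Assumes the "cryptography" package is installed; authentication is omitted.
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.fernet import Fernet

def derive_session_key(own_private, peer_public):
    # Derive a fresh symmetric session key from the ephemeral exchange.
    shared_secret = own_private.exchange(peer_public)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"pfs-demo session key").derive(shared_secret)
    return base64.urlsafe_b64encode(key)  # Fernet expects a base64-encoded 32-byte key

# Step Two: X and Y each generate a brand-new ephemeral key pair for this message.
x_ephemeral = X25519PrivateKey.generate()
y_ephemeral = X25519PrivateKey.generate()
x_session_key = derive_session_key(x_ephemeral, y_ephemeral.public_key())
y_session_key = derive_session_key(y_ephemeral, x_ephemeral.public_key())
assert x_session_key == y_session_key  # both sides hold the same session key

# Step Three: X encrypts a message with a symmetric cipher under the session key.
ciphertext = Fernet(x_session_key).encrypt(b"hello, forward secrecy")

# Step Four: Y decrypts it with the same session key.
assert Fernet(y_session_key).decrypt(ciphertext) == b"hello, forward secrecy"

# The ephemeral private keys are then discarded, so a later compromise of a
# long-term key cannot recover this session key or this message.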

Benefits of Perfect Forward Secrecy

There are many benefits to perfect forward secrecy. Brute-force attacks can eventually penetrate even very secure encryption, given enough time and computing power to try combinations of keys. Without forward secrecy, a single encryption key can protect entire batches of transactions across many sessions, so one successful attack exposes them all.

Brute-force hacking demands extensive time and resources, but that kind of payoff in sensitive data can make it worthwhile. Perfect forward secrecy ensures that brute-force attacks yield far less.

Generating a unique session key for each transaction limits hackers to obtaining data from one exchange per successful attack. A server protected by perfect forward secrecy is simply a less appealing target for a hacker, because it demands more effort and time. There’s also no future value in such an attack, because the server with PFS generates a new set of Diffie-Hellman parameters per session.

Does Blockchain Have Forward Secrecy?

Perfect forward secrecy protects past sessions against future compromises of passwords or secret keys. With forward secrecy in place, previously recorded and encrypted sessions and communications cannot be retrieved and decrypted by an attacker who compromises long-term secret keys in the future.

This is critical for a blockchain use case. A leaked key has the potential to compromise a significant amount of assets in a blockchain scenario since all data is stored forever.

VPN Perfect Forward Secrecy

VPN perfect forward secrecy simply refers to the use of perfect forward secrecy by VPNs. PFS makes VPN connections more secure, though it can reduce speed slightly in some cases.

Perfect Forward Secrecy Protocols

Several major protocol implementations provide perfect forward secrecy, at least as an optional feature, including SSH, IPsec (RFC 2412), and the IM library and cryptography protocol, Off-the-Record Messaging.

In Transport Layer Security (TLS) 1.3, the ephemeral Diffie–Hellman key exchange supports perfect forward secrecy. OpenSSL provides forward secrecy with elliptic curve Diffie–Hellman key exchange.

The Signal Protocol supports forward secrecy with the Double Ratchet Algorithm. However, WPA does not provide perfect forward secrecy.

How to Enable Perfect Forward Secrecy

Perfect forward secrecy works on sites that use either SSL or TLS sessions. Both cryptographic protocols allow secure connections to be created, but older versions do not dictate which cipher suite or key exchange is used. (TLS 1.3 is the exception: it removed non-ephemeral key exchange entirely, so every TLS 1.3 connection has forward secrecy.)

Instead, to enable perfect forward secrecy, the client and server must agree on a compliant cipher suite. Therefore, when configuring forward secrecy, set your servers up to offer cipher suites based on an ephemeral key exchange:

  • Ephemeral Elliptic Curve Diffie-Hellman (ECDHE)
  • Ephemeral Diffie-Hellman (DHE)

The key exchange must be ephemeral, meaning the server and client generate a unique set of Diffie-Hellman keys and use them just once, for a single session. The ephemeral keys are discarded from the server after the session ends, which ensures that any given session key is of little use to hackers after the fact.

If possible, select Elliptic Curve DHE suites. These are faster than the standard DHE counterparts.

To determine whether perfect forward secrecy is enabled, inspect a site’s connection security details, for example in the browser’s certificate and connection information. If the negotiated key exchange is “ECDHE” or “DHE”, the session is using forward secrecy.
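As a quick check from the command line, the following sketch uses Python’s standard ssl module to see which key exchange a server negotiates. The hostname is a placeholder, and the string matching is a heuristic rather than a definitive audit.

# Minimal sketch: inspect the negotiated cipher suite of a live server.
import socket
import ssl

def check_forward_secrecy(hostname, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cipher_name, tls_version, _bits = tls.cipher()
            print(hostname, tls_version, cipher_name)
            # TLS 1.3 suites are always ephemeral; on TLS 1.2 and earlier,
            # look for ECDHE or DHE in the cipher suite name.
            if tls_version == "TLSv1.3" or "ECDHE" in cipher_name or "DHE" in cipher_name:
                print("Forward secrecy appears to be in use.")
            else:
                print("No ephemeral key exchange detected.")

check_forward_secrecy("example.com")  # placeholder hostname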

Most modern servers are already configured for perfect forward secrecy, but if your server is not, complete the process in four steps:

  • Locate the SSL/TLS protocol configuration on your server.
  • Add the protocol versions you want to support to that configuration.
  • Set the SSL cipher suites, and enforce the ordering of your ciphers by using ‘SSLHonorCipherOrder on’ in Apache and ‘ssl_prefer_server_ciphers on;’ in nginx.
  • Restart the server.

Importantly, it is easy to configure perfect forward secrecy incorrectly. A common error is to enable support for DHE or ECDHE without actually enforcing the ordering of the ciphers. Enabling them alone does not mean the server is using perfect forward secrecy.

In addition, prioritize perfect forward secrecy cipher suites over other options to ensure it works properly. In some cases you may need to disable weaker cipher suites entirely so that weaker forms of encryption cannot take priority by mistake, which would open the door to FREAK attacks and other SSL/TLS vulnerabilities.

It is also important to limit long-lived session tickets and session IDs. These hold onto session information on the client side for extended periods of time, sometimes until the system is rebooted, which undermines forward secrecy.

When to Use Perfect Forward Secrecy

In November 2014, Sony Pictures experienced a major security breach in which hackers stole private keys and SSH keys from its servers. This makes the case for perfect forward secrecy: without it, the attackers could use those stolen keys to decrypt confidential data Sony may have exchanged in the past.

Any current site should support PFS. Perfect forward secrecy is valuable against attackers who can achieve READ access, such as recording traffic, but not WRITE access. It does not help against an attacker who can break the underlying ciphers through cryptanalysis or who can tamper with how the session key generator functions; for example, a sufficiently large quantum computer might break these ciphers in a reasonable amount of time.

However, in most cases perfect forward secrecy successfully decouples the confidentiality of past conversations from any compromise of a long-term secret key. This raises the question: why don’t all websites support PFS? Organizations have several reasons for failing to implement perfect forward secrecy.

Lack of infrastructure support and lack of browser support are among the reasons. PFS demands specific combinations of SSL settings, particularly the ephemeral Diffie-Hellman cipher suites it uses for the key exchange. Most current web servers and OpenSSL builds support PFS and these ciphers; servers and infrastructure that do not can still gain PFS by terminating SSL/TLS on newer, software-based load balancers.

The performance impact of perfect forward secrecy is another issue: enabling PFS can reduce the SSL performance of a site by more than 90 percent, and it is approximately three to four times more computationally expensive with traditional RSA keys. However, PFS adds little to no overhead with newer elliptic curve cryptography certificates, and as long as your SSL decryption infrastructure has capacity, or can scale capacity on demand, performance won’t be a problem even if you are still using RSA 2048-bit certificates.

Implementation complexity has been a barrier to achieving perfect forward secrecy in the past, but implementation is easier than ever with modern load balancers and other packaged solutions. To implement PFS ideally, use an ECC certificate to negotiate fast SSL/TLS encryption with PFS, and simultaneously leverage an RSA certificate as a backup for compatibility with older browsers.

Waiting for a compelling event in the form of a data breach or attack is a mistake.

Does Avi Networks Support Perfect Forward Secrecy?

Why don’t all websites support perfect forward secrecy? Because many load balancers are not equipped to drive PFS efficiently. In some cases, enabling forward secrecy can reduce SSL performance on a site by more than 90%.

The Avi Networks innovation is unlimited SSL performance scaling and massively improved PFS performance. Learn more about the Avi Networks platform here.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.