Enterprise Application Security


Enterprise Application Security Definition

Enterprise application security, or enterprise-level appsec, is the process of securing applications to prevent breaches. The basics of enterprise application security involve measuring vulnerability severity with CVSS scores and implementing risk response protocols and patch management.
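
For reference, CVSS v3.x defines fixed qualitative severity bands for its 0.0–10.0 base scores. The short Python sketch below simply encodes that published mapping; the function name is illustrative, not a standard API.

    def cvss_severity(score: float) -> str:
        # Qualitative severity bands from the CVSS v3.x specification.
        if not 0.0 <= score <= 10.0:
            raise ValueError("CVSS base scores range from 0.0 to 10.0")
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(cvss_severity(9.8))  # prints "Critical"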

According to the Open Web Application Security Project (OWASP), the most critical risks and common security vulnerabilities include broken access control, code injection, cryptographic failures, and security misconfiguration.

To prevent these and other risks, enterprise application security best practices include threat modeling to identify potential vulnerabilities and risk assessment to assess the likelihood of particular types of attacks.

[Image: traffic from the internet passes through vulnerability assessment, scanning & detection, and protection & validation before arriving at the corporate network.]

Enterprise Application Security FAQs

What is Enterprise Application Security?

Enterprise-grade application security threats may be device-specific, network-specific, or user-specific.

Device-specific threats. Many device-specific threats to enterprise application security exist. Personal devices used for work (BYOD) and other connected personal devices on the enterprise network represent points of threat. Insecure applications and OS vulnerabilities can be entry points for malware injection. Third-party applications such as Facebook Messenger or WhatsApp are also threat sources for organizations.

Network-specific threats. These place all users and devices at risk. Unsecured network connections such as WiFi can expose all connected devices on the network to cyber attacks and threats to supply chains. Using VPNs can mitigate some damage, but network monitoring systems and malware protection are preferred.

User-specific threats. Cyber attacks may arise from both negligent and malicious employees. Even unwittingly, negligent employees can put the organization at risk by clicking on suspicious links, revealing confidential credentials, falling for phishing attacks, and other errors.

Types of App-Specific Enterprise Application Security Threats

Awareness of potential app-specific threats may help mitigate against them. Here are some of the common types of app-specific enterprise application security threats:

Injection flaws. Hackers inject malicious queries into database systems to extract information or corrupt the database (see the parameterized-query sketch after this list).

Broken authentication. A broken authentication system is vulnerable to brute force attacks.

Exposed sensitive data. Passing sensitive data like authentication credentials and credit card information unencrypted exposes it to interception and misuse, including phishing attacks.

Security misconfiguration. Insecure default configurations, incomplete configurations, and open cloud storage are all misconfigurations that attackers can exploit.

Insecure deserialization. Applications that deserialize (convert serialized data back into objects) malicious or untrusted data leave the system vulnerable to injection of malicious serialized Java objects.

Component vulnerabilities. Devices or components with known vulnerabilities expose IT infrastructure to attacks.
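
To make the injection item above concrete, here is a minimal Python sketch using the standard-library sqlite3 module. The table, data, and payload are illustrative; the point is that a parameterized query treats attacker input as a literal value rather than as SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    user_input = "alice' OR '1'='1"  # a classic injection payload

    # Vulnerable pattern: string concatenation lets the payload
    # rewrite the query and match every row.
    # conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe pattern: the placeholder binds the payload as a plain value.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the payload matches no user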

Enterprise Application Security Requirements

The SAMM (Software Assurance Maturity Model) framework of the Open Worldwide Application Security Project (OWASP) offers open source tools for prioritizing application security development. Organizations can follow SAMM as a guideline for operational assessment during the secure software development lifecycle (SSDLC).

The SAMM toolset helps teams produce graphical representations of the existing enterprise application security capabilities. Using this, the team can then assess how they perform security testing, manage threat modeling, conduct secure code reviews, and manage bugs to resolution.

Although some organizations begin with dynamic application security testing (DAST) just before release, this can lead to delays and extra work for developers. Analyzing the code as it is written in static form speeds the process and reduces workload.

An over-dependence on either static or dynamic testing, or an either/or approach to application security, is not likely to work for most enterprise-level organizations. Without a more holistic enterprise application security approach in place at every stage, CISOs receive application penetration test results too late in the development lifecycle and return with these findings to teams that are poorly versed in security and poorly situated to fix vulnerabilities.

Organizations that focus solely on post-development testing will almost certainly allow some defects to slip through. Given the tremendous deadline pressure software teams face, this is especially troubling when widespread flaws are identified or major changes are needed.

The best approach is to deploy enterprise application security software throughout the process. Test early and test often with static application security testing (SAST) during coding and DAST later to catch errant flaws. Both should be followed up with design risk analysis, enterprise application security architecture risk analysis, evaluation of security metrics, and other more mature SDLC testing.

Enterprise Application Security Models

The National Institute of Standards and Technology (NIST) Cybersecurity Framework is a powerful set of guidelines that helps organizations develop and improve their cybersecurity posture. The framework sets forth recommendations, rules, guidelines, and standards for identifying, detecting, and preventing cyber attacks for use across industries, and is the gold standard for creating cybersecurity programs.

The framework categorizes all cybersecurity capabilities, daily activities, processes, and projects into these 5 core functions: identify, protect, detect, respond, and recover.

Enterprise Application Security Best Practices

There are a number of best practices that are part of enterprise application development security:

Adopt the OWASP Top 10. To minimize risk to web applications and produce more secure code, organizations should adopt the OWASP Top 10.

Implement a secure software development lifecycle (SDLC). Integrate security into each aspect of the existing development process. The benefits include more secure software, a more secure design with improved coding processes, and significantly reduced costs as a result of early detection and mitigation of vulnerabilities.

Educate stakeholders. Human users are frequently the weak point or source of breaches. Educate users on the use of multi-factor authentication, antivirus programs, VPNs and firewalls, and the danger of shared passwords and other secrets.

Conduct regular code reviews. Review code regularly, and conduct penetration testing at least annually based on a well-defined scope and a snapshot of the system at a specific point in time.

Implement a strict access control policy. Give IT admins centralized control over organization-wide access and restrictions to networks, devices, and users.

Force strong user authentication. Most data breaches are caused by weak passwords and compromised credentials. The IT team should enforce strong user authentication with access control and policy tools.

Encrypt all data. Unencrypted data is vulnerable to interception, exploitation, and extraction. In-transit data should be secured using SSL/TLS with 256-bit encryption (see the sketch following this list). Protecting stored data with encryption and application-level access control can prevent data exploits.

Update just in time. Time updates to software, firmware, and applications with a proper process: release the update first in the test environment, adjust for any breakdowns, and only then roll out the update across the organization in stages.

Identify all points of vulnerability. Document all elements in the IT ecosystem, including applications, hardware, and on-premise and cloud-based network elements to improve monitoring and tracking and create transparency.

Make security part of the business lifecycle. Security analysis, repair, testing, and evolution should be part of the business lifecycle. Training and drills for employees as well as tests for software, applications, and hardware should all be part of the IT team’s ongoing tasks.
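
As a minimal illustration of the in-transit half of the encryption practice above, the following Python sketch wraps a TCP connection in TLS using the standard-library ssl module. The hostname is a placeholder; which cipher (for example, a 256-bit AES-GCM suite) is used depends on what the two endpoints negotiate.

    import socket
    import ssl

    # create_default_context() enables certificate and hostname
    # verification and modern protocol versions by default.
    context = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. "TLSv1.3"
            print(tls.cipher())   # the negotiated cipher suite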

Does Avi Offer Enterprise Application Security Services?

With Avi Networks, consistent, secure enterprise application delivery is easy. One application services platform can frictionlessly deploy any app, anywhere.

The VMware NSX Advanced Load Balancer (Avi) delivers multi-cloud security to protect applications and microservices from today’s threats and improve ingress control. Avi’s comprehensive solution provides network and application security with a context-aware web application firewall (WAF) to protect against all forms of digital threats. Visibility through advanced analytics and security insights helps customize a comprehensive enterprise application security policy per application, microservice, or tenant.

Envoy


Envoy Definition

Envoy is an open-source edge and service proxy. The Istio ingress controller is built on Envoy, which is why Envoy is sometimes thought of as an Envoy ingress controller. The lone Istio component to interact with traffic on the data plane, the Envoy high-performance proxy was developed in C++ and designed to moderate inbound and outbound services for the Istio service mesh.

[Image: the Envoy proxy as the foundation of the Istio ingress controller.]

Envoy FAQs

What is Envoy?

The two main challenges that arise when organizations move toward microservices and distributed architecture are networking and observability. The Envoy proxy was created at Lyft to cope with these challenges.

The high-performance, distributed C++ Envoy proxy was designed both for single applications and services and as a universal data plane and communication bus for large microservice architectures. Similar to both hardware and cloud load balancers, Envoy runs in a platform-agnostic way and abstracts the network, offering common features; when all service traffic in an infrastructure flows through an Envoy mesh, that traffic becomes visible. This service mesh proxy is logically most comparable to a software load balancer.

Although there are various traditional and tested L4 and L7 proxies such as NGINX and HAProxy, Envoy has several additional benefits:

  • Developed for modern microservices
  • Translates between HTTP/2 and HTTP/1.1
  • Proxies any TCP protocol
  • Proxies raw data streams, databases, and WebSockets
  • SSL enabled by default
  • Built-in dynamic service discovery and load balancing
  • Dynamic configuration of the Envoy network, adding of hosts, and mapping of requests from clients to services

Envoy supports several load balancing algorithms, including the following:

Weighted round-robin. Selects each available upstream host in round-robin order, taking host weights into account.

Weighted least request. The load balancer uses one of two algorithms, depending on whether hosts have equal weights.

All weights equal. The load balancer samples a configured number of available hosts and selects the one with the fewest active requests.

All weights not equal. When multiple hosts in the cluster have different load balancing weights, the load balancer shifts to a weighted round-robin schedule in which weights are dynamically adjusted based on each host's request load at the time of selection.

Ring hash. The load balancer implements consistent hashing to upstream hosts based on a hash of some property of each request (see the consistent-hashing sketch after this list).

Maglev. Like ring hash, the Maglev load balancer implements consistent hashing to upstream hosts, using the configured routing rules to choose what to hash on and generating a fixed-size lookup table that changes with minimal disruption when hosts change.

Random. The random load balancer selects an available host at random and may offer better performance than round-robin when no Envoy health check policy is in place.
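
To make the consistent-hashing idea behind ring hash concrete, here is a minimal Python sketch. It is not Envoy's implementation; the host addresses and the number of ring points are made-up examples.

    import bisect
    import hashlib

    def stable_hash(key: str) -> int:
        # MD5 is used only for stable placement on the ring, not security.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        # Each host owns many points on a ring; a request key is routed to
        # the first host point at or after the key's own hash, so adding or
        # removing one host remaps only a small share of keys.
        def __init__(self, hosts, points_per_host=100):
            self.ring = sorted(
                (stable_hash(f"{host}#{i}"), host)
                for host in hosts
                for i in range(points_per_host)
            )
            self.hashes = [h for h, _ in self.ring]

        def get_host(self, key: str) -> str:
            idx = bisect.bisect(self.hashes, stable_hash(key)) % len(self.ring)
            return self.ring[idx][1]

    ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    print(ring.get_host("session-abc"))  # a key always maps to the same host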

Envoy Benefits and Features

Users can enforce policies based on service identity with Istio Envoy. Envoy proxies deploy as sidecars next to services, using built-in features such as staged rollouts with %-based traffic split to logically augment those services. Envoy sidecars abstract the network from the core business logic.

Envoy provides an advanced load balancing mechanism for distributed applications because it is a proxy rather than a library. Envoy can also be used as a network API gateway.

Envoy API traffic management capabilities include resiliency features such as timeouts, automatic retries, load balancing, circuit breakers, coarse- and fine-grained rate limiting (including via an external rate-limiting service), observability and metrics, outlier detection, and request shadowing. Robust Envoy metrics about traffic reveal insights for users with tools such as Grafana.
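
As a concept illustration only, not Envoy's implementation, the following Python sketch shows the token-bucket idea behind rate limiting: requests spend tokens that refill at a fixed rate, and requests that find the bucket empty are rejected (a proxy would typically answer HTTP 429). The rate and burst numbers are made-up examples.

    import time

    class TokenBucket:
        def __init__(self, rate: float, burst: int):
            self.rate, self.capacity = rate, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            # Refill tokens in proportion to elapsed time, up to capacity.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(rate=5, burst=10)  # 5 requests/sec, bursts of 10
    print([bucket.allow() for _ in range(12)].count(True))  # 10 pass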

Envoy is self-contained and platform-agnostic, offering high performance despite a small memory footprint. Envoy proxies form a transparent mesh outside of the application, in an out-of-process architecture.

The Envoy architecture offers various benefits over the traditional library approach. Envoy's strong service-to-service communication works with applications in any language, including C++, Go, Java, Python, and PHP. This allows Envoy to control all network communication between users and platforms transparently, to be upgraded dynamically, and to be deployed more simply across a distributed structure.

Envoy is a network proxy at L3 and L4, meaning it facilitates communication in the network and transport layers. Envoy also supports HTTP L7 filter layer tasks, including rate limiting, buffering, sniffing Amazon's DynamoDB traffic, and routing/forwarding, and it can serve as a raw TCP proxy, an HTTP proxy, or a UDP proxy.

Envoy supports HTTP/1.1, HTTP/2, and HTTP/3 for upstream and downstream communication, and can translate between these protocols. Envoy also supports gRPC.

Envoy service discovery involves a layered set of dynamic configuration APIs that provide dynamic updates such as backend clusters, cryptographic items, host information, HTTP routing rules, and listening sockets. Static Envoy configuration files can replace some layers for simpler deployment.

In HTTP mode, Envoy can redirect requests based on parameters such as authority, content type, path, runtime values, and other factors. This allows users to deploy Envoy as a front/edge API gateway.

The Envoy health check subsystem allows users to automatically perform active health checks of all the services in a cluster, with results informing discovery and routing decisions.

Envoy Use Cases

An Envoy sidecar in Kubernetes has a variety of uses; here are a few common ones:

Handle service-to-service communications with an Istio Envoy sidecar. An Envoy sidecar proxy can also enable communication among services as an L3/L4 application proxy if the Envoy proxy instance has the same lifecycle as the parent application. This allows users to extend applications across various technology stacks.

Use Envoy proxies to route traffic as an API gateway. The Envoy proxy accepts inbound traffic as a front proxy sitting between the application and the client request, collating request information and directing it as needed inside a service mesh. The Envoy proxy offers authentication, load balancing, traffic routing, and monitoring at the edge. Envoy monitoring system capabilities are intended to offer granular visibility and greater actionable insight.

Leverage Envoy to achieve scale. Achieve faster response times by routing requests to a read-only cluster through a graded configuration on Envoy.

How to Use Envoy Proxy in Kubernetes

Kubernetes, Envoy, and Istio are related open-source technologies that manage distributed systems.

Kubernetes or K8s is a container orchestration platform. Kubernetes manages clusters of instances and deploys containerized applications. In Kubernetes, service pods hold Envoy Docker containers. Envoy's Kubernetes capabilities facilitate load balancing, scaling, and persistent storage, and enable the platform to operate as a sidecar. Envoy sidecars in Kubernetes run in front of every container of each service instance and are reverse proxies that provide all non-business-logic services such as encryption and security.

Envoy is the service/edge proxy described within this glossary.

Istio is a service mesh or fabric that is built or layered on top of a platform like Kubernetes or Envoy. It offers a uniform way to manage, connect, secure, coordinate, and observe services and sidecars for them.

Working together, these technologies allow for more efficient and secure management and orchestration of distributed applications. Highly sophisticated, enterprise-scale businesses are now implementing their IT architecture entirely as service mesh that can independently and rapidly heal failures, reroute traffic, and shift loads across any region or cloud service provider without any human intervention.

Find Envoy documentation here.

Envoy Proxy Alternatives

There are a few alternatives to Envoy: NGINX, Avi Networks, HAProxy, and Rust Proxy among them. Read on to learn why Avi is the best choice as an alternative to the entire service mesh.

Does Avi Offer an Envoy Proxy Alternative?

Yes. As part of a distributed microservices architecture, cloud-native applications often run in containers. Kubernetes deployments are the de-facto standard for orchestration of these containerized applications.

Exponential growth creates sprawl, an unintended outcome of microservices architecture. This spiraling growth presents numerous challenges inside a Kubernetes cluster, including encryption, routing between multiple versions and services, authentication and authorization, and load balancing.

Building on Kubernetes allows the service mesh to abstract away how inter-process and service-to-service communications are handled, just as containers abstract the operating system away from the application.

Avi integrates with the Tanzu Service Mesh (TSM), which is built on top of Istio with value-added services. By expanding on the TSM solution, Avi offers north-south connectivity, security, and observability inside and across Kubernetes clusters, and multiple sites and clouds. In addition, enterprises are able to connect modern Kubernetes applications to traditional application components in VM environments and clouds, secure transactions from end-users to the application, and seamlessly bridge between multiple environments. Avi also offers WAF protection that is far more comprehensive than Envoy WAF capabilities.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Equal-Cost Multi-Path Routing (ECMP)


Equal-Cost Multi-Path Routing Definition

Equal Cost Multi-Path (ECMP) is a network routing strategy that enables packets of traffic from the same flow or session—traffic with the same destination IP address and/or source—to transmit across multiple best paths with equal cost. This technique fully utilizes bandwidth along links to the same destination that would otherwise remain unused, increasing throughput and load balancing traffic.

Any time routing technology forwards packets, it must decide which next-hop path to use. To make a decision, the device takes packet header fields that identify flows into account.

ECMP is a method for routing traffic based on the principle of identifying and using next-hop paths of equal cost, relying on ECMP hash algorithms and routing metric calculations. The network gives multiple best paths of the same cost the same metric values, network cost, and preference. Through the routing table, the ECMP process then identifies a set of next hops, the equal-cost multi-path (ECMP) set, each an equal-cost next-hop address toward the destination. ECMP can be used together with most routing protocols because ECMP demands only a local, per-hop decision about the next hop, which each router makes independently.

Although ECMP may increase bandwidth substantially by load-balancing traffic across multiple best paths of equal cost, in practice there can be problems in deployment if the right decisions are not made.

By default, ECMP load balancing is set as Per-Destination Load Balancing. This is a type of per-flow load balancing that guarantees that even if multiple ECMP paths are available, packets for a given source-destination host pair will take the same path.

Per-Packet Load Balancing is also possible. This kind of load balancing determines which path each packet will take to reach its destination IP using a round-robin method. For some kinds of traffic, such as Voice over IP (VoIP), this is inappropriate, because the sequence in which packets arrive is important.

[Image: equal-cost multi-path (ECMP) routing deploys multiple equal-cost routes to load balance traffic (sessions) to the same destination.]

Equal-Cost Multi-Path Routing (ECMP) FAQs

What is Equal-Cost Multi-Path Routing (ECMP)?

Equal-cost multi-path (ECMP) in networking deploys multiple equal-cost routes to load balance traffic (sessions) to the same destination. This takes the place of selecting just one ECMP route from the routing table to add to the forwarding table while ignoring the other routes unless there is a failure along the selected route.

Enabling ECMP functionality on a virtual router also allows the firewall to use all available bandwidth to the same destination efficiently and leave no links unused. It also shifts traffic dynamically to another ECMP member should a link fail, rather than taking time to select an alternate route or waiting for the routing protocol or RIB table to do so—all reducing downtime.

For routes to be considered equal they must be the same in terms of routing protocol and cost. For example, an OSPF route and a static route are from different sources, so they are not considered equal for ECMP load sharing. And if two routes from the same protocol have unequal costs, only the best route is installed in the routing table.

ECMP hashing prevents out-of-order packets by functioning on a per-flow basis. ECMP hashing keeps no record of flow states or of the hash to each next hop, and it does not guarantee that equal traffic is sent to each next hop. All packets with the same source and destination IP addresses and the same ports always hash to the same next hop.
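
The following Python sketch illustrates this per-flow behavior: hash the flow's 5-tuple and use the result to choose a next hop. Real routers use vendor-specific hash functions in hardware; the addresses and next-hop list here are made-up examples.

    import hashlib

    def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
        # Every packet of the same flow produces the same hash, so the
        # flow stays on one path and packets cannot arrive out of order.
        flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        digest = int(hashlib.sha256(flow).hexdigest(), 16)
        return next_hops[digest % len(next_hops)]

    paths = ["10.1.1.1", "10.1.1.2", "10.1.1.3", "10.1.1.4"]
    # Two packets of the same flow always take the same path.
    print(ecmp_next_hop("192.0.2.10", "198.51.100.7", 40000, 443, "tcp", paths))
    print(ecmp_next_hop("192.0.2.10", "198.51.100.7", 40000, 443, "tcp", paths))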

As a per-hop decision limited to a single router, ECMP can be used to load-balance traffic over multiple paths with most routing protocols and may provide substantial increases in bandwidth. Various routing protocols, including Border Gateway Protocol (BGP) and the Open Shortest Path First (OSPF), allow ECMP routing.

How Does ECMP Work?

ECMP software features enable the Open Shortest Path First (OSPF) protocol to add routes with equal costs and multiple next-hop addresses to a given destination to the routing switch's forwarding information base (FIB).

In an OSPF domain for a particular destination network, there are several types of ECMP routes:

  • Intra-area routes lead through the same OSPF area to the destination
  • Inter-area routes lead through another OSPF area to the destination
  • External routes lead through another autonomous system (AS) to the destination

Multiple ECMP next-hop routes cannot mix intra-area, inter-area, and external routes.

Load balancing distributes traffic across links based on various parameters. ECMP per-flow load balancing uses Layer 3 routing information to distribute the packets. When the router identifies multiple paths to a destination, it updates the routing table with multiple entries.

With per-flow load balancing, the router can use many routes to achieve load sharing across many source-destination host pairs. Even if multiple paths are available, packets for a particular source-destination host pair take the same path, while traffic streams for different pairs usually diverge.

The main benefit of per-flow load balancing is cost-effective, elastic scaling. However, ECMP load balancing also has its drawbacks. For example, neither per-packet load balancing nor label load balancing is supported. And BGP multi-path, with or without PIC Edge, is not supported with ECMP.

ECMP vs LACP: What’s the Difference?

ECMP and link aggregation control protocol (LACP), a layer 2 technology, are both related to load balancing. LACP helps provide redundancy in case of a network failure. And although both attempt to achieve redundancy, they do so in different ways at different points in the network.

Which is optimal depends on where the organization must achieve redundancy. For example:

To avoid blackholing traffic due to configuration errors, use PAgP; if that is unavailable, ECMP can be used. Use ECMP where both ends are not managed. But where speed of detection and repair of device bugs and link failures is more critical, use LACP, which identifies and repairs failures more quickly.

How to Configure ECMP

ECMP is supported by multiple routing protocols. For routing IPv4 traffic, use OSPFv2. In an OSPFv2 network, configure ECMP by specifying the number of equal-cost paths the router may choose with the maximum-paths command. This value has traditionally been set to 4, but newer Cisco IOS versions allow users to configure up to 32.

For routing IPv6 traffic, use OSPFv3, which retains most of the underlying OSPF functionality. Use the maximum-paths command to define ECMP inside the OSPF process, as the ECMP configuration remains mostly the same.

Each router in the network will need to be configured one by one because maximum-path values apply to each router individually.

Border Gateway Protocol (BGP) also supports multi-path capabilities, but unlike OSPF's single maximum-paths command, BGP demands two commands, one for eBGP and one for iBGP peerings, to achieve this multi-path routing capability.

Does VMware NSX Advanced Load Balancer Offer ECMP Load Balancing?

Yes. Vantage can manage virtual service load balancing capacity by dynamically scaling it out or in across more or fewer Service Engines (SEs). By default, the primary SE for the virtual service coordinates traffic flow distribution among the secondary SEs and itself.

On OpenStack with Contrail, Vantage can take advantage of Contrail's ECMP support and manage the orchestration of ECMP routes as part of virtual service placement.

ECMP functionality can be homed at the upstream edge router, such as a Juniper MX, or at the Contrail vRouter on the host hypervisor. Learn more here.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.

Elliptic Curve Cryptography


Elliptic Curve Cryptography Definition

Elliptic Curve Cryptography (ECC) is a key-based technique for encrypting data. ECC focuses on pairs of public and private keys for decryption and encryption of web traffic.

ECC is frequently discussed in the context of the Rivest–Shamir–Adleman (RSA) cryptographic algorithm. RSA achieves one-way encryption of things like emails, data, and software using prime factorization.

[Diagram: graph of the elliptic curve equation y² = x³ + ax + b.]
FAQs

What is Elliptic Curve Cryptography?

ECC, an alternative technique to RSA, is a powerful cryptography approach. It generates security between key pairs for public key encryption by using the mathematics of elliptic curves.

RSA does something similar with prime numbers instead of elliptic curves, but ECC has gradually been growing in popularity due to its smaller key size and ability to maintain security. This trend will probably continue, because as key sizes grow, the demands on scarce mobile resources grow with them, while devices face increasing pressure to remain secure. This is why it is so important to understand elliptic curve cryptography in context.

In contrast to RSA, ECC bases its approach to public key cryptographic systems on how elliptic curves are structured algebraically over finite fields. Therefore, ECC creates keys that are more difficult, mathematically, to crack. For this reason, ECC is considered to be the next generation implementation of public key cryptography and more secure than RSA.

It also makes sense to adopt ECC to maintain high levels of both performance and security. That's because ECC is in increasingly wide use as websites strive simultaneously for greater security of customer data and greater mobile optimization. More sites using ECC to secure data means a greater need for this kind of quick guide to elliptic curve cryptography.

An elliptic curve for current ECC purposes is a plane curve over a finite field which is made up of the points satisfying the equation:
y² = x³ + ax + b.

In this elliptic curve cryptography example, any point on the curve can be mirrored over the x-axis and the curve will stay the same. Any non-vertical line will intersect the curve in three places or fewer.
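
As a toy illustration of the equation, the Python snippet below checks whether a point lies on a curve over a finite field. The parameters are small made-up values, not a production curve.

    def on_curve(x: int, y: int, a: int, b: int, p: int) -> bool:
        # The point (x, y) is on the curve when y^2 = x^3 + ax + b (mod p).
        return (y * y - (x ** 3 + a * x + b)) % p == 0

    a, b, p = 2, 3, 97               # toy curve: y^2 = x^3 + 2x + 3 mod 97
    print(on_curve(3, 6, a, b, p))   # True: 36 = 27 + 6 + 3
    print(on_curve(3, 7, a, b, p))   # False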

Elliptic Curve Cryptography vs RSA

The difference in size-to-security yield between RSA and ECC encryption keys is notable. The table below shows the key sizes needed to provide the same level of security. In other words, a 384-bit elliptic curve cryptography key achieves the same level of security as a 7680-bit RSA key.

RSA Key Length (bit)    ECC Key Length (bit)
1024                    160
2048                    224
3072                    256
7680                    384
15360                   521

There is no linear relationship between the sizes of ECC keys and RSA keys. That is, an RSA key size that is twice as big does not translate into an ECC key size that’s doubled. This compelling difference shows that ECC key generation and signing are substantially quicker than for RSA, and also that ECC uses less memory than does RSA.

Also, unlike in RSA, where both are integers, in ECC the private and public keys are not equally exchangeable. Instead, in ECC the public key is a point on the curve, while the private key is still an integer.

A quick comparison of the advantages and disadvantages of ECC and RSA algorithms looks like this:

ECC features smaller ciphertexts, keys, and signatures, and faster generation of keys and signatures. Its decryption and encryption speeds are moderately fast. ECC enables lower latency than inverse throughput by computing signatures in two stages. ECC features strong protocols for authenticated key exchange, and support for the technology is strong.

The main disadvantage of ECC is that it isn't easy to implement securely. Compared to RSA, which is much simpler on both the verification and encryption sides, ECC presents a steeper learning curve and is a bit slower to yield actionable results.

However, the disadvantages of RSA catch up with you soon. Key generation is slow with RSA, and so are decryption and signing, which aren't always easy to implement securely.

Advantages of Elliptic Curve Cryptography

Public-key cryptography works using algorithms that are easy to process in one direction and difficult to process in the reverse direction. For example, RSA relies on the fact that multiplying prime numbers to get a larger number is easy, while factoring huge numbers back to the original primes is much more difficult.

However, to remain secure, RSA needs keys that are 2048 bits or longer. This makes the process slow, and it also means that key size is important.

Size is a serious advantage of elliptic curve cryptography, because it translates into more power for smaller, mobile devices. Factoring is far simpler and requires less energy than solving for an elliptic curve discrete logarithm, so for two keys of the same size, RSA's factoring-based encryption is more vulnerable.

Using ECC, you can achieve the same security level using smaller keys. In a world where mobile devices must do more and more cryptography with less computational power, ECC offers high security with faster, shorter keys compared to RSA.

How Secure is Elliptic Curve Cryptography?

There are several potential vulnerabilities to elliptic curve cryptography, including side-channel attacks and twist-security attacks. Both types aim to invalidate the ECC’s security for private keys.

Side-channel attacks, including differential power attacks, fault analysis, simple power attacks, and simple timing attacks, typically result in information leaks. Simple countermeasures exist for all types of side-channel attacks.

An additional type of elliptic curve attack is the twist-security attack, or fault attack. Such attacks may include invalid-curve attacks and small-subgroup attacks, and they may result in the victim's private key leaking out. Twist-security attacks are typically mitigated simply through careful parameter validation and curve choices.

Although there are certain ways to attack ECC, the advantages of elliptic curve cryptography for wireless security mean it remains a more secure option.

What Is an Elliptic Curve Digital Signature?

An Elliptic Curve Digital Signature Algorithm (ECDSA) uses ECC keys to ensure each user is unique and every transaction is secure. Although this kind of digital signing algorithm (DSA) offers an outcome functionally indistinguishable from other DSAs, it uses the smaller keys you'd expect from ECC and is therefore more efficient.
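
As a minimal sketch of ECDSA in practice, the following Python example signs and verifies a message with the widely used third-party cryptography package (the message text is illustrative). verify() raises an InvalidSignature exception if the signature or message has been tampered with.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Generate an ECC key pair on the NIST P-256 curve.
    private_key = ec.generate_private_key(ec.SECP256R1())
    message = b"transaction payload"

    # Sign with the private key; verify with the public key.
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
    public_key = private_key.public_key()
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature verified")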

What is Elliptic Curve Cryptography Used For?

ECC is among the most commonly used implementation techniques for digital signatures in cryptocurrencies. Both Bitcoin and Ethereum apply the Elliptic Curve Digital Signature Algorithm (ECDSA) specifically in signing transactions. However, ECC is not used only in cryptocurrencies. It is a standard for encryption that will be used by most web applications going forward due to its shorter key length and efficiency.

Does VMware NSX Advanced Load Balancer Support Elliptic Curve Cryptography?

The VMware NSX Advanced Load Balancer’s software load balancer offers an elegant ECC solution. The VMware NSX Advanced Load Balancer fully supports termination of SSL- and TLS-encrypted HTTPS traffic. The VMware NSX Advanced Load Balancer’s support for SSL/TLS has included support for both RSA and ECC keys without the need for any proprietary hardware. See documentation for Elliptic Curve versus RSA Certificate Priority within the VMware NSX Advanced Load Balancer.


For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.

Elastic Scale


Elastic Scale Definition

With elastic scale, data centers are able to adapt to increases in application traffic by rapidly adding load balancing and application resources.

[Image: elastic scale from application clients (end users) through load balancers, which scale and distribute resources in response to traffic across the data center and application servers.]
FAQs

What is Elastic Scale?

Elastic scaling is the ability to automatically add or remove compute or networking infrastructure based on changing application traffic patterns. Elastic load balancer auto scaling is used to automatically adjust the amount of resources (for instance, number of load balancers) that are allocated to deliver an application in response to changes in traffic patterns.

How Does Elastic Scale Work?

Elastic scaling works using an initial launch configuration and scaling policies. When key performance thresholds are crossed, the load balancer will add or remove instances of itself to ensure consistent application delivery. This ensures the load balancer will continue to respond to requests during times of rapid demand change or if infrastructure performance degrades.
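
As an illustration of such a scaling policy, here is a minimal Python sketch of a target-tracking-style sizing rule. The 60% target and the numbers are made-up examples, not the behavior of any particular platform.

    import math

    TARGET_UTILIZATION = 0.60  # keep per-instance utilization near 60%

    def desired_instances(current: int, utilization: float) -> int:
        # Size the pool so per-instance utilization returns to the target.
        return max(1, math.ceil(current * utilization / TARGET_UTILIZATION))

    print(desired_instances(4, 0.90))  # 6 -> scale out under load
    print(desired_instances(4, 0.30))  # 2 -> scale in when idle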

How Does Elastic Scale Apply to Load Balancing?

Load balancing routes traffic from overloaded servers to servers that have capacity. When elastic scaling is applied to load balancing, it can automatically determine how to route traffic and spin up additional instances if workloads exceed capacity. As more load balancing servers are brought online, capacity to handle peak traffic increases. This rapid deployment of new load balancers to handle bursty traffic is called elastic scale. The same applies as demand falls, and application delivery servers are taken offline. This is only possible with software load balancers. Hardware load balancers must be commissioned to handle the peak load, and sit idle at other times.

What are the Benefits of Elastic Scale?

• Better fault tolerance – for example, elastic scale in AWS environments can detect when a server is unhealthy, terminate it, and launch an instance to replace it.

• Better availability – elastic scaling helps ensure that an instance has the capacity to handle the current traffic demand.

• Better cost management – elastic scaling can adjust the capacity as needed. Money is saved by only paying for instances that are used when they are needed.

Does VMware NSX Advanced Load Balancer offer Elastic Scale?

Yes. Elastic scaling is a core characteristic of the VMware NSX Advanced Load Balancer that allows it to automatically create (scale out) or delete (scale in) SEs to adjust capacity based on end-user traffic and virtual service health scores. Since the VMware NSX Advanced Load Balancer is software-defined, it is able to offer highly elastic load balancing and application services.

For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.


Elastic Load Balancer


Elastic Load Balancer Definition

An Elastic Load Balancer (ELB) has the ability to scale load balancers and applications based on real-time traffic automatically. Due to this elasticity, it is usually implemented as a software load balancer. It uses system health checks to learn the status of application pool members (application servers), routes traffic appropriately to available servers, manages fail-over to high-availability targets, and automatically spins up additional capacity.

[Diagram: elastic load balancing from application servers through the Avi elastic load balancer and the cloud to end users.]
FAQs

What is Elastic Load Balancing?

Elastic Load Balancing scales traffic to an application as demand changes over time. It also scales load balancing instances automatically and on-demand. As elastic load balancing uses request routing algorithms to distribute incoming application traffic across multiple instances or scale them as necessary, it increases the fault tolerance of your applications.

How Does an Elastic Load Balancer Work?

Elastic Load Balancing scales your load balancer as traffic to your servers changes. It routes incoming application traffic across instances automatically. The elastic load balancer acts as the point of contact for incoming traffic, and by monitoring the health of instances, the elastic load balancing service can send traffic requests to healthy instances.
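
To illustrate the health-monitoring side, here is a minimal Python sketch that probes a hypothetical /healthz endpoint on each backend and keeps only those that answer HTTP 200. The hosts and path are placeholders; real load balancers run such probes continuously on configurable intervals.

    import urllib.request

    def healthy_backends(backends, timeout=2.0):
        alive = []
        for host in backends:
            try:
                with urllib.request.urlopen(f"http://{host}/healthz",
                                            timeout=timeout) as resp:
                    if resp.status == 200:
                        alive.append(host)
            except OSError:
                pass  # refused or timed out: treat as unhealthy
        return alive

    pool = ["10.0.0.11:8080", "10.0.0.12:8080"]
    print(healthy_backends(pool))  # only backends that answered 200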

Does VMware NSX Advanced Load Balancer Offer an Elastic Load Balancer?

Yes. The VMware NSX Advanced Load Balancer features a 100% software elastic load balancer that can scale automatically via the platform's built-in application performance monitoring capabilities. With on-demand scaling of load balancers and the ability to trigger the scaling of backend application servers through ecosystem integrations with orchestration platforms, the VMware NSX Advanced Load Balancer delivers a better end-user experience. By scaling up or down automatically in response to traffic patterns, the platform eliminates the common practice of over-provisioning application load balancing capacity, a challenge with hardware, appliance-based load balancers.

For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos or watch Scaling Out Load Balancing in vCenter How To Video here:
