
Avi Networks cautions on the need to look under the hood of load balancers

As enterprises increase their load balancer footprint to keep up with escalating numbers of applications, the need for a controller becomes critical, according to Avi Networks.

The company’s senior manager Chris Heggem says that a controller is the way to automate the management and lifecycle of load balancers across data centres and clouds.

Yet most well-known load balancing vendors do not have a controller; they have an instance manager.

The architecture behind hardware and software load balancing appliances was developed in the late 90s and early 2000s — before cloud and before containers. Each appliance has a combined control plane and data plane.

The control plane is where IT can regulate each individual appliance’s traffic management, security and policy functions. The data plane simply carries the traffic.

Most software load balancers share this same architecture, with a control plane and data plane unique to each appliance.

Say, for example, an organisation has 50 load balancer appliances (hardware or virtual; it doesn't matter) across data centres and clouds. That means 50 control planes must be managed individually. Since this is a headache, vendors bolted on an instance manager to help manage their load balancers.

Instance managers essentially let administrators SSH or otherwise connect into each appliance and push traffic management, security and policy commands. The control plane still resides on each individual appliance, and the administrator is still the brains of the operation.
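The burden of this model can be sketched in a few lines. The hostnames, commands and `push_config` function below are hypothetical stand-ins, not any vendor's actual CLI; the point is that every appliance's control plane must be driven individually.

```python
# Hypothetical sketch of the instance-manager model: one control plane
# per appliance, so identical configuration must be pushed to each of
# the article's 50 appliances separately. Names are illustrative only.

APPLIANCES = [f"lb-{n:02d}.example.net" for n in range(1, 51)]  # 50 appliances

def push_config(host: str, commands: list) -> int:
    """Stand-in for an SSH session into one appliance's control plane."""
    # A real instance manager would open a connection and run each CLI
    # command; here we simply count operations to show the burden.
    return len(commands)

commands = [
    "set virtual-server web-vip 203.0.113.10:443",
    "set pool web-pool members 10.0.0.11 10.0.0.12",
    "set policy tls-min-version 1.2",
]

total_ops = sum(push_config(host, commands) for host in APPLIANCES)
print(total_ops)  # 50 appliances x 3 commands = 150 individual operations
```

Every new appliance adds another control plane to this loop; the dashboard centralises the view, not the work.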

Not much has really changed.

For instance, the same mental gymnastics are necessary when IT has to ask questions like, ‘Do we have a load balancer deployed in this environment? Does this load balancer have capacity? Is it configured to work with the instance manager?’

If there is no load balancer deployed (or a new one is needed), the instance manager cannot help. IT has to deploy the load balancer to the environment manually, identify the host to install it on, and configure it through the instance manager.

For anyone familiar with the ‘pets vs. cattle’ analogy, appliance load balancers are pets.

They need to be hand-fed and cared for regardless of whether there is traffic to justify their existence. The instance manager simply helps manage the care and hand-feeding of each appliance from a centralised dashboard – and it is still a lot of work.

Certain load balancing technology originated in the cloud and container age, though its advances can be applied just as easily to traditional applications on virtual and bare-metal environments. The newer technology is different because it separates the control plane entirely from the load balancer.

A load balancer, or Service Engine, resides in the data plane and takes all commands from a centralised brain for application services across all environments. That brain is the controller, which works just like an SDN controller or the Kubernetes Controller.
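The pattern being described here is the same reconcile loop used by SDN and Kubernetes controllers: compare the desired state (the operator's intent) against the observed state, and act on the difference. The sketch below illustrates that pattern in general terms; the field names and data structures are assumptions for illustration, not Avi's actual API.

```python
# Illustrative reconcile loop in the style of a Kubernetes-like
# controller: desired state vs. observed state, with actions computed
# from the difference. All names here are hypothetical.

desired = {"web-vip": {"service_engines": 2}}   # intent: what should exist
observed = {"web-vip": {"service_engines": 0}}  # what currently exists

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state toward intent."""
    actions = []
    for vip, spec in desired.items():
        have = observed.get(vip, {}).get("service_engines", 0)
        want = spec["service_engines"]
        if have < want:
            actions.append(f"deploy {want - have} service engine(s) for {vip}")
        elif have > want:
            actions.append(f"retire {have - want} service engine(s) for {vip}")
    return actions

print(reconcile(desired, observed))  # ['deploy 2 service engine(s) for web-vip']
```

Because the loop reasons only about intent, the same code path covers first deployment, scaling, and recovery: all three are just gaps between desired and observed state.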

Unlike instance managers, which can manage only pre-existing resources, a controller built on the newer technology can spin up new service engines and leverage machine learning to react predictively to changes in application and network context.

For an environment without a load balancer, the controller will deploy one. It’s not even necessary to tell it where and on which host.

Does the load balancer have capacity? It doesn't matter – again, the controller will scale capacity up and down to meet the organisation's needs.

In the pets vs. cattle analogy, the new-tech service engines are cattle. IT won't give much thought to managing them, as that's the controller's job.

The controller manages an active load balancing fabric.

There are no active-standby pairs or over-provisioned load balancers waiting for use. And if there is a failure, the controller self-heals to maintain high availability by ensuring that applications receive the services they need based on intent, not available instances.
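Intent-based self-healing can be simulated in a few lines. This is a toy model under assumed names (the `Fabric` class and engine records are hypothetical): the intent states how many healthy service engines an application requires, and after a failure the controller restores that count rather than failing over to a pre-provisioned standby.

```python
# Hypothetical simulation of intent-based healing: the fabric is kept
# at the intended number of healthy service engines, so a failed engine
# is replaced automatically instead of relying on an active-standby pair.

from dataclasses import dataclass, field

@dataclass
class Fabric:
    intent: int                        # healthy engines the app requires
    engines: list = field(default_factory=list)
    _next_id: int = 0

    def heal(self):
        # Drop failed engines, then add new ones until intent is met.
        self.engines = [e for e in self.engines if e["healthy"]]
        while len(self.engines) < self.intent:
            self._next_id += 1
            self.engines.append({"id": f"se-{self._next_id}", "healthy": True})

fabric = Fabric(intent=3)
fabric.heal()                          # initial placement: three engines
fabric.engines[1]["healthy"] = False   # catastrophic failure of one engine
fabric.heal()                          # controller restores the intent
print(len(fabric.engines))  # 3
```

Note that the healed fabric contains a newly created engine (`se-4` in this run), not a standby that sat idle waiting for the failure: capacity is created on demand from intent.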

The load balancing industry has been dormant for decades, but advances in application architecture and multi-cloud environments are forcing enterprises to re-evaluate their load balancing providers.

The fact that many vendors are claiming to have controllers is a recognition of customers' needs for this architectural model that supports modern applications. But many of these products don't truly measure up.

If a vendor claims to have a controller, ask the following simple questions:

  1. Can the controller automatically place a virtual service on any load balancer and plumb the connections to the pool servers in any data centre or cloud?
  2. Can it automatically heal/recover load balancers from a catastrophic failure without the need for an active-standby pair of appliances?
  3. Can it automatically scale up or scale down load balancing capacity based on real-time traffic patterns?


Without this functionality, a load balancer will not be able to keep pace with a business.
