Cisco ACE: Basic HTTP Load Balancing

The ACE (Application Control Engine) is Cisco’s replacement for the CSS and CSM load balancers in its data center product line.  It is available both as a module (or “blade”) for the Catalyst 6500 switch and as a standalone appliance.  This post covers the basics of configuring an ACE to load-balance a farm of HTTP servers.  Subsequent posts will cover advanced features such as session persistence and more sophisticated health checks.

Assumptions

  1. The ACE has been configured (possibly using the setup wizard) with interface and trunking options.

  2. You are deploying the ACE in “routed mode”, i.e. the ACE is the default gateway for the backend servers, and the VIPs live on a different network on the “outside” interface. (See the interface sketch after this list.)

  3. You have three web servers, WWW1, WWW2, and WWW3, all listening on port 80.
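
For reference, the server-facing interface of a routed-mode deployment might look like the sketch below. The VLAN number and addressing here are assumptions for this example; the important point is that the servers use the ACE’s address (2.2.2.1 here) as their default gateway.

interface vlan 2
  description Server Network
  ip address 2.2.2.1 255.255.255.0
  no shutdown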

Configuration

Unlike a router, the ACE is a “deny by default” device.  You must explicitly permit any traffic entering the ACE from the network.  Thus, we need an access list (ACL) to allow traffic to our HTTP virtual IP (VIP).

access-list VLAN1 extended permit tcp any host 1.1.1.100 eq www
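
Note that this ACL permits only HTTP to the VIP. Later in this post we configure the VIP to answer pings; because the ACE drops anything not explicitly permitted, you would also need an entry along these lines for those pings to get through (a sketch; adjust to your addressing):

access-list VLAN1 extended permit icmp any host 1.1.1.100 echo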

Next, we need to define our backend servers.  The “inservice” keyword is the ACE equivalent of the “no shutdown” command for an interface.  If you forget it, the server will never receive traffic.

rserver host WWW1
  ip address 2.2.2.101
  inservice

rserver host WWW2
  ip address 2.2.2.102
  inservice

rserver host WWW3
  ip address 2.2.2.103
  inservice
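
At this point you can sanity-check the real server definitions (exact output varies by ACE software version):

show rserver

All three of WWW1, WWW2, and WWW3 should be listed as in service.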

Now we need to define a health check, so that the ACE can determine whether each backend server is functional and should receive traffic. We’ll use a very basic HTTP service check at this point. We configure the probe to check each server every 10 seconds and accept the default behavior of marking a server as “failed” after 3 consecutive failed checks. Also by default, the ACE sends an HTTP GET request for the root (“/”) URL; that’s fine for this example. Finally, we tell the ACE to re-probe a failed server every 60 seconds; once the failed server passes the required number of checks (3 by default), it is marked as “up” again.

An important note: the HTTP probe must have an expected status code or range of codes defined (the command takes a minimum and a maximum value; use the same value twice for a single code). If you omit this statement, your backend servers will never come up!

probe http HTTP_PROBE
  interval 10
  passdetect interval 60
  expect status 200 200
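
Once the probe is attached to a server farm (the next step), you can watch it work; the detail output includes per-server pass and fail counts:

show probe HTTP_PROBE detail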

Now that we have our backend servers defined, as well as a probe to check their status, we can join them together into a server farm. Again, don’t forget to “inservice” each rserver, or it won’t come up.

serverfarm host HTTP_FARM
  probe HTTP_PROBE
  rserver WWW1
    inservice
  rserver WWW2
    inservice
  rserver WWW3
    inservice
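
By default, the ACE distributes connections across the farm in round-robin fashion. If that doesn’t suit your application, the predictor can be changed; a minimal sketch that sends each new connection to the server with the fewest active connections:

serverfarm host HTTP_FARM
  predictor leastconns

You can also verify the farm at this point with “show serverfarm HTTP_FARM detail”, which lists each rserver along with its probe-derived state.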

We need to tell the ACE about the VIP on which it should listen. This is done with a class-map.

class-map match-all HTTP_VIP
  2 match virtual-address 1.1.1.100 tcp eq www

Next, we need to define our load-balancing policy, to tell the ACE what to do with traffic once it hits the VIP. In this case, we just direct it to the server farm defined above.

policy-map type loadbalance http first-match HTTP_POLICY
  class class-default
    serverfarm HTTP_FARM

The last piece we need is something to tie the policy to the VIP. We do this with a policy-map of type “multi-match”. For convenience, we also configure the VIP to respond to ICMP echo requests (pings) as long as at least one backend server is up.

policy-map multi-match VIPs
  class HTTP_VIP
    loadbalance vip inservice
    loadbalance policy HTTP_POLICY
    loadbalance vip icmp-reply active

Finally, we need to apply our policy to the “outside” interface of the ACE, bringing up our VIP. We also need to apply the ACL we created above to allow the HTTP requests inbound.

interface vlan 1
  description Public Network
  ip address 1.1.1.1 255.255.255.0
  access-group input VLAN1
  service-policy input VIPs
  no shutdown
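
At this point you can verify the whole chain. On the ACE itself, the service-policy counters show whether the VIP is up and how many connections it has handled (output format varies by software version):

show service-policy VIPs detail

And from a client on the outside network, a plain HTTP request to the VIP should be answered by one of the three web servers (assuming the client has curl available):

curl http://1.1.1.100/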

That’s it! You can grab the full configuration here.