
Wednesday, October 24, 2012

About ADCs (Application Delivery Controllers)


Good question: We own a load balancer/ADC. Why not just use it to accelerate our site?


Once you've been in an industry for a long time, you start hearing the same patterns in client meetings. They seem to have a checklist of objections that they must all get from the same website. (I can't find the site, so maybe only CTOs and VPs get access.)

Last week, I was in a client meeting and got lobbed a nice softball sales objection that I hear all the time:

"We own load balancers/ADCs (from F5, Citrix, Radware, A10, Cisco, Brocade, Riverbed, etc.) Why shouldn't we just use them to accelerate our site?"

I relish this question because it lets me pontificate on one of my favorite subjects: all of the differences in culture, technology, and focus between the network players and those of us who toil at the highest layer of the stack.

After answering the question for the client, I realized that I've never written a post on ADCs (aka application delivery controllers), what they offer when it comes to performance, and whether or not they're worth it when it comes to delivering a faster end-user experience. So here goes…

Caveat: I can already see the hate mail piling up. To be clear, I'm NOT saying you don't need an ADC. I think many sophisticated customers need an ADC. We own a number of them, and they are mission-critical to our business.

The point of this post is to:

  • demystify web acceleration and front-end optimization from an ADC perspective
  • look at ADC features from the perspective of real end-user performance
  • see if an ADC provides any end-user performance value beyond what a properly configured web server provides under normal circumstances.

Defining the web acceleration space

Solutions in this space go by many names: load balancer, application delivery controller, traffic manager, and so on. These days, most people are using the term "application delivery controller" (ADC). An ADC is an appliance or software that sits in front of your web servers in your datacenter. It sees all of the traffic to and from your web servers. Originally, solutions in this space were simple products that were designed to distribute workload across multiple computers. In recent years, the market has evolved to include more sophisticated features.

Here are just a few of the things ADCs do well:

  • load balancing
  • improving the scale of server infrastructure
  • server availability and fault tolerance
  • security features
  • layer 7 routing
  • a whole host of other infrastructure services that are vital to today's modern network designs

For more, my friends Joe and Mark at Gartner do a great job of describing the main players and highlighting capabilities in the Magic Quadrant for application delivery controllers. While Strangeloop is included in this Magic Quadrant, we are not an ADC. Unlike ADC solutions, we focus on front-end optimization. But we frequently get lumped in with ADCs because, like them, we have an appliance that sits in front of servers and does good stuff for web traffic. The true ADC vendors that matter are F5, Citrix, Radware, A10, Cisco, Brocade, and Riverbed (Zeus).

Let's clarify our terms: What "performance" means for ADC vendors vs. what it means for the FEO community

Now let's focus on the performance aspect.

ADCs do not focus on front-end problems, yet the words they use are very similar — and in some cases identical — to the words used by the front-end performance community. We need clarification.

The term "performance" is often used very differently by ADC vendors, so let me be clear: what I am talking about is how fast a page loads on a real browser for a real person in a real-world situation. Although this seems obvious, I constantly hear the terms "performance" or "acceleration" used to represent scale or offload (when functionality is "offloaded" from the server and moved to a device instead), or in some cases technical minutiae that has no discernible impact on the performance of web pages for real users.

From a performance perspective, a typical ADC helps mostly with back-end performance. Back-end performance optimization means optimizing how data is processed on your server before it's sent to the user's browser. ADCs provide most of their benefit by offloading jobs from the web server, which in turn allows the web server to focus all of its energy and horsepower on serving pages.

While ADCs contribute to the smooth functioning of a site's back end, those of us in the performance community have established and re-established that the major problem with user-perceived web performance is not in the back end. According to a recent post by Steve Souders, between 76% and 92% of end-user response time is at the front end. While server overload can happen during crazy traffic spikes (like those experienced by florist sites on Valentine's Day morning), the fact is that most web servers are not overloaded most of the time, thus most of these offload solutions don't really help user-perceived web performance on a day-to-day basis.

When we do see back-end problems, they're rarely load-related; more often they're costly database lookups or other application-logic issues that cause back-end time to be greater than average. And let's not forget that back-end problems almost always show up in the time to first byte of the HTML. Static objects are rarely affected by the issues that typically accompany back-end problems.

If you apply core performance best practices via an ADC, do you get a faster end-user experience than you would via your server?

I took four typical performance optimizations and applied them using both an ADC and a traditional web server under normal load. The goal was to get a side-by-side, before-and-after look at whether or not the ADC delivered a faster end-user experience than the server.

1. Compression

Compression involves encoding information using fewer bits than the original. It reduces payload, which in turn reduces download time. In the web world, compression is often done using gzip or deflate.

I have been talking about compression for years. It really helps front-end performance. Compression is available for free on all modern web servers, so the question here is: do I gain any speed by turning on compression in my ADC versus the compression I would have via my web server? Compression helps reduce the download time (the blue bar in the waterfall below).
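To see what's at stake, here's a minimal sketch (Python, standard library only) that fetches the same page with and without gzip and compares the bytes that come over the wire. The URL is a placeholder; if the server doesn't honour the Accept-Encoding header, the two numbers will simply match.

    import urllib.request

    URL = "https://www.example.com/"  # placeholder: substitute the page you want to test

    def transfer_bytes(accept_gzip: bool) -> int:
        """Fetch the page and return how many bytes came over the wire."""
        req = urllib.request.Request(URL)
        if accept_gzip:
            # Ask the server for a gzip-encoded response; without this header
            # we receive the uncompressed payload.
            req.add_header("Accept-Encoding", "gzip")
        with urllib.request.urlopen(req) as resp:
            return len(resp.read())

    print("bytes without gzip:", transfer_bytes(False))
    print("bytes with gzip:   ", transfer_bytes(True))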

Observation: We don't see any material difference when using compression in an ADC versus compression on a web server.

Conclusion: Overall benefit is minimal to none when compared to a normal web server.

2. Multiplexing to the back end

ADCs (as well as some standalone acceleration solutions) use a technique called "TCP multiplexing" (also called HTTP multiplexing, TCP pooling, and connection pooling), which frees the server from having to maintain many, many concurrent connections from many, many clients. Connections take up resources on the server, and servers can't hold open a very large number of connections at once. As concurrent connections pile up, the server gradually degrades: past some threshold, each additional connection takes a little longer to open. Ultimately, the server reaches a point where it can't open new connections at all, so new connections simply hang or get rejected, depending on how the stack handles it.

ADCs handle the concurrent connection problem up front, and allow the server to only have to deal with a smaller number of very long-lasting connections, which the ADC maintains and manages with each of the servers. The main benefits of the ADC in this case are consistency and predictability.
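The idea is easy to illustrate in miniature. The sketch below (Python, standard library, placeholder host) contrasts opening a fresh connection for every request with reusing one persistent connection, which is roughly what an ADC does on the server side of the proxy. It is not how a real ADC is implemented, and it assumes the origin allows keep-alive.

    import http.client
    import time

    HOST = "www.example.com"  # placeholder origin server

    def one_connection_per_request(n: int) -> float:
        """Open and tear down a brand-new TCP connection for every request."""
        start = time.perf_counter()
        for _ in range(n):
            conn = http.client.HTTPSConnection(HOST)
            conn.request("GET", "/")
            conn.getresponse().read()
            conn.close()
        return time.perf_counter() - start

    def pooled_connection(n: int) -> float:
        """Reuse a single long-lived connection for every request."""
        conn = http.client.HTTPSConnection(HOST)
        start = time.perf_counter()
        for _ in range(n):
            conn.request("GET", "/")
            conn.getresponse().read()  # drain each response before reusing the connection
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    print("new connection per request:", one_connection_per_request(10))
    print("single reused connection:  ", pooled_connection(10))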

The experiment below was designed to highlight how much TCP multiplexing helps from an immediate end-user performance perspective.

We turned multiplexing off and on for a production site under normal load, and observed the performance benefits. (This site represents a typical mid-tier e-commerce site: 75 html page requests per second, which equates to roughly 200 million page views per month.)

Without multiplexing:

With multiplexing turned off, we see an average time to first byte on the HTML of 169 ms.

With multiplexing:

We then turned on multiplexing, waited for an hour, and sampled the site again. With multiplexing turned on, we see an average time to first byte on the HTML of 168 ms.

Observations: Concurrent connection management, setup, and teardown in most modern web servers is efficient. In some edge situations where the servers are under serious load, this feature will show a bigger benefit, but for the most part our findings were that multiplexing to the back end had limited overall performance value. (Note: I'm not going to get into the offload value in this post, but this is not to say that multiplexing is worthless. Remember: I'm only speaking to the end-user performance benefit.)

Conclusion: Immediate acceleration benefit for most sites is minimal to none. However, it's a good idea to turn on multiplexing anyway, since it will help with edge cases (such as being hit by a flood of traffic). It also helps shield the server from spikes and unusual loads, and provides a more predictable performance environment for the servers.

3. Object and page caching

Object and page caching are aimed at offloading the server and reducing time to first byte. These features are offered by most vendors, but they're often hard to configure, and so they're rarely used. Instead, many customers who use a CDN will "offload" caching responsibility to their CDN or to a standalone cache in their network (e.g. Varnish).

When you look at this technique from a waterfall perspective, you see that the performance benefit comes from reducing time to first byte for three reasons:

  • The ADC may return objects faster than your web servers.
  • Offloading the work of serving those objects would allow the web server to focus on serving dynamic content and further improve time to first byte (TTFB) on dynamic objects.
  • Back-end issues can cause the HTML to take a long time for the server to generate, in which case caching the HTML for the page will help, especially with TTFB. But keep in mind that the HTML must be static enough to be cacheable, even if only for a few minutes at a time.

For example, the time to first byte on the HTML (highlighted in the red box) may decrease with page caching.
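As a toy illustration of the mechanism (not of any vendor's implementation), here's a minimal TTL-based page cache in Python; the half-second of "back-end work" and the 60-second TTL are made-up numbers.

    import time

    CACHE_TTL = 60.0   # seconds the cached HTML stays valid (assumed)
    _cache = {}        # url -> (expires_at, html)

    def generate_html(url: str) -> str:
        """Stand-in for slow back-end work: database lookups, templating, etc."""
        time.sleep(0.5)
        return f"<html><body>Page for {url}</body></html>"

    def get_page(url: str) -> str:
        expires_at, html = _cache.get(url, (0.0, None))
        if time.time() < expires_at:
            return html                    # cache hit: fast time to first byte
        html = generate_html(url)          # cache miss: pay the full back-end cost
        _cache[url] = (time.time() + CACHE_TTL, html)
        return html

    for _ in range(3):
        start = time.perf_counter()
        get_page("/products")
        print(f"served in {time.perf_counter() - start:.3f} s")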

Observations: In most cases, I see very little difference between the time to first byte on objects or pages served from an ADC cache versus a web server cache. Page caching can mask back-end issues that cause slow HTML generation, but the page can be cached either at the ADC or the server itself. In many cases, the offload benefit can help busy web farms, but an ADC cache versus your web server cache is not going to buy you much benefit. Also, as mentioned above, for most public-facing sites, site owners rely on the CDN to provide caching.

Conclusion: Performance benefit on most pages <50 ms.

4. TCP/IP optimization

Every vendor will claim a "state of the art" TCP/IP stack and "hundreds" of improvements to make sites faster. Most of these optimizations fall into two categories:

  • Expanding buffer sizes and detecting latency to better manage congestion
  • Ways to limit packet loss and to recover quickly from dropped packets

Obviously, a good TCP/IP stack is important. The question here is: does it materially affect performance when measured against a modern web server with a standard configuration?

As the waterfall below shows, the TCP/IP stack improvement would affect the time to first byte as well as the download time.

Specific to HTTP, implementing keep-alive connections to maintain longer TCP/IP connections with clients is also something ADCs do. We all know the benefit of keep-alive connections, and we know that they help front-end issues, since the cost of setting up and tearing down new TCP connections is minimized for the browser. But, like most other issues here, modern web servers are pretty good at using keep-alive connections with clients. So the ADC isn't improving page performance or helping with front-end issues any more than the server already does with a few configuration tweaks that are probably in place anyway.
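The saving from keep-alive is easy to put a rough number on: every new TCP connection costs at least one extra round trip for the handshake (more if TLS is involved). A back-of-the-envelope sketch, with an assumed round-trip time and object count:

    RTT_MS = 80          # assumed last-mile round-trip time
    REQUESTS = 20        # assumed objects fetched from the same host
    # With keep-alive, only the first request pays for a TCP handshake.
    handshakes_saved = REQUESTS - 1

    print(f"extra handshake time without keep-alive: ~{handshakes_saved * RTT_MS} ms")
    # TLS setup adds further round trips per new connection, so the real
    # penalty for disabling keep-alive is usually larger than this.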

Observation: Each vendor seems to have a different approach to what comes out of the box. Overall, the TCP/IP stacks of the vendors are much better than out-of-the-box web server stacks, and they do make a difference. The difference is in line with my observations about the TCP stack improvements presented by dynamic site acceleration (DSA).

Conclusion: Performance gain under normal circumstances <150 ms

Summary

Performance best practice | Performance gain when implemented using an ADC vs. a web server under normal load
Compression | Minimal to none
TCP multiplexing | Minimal to none (but still good to use in case of traffic spikes)
Object and page caching | <50 ms
TCP/IP optimization | <150 ms

So… what to do with this information?

You need to understand what an ADC will do for you and how it will help user-perceived performance. Buying it for security, scalability, and offload is a very different decision than if you want it purely for acceleration or think it will make your site load 40% faster.

It is very common to lump the benefits of offload and the benefits of site acceleration into one category. You need to separate these categories.

If you're considering an ADC purchase solely to make your pages faster for your users, I recommend following these steps:

  1. Determine if your web servers are modern and have the capacity to handle your request volume. Also ensure they are using compression and object caching.
  2. Get waterfalls from different locations using WebPagetest. Check out the HTML bar (usually the first bar) and see whether reducing the server think time (green: assume 10%, and only if you can cache your HTML) and the download time (blue: assume 5% in North America and Europe, 25% in other parts of the world) actually brings you a benefit. (A quick way to run these numbers is sketched after this list.)
  3. Research the different vendors, or email me. (I know them all and would be happy to help.)
  4. Pick a few ADCs to try.
  5. Test each vendor yourself using an open-source tool like WebPagetest. Be wary of the tests the vendors send you as these often do not reflect how the site performs for a real end user.
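Step 2 can be reduced to a couple of lines of arithmetic. The sketch below simply applies the assumed percentages from that step to the TTFB and download portions of your HTML bar; the 400 ms / 200 ms figures in the example are made up.

    def estimated_adc_saving_ms(ttfb_ms: float, download_ms: float,
                                html_cacheable: bool,
                                north_america_or_europe: bool = True) -> float:
        """Apply the rule-of-thumb percentages from step 2 above."""
        think_saving = 0.10 * ttfb_ms if html_cacheable else 0.0
        download_saving = (0.05 if north_america_or_europe else 0.25) * download_ms
        return think_saving + download_saving

    # Example: 400 ms of server think time, 200 ms of HTML download,
    # cacheable HTML, visitors mostly in North America.
    print(f"~{estimated_adc_saving_ms(400, 200, True):.0f} ms potential saving")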

My opinion: I remain a skeptic that ADCs can help with end-user performance in any meaningful way.

I see ADCs more as a savvy way to commoditize offload devices than really helping with front-end performance.

As an acceleration play, I see that ADC vendors have fallen way behind. Their tools are very expensive and, when it comes to performance, they offer small incremental performance gains.

I do see a lot of potential for this area of the technology stack. I think it has some promise, but there's been very little innovation in recent years. There have been no truly exciting developments since Cisco acquired FineGround in 2005 and F5 acquired its Web Accelerator product in 2006. These products, like most others in the space, have not evolved in 5-6 years.

I'd like to be more convinced. If anyone has real-world performance data with compelling evidence that ADC performance gains are significant compared to a well-configured web server under normal circumstances, I'm all ears.


Why doesn't our industry talk more about performance and productivity?


Ever wondered how much time people in your company spend waiting for internal apps to load? One of our clients did some math on this, and they got some eye-opening results:

They calculated that, in just one small department of 20 people, those people spent a total of 130 hours per month waiting for the pages of a single internal web-based application to load. That's 130 hours (in other words, almost an entire month's worth of work hours for one person) that could have been spent making sales, responding to clients, fighting with the photocopier – basically doing anything more productive than staring at a screen.

In 2008, Aberdeen released a benchmark report called Application Performance Management: The Lifecycle Approach Brings IT and Business Together. This report stated that application performance issues — including internal app performance — could hurt overall corporate revenues by up to 9%.

At the same time, Aberdeen also found that the average organization was using six business-critical applications, with plans to roll out four more, bringing the total up to ten applications by 2010.

Let's imagine a not-so-crazy scenario

Now let's apply Aberdeen's findings to my customer findings at the top of this post. Imagine that this same department of 20 people isn't using just one application, but instead is using ten. Imagine that four out of ten of these apps are experiencing significant performance problems.

The results:

  • Instead of waiting 130 hours, the people in this department spend a total of 520 hours a month waiting for pages to load.
  • That's the equivalent of the total number of work hours for 3.5 employees.
  • In monetary terms: that wait time equals about $9,500 per month, or $114,000 a year (based on an average annual salary of $32,700, according to the U.S. Bureau of Labor Statistics, January 2010).
  • Instead of a 20-person department, extrapolate these numbers throughout a larger enterprise, such as Amazon. With more than 33,000 employees in total, let's assume that half are desktop workers. Following the rationale above, poor app performance could cost Amazon upward of $7.8 million a month, or $94 million a year.
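The extrapolation in that last bullet is easy to check with a few lines of arithmetic using the post's own figures:

    dept_people = 20
    dept_wait_cost_per_month = 9_500                            # dollars, from the example above
    cost_per_person = dept_wait_cost_per_month / dept_people   # $475 per person per month

    amazon_employees = 33_000
    desktop_workers = amazon_employees // 2                    # assume half are desktop workers
    monthly = desktop_workers * cost_per_person

    print(f"~${monthly / 1e6:.1f}M per month, ~${monthly * 12 / 1e6:.0f}M per year")
    # -> ~$7.8M per month, ~$94M per year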

Crazy? No. A bit facile? Yes. But only a bit. While employees are waiting for apps to load, they could be doing other things, such as making calls and multitasking with other slow apps. But bear in mind usability expert Jakob Nielsen's findings about response time and human behavior: 10 seconds is about the limit for keeping a user's attention focused on a non-responsive dialogue. After 10 seconds, even the most efficient worker has to struggle to re-focus on the task at hand. Repeated interruptions are a huge detriment to productivity.

We need more performance + productivity case studies.

We all know about the impact that faster page load had on revenue for Amazon and Shopzilla. But there are some inspiring lesser-known case studies that demonstrate the relationship between improving internal app performance and employee productivity.

In one case study, Hydro-Québec was experiencing some serious performance pains with a shared CAD app, with people in remote offices suffering 30- to 60-second delays between every mouse click. Not surprisingly: "Response time was ugly," according to Daniel Brisebois, Hydro-Québec's IT advisor. After application response time was improved by 10- to 25-fold, Hydro-Québec reported benefits such as fewer errors, a faster engineering cycle, and enhanced data integrity.

In another case study, this one for the Hilton Grand Vacations Club, a division of the company that sells timeshares to international customers, the company was experiencing major latency issues that caused delays of up to 30 minutes for a vital contract-processing app. Hilton's senior director of technology applications, Rich Jackson, said that "As you can imagine, with the customer sitting in front of you while you are waiting on a computer process, it's not an ideal situation. When you're processing a contract, time is of the essence. You don't want delays due to technology issues." After accelerating the app, Jackson said that "The contract process has been reduced from more than 30 minutes to just a minute or two. It has a huge effect on customer and employee satisfaction."

These are good stories, but we need more.

Why don't we WCO folks talk more about performance and productivity?

Off the top of my head:

  • In the early days of our industry, we distanced ourselves from the application acceleration folks in order to carve out our own niche.
  • We share information using tools like WebPagetest and the HTTP Archive, which are only relevant to the public web.
  • Talking about money is a lot sexier than talking about productivity.

In the months to come, I'm going to do my part to make productivity as sexy as money. If you have productivity stories to share, I'd love to hear about them.


Why the performance measurement island you trust is sinking


I want to start this post with a little story. (Self indulgent, I know. But I never do this, so humor me this one time.)

The enlightenment currently going on in our industry reminds me of an allegory told in a book called Flatland, written in 1884 by Edwin Abbott. Flatland is a two-dimensional world whose inhabitants are geometric figures. The protagonist is a square.

One day, the square is visited by a sphere from a three-dimensional world called Spaceland. When the sphere visits Flatland, however, all that is visible to Flatlanders is the part of the sphere that lies in their plane: a circle. The square is astonished that the circle is able to grow or shrink at will (by rising or sinking into the plane of Flatland) and even to disappear and reappear in a different place.

The sphere tries to explain the concept of the third dimension to the two-dimensional square, but the square, though skilled at two-dimensional geometry, doesn't get it. He can't understand what it means to have thickness in addition to height and width, nor can he understand that the circle has a view of the world from up above him, where "up" does not mean from the north.

In desperation, the sphere yanks the square up out of Flatland and into the third dimension so that the square can look down on his world and see it all at once. The square recalls the experience:

An unspeakable horror seized me. There was darkness; then a dizzy, sickening sensation of sight that was not like seeing; I saw space that was not space; I was myself, and not myself. When I could find voice, I shrieked aloud in agony, "Either this is madness or it is Hell." "It is neither," calmly replied the voice of the sphere. "It is Knowledge; it is Three Dimensions: open your eye once again and try to look steadily." I looked, and, behold, a new world.

The square is awestruck. He prostrates himself before the sphere and becomes the sphere's disciple. Upon his return to Flatland, he struggles to preach the Gospel of Three Dimensions to his fellow two-dimensional creatures.

What does this have to do with performance?

I've sugarcoated this message in the past. Now I want to come right out and say it:

There is a very good chance that the measurements you trust to tell you how fast your site is are wrong.

I'll go one step further (and risk losing a few friends in the CDN and ADC space) and say this:

If you're a site owner, many members of the performance industry are intentionally misleading you.

In other words, it's in many performance vendors' best interests to keep you living in Flatland.

Let me tell you a very common story:

I was talking with a customer who runs a very large ecommerce site. He's been told by all his trusted performance advisors (analytics company, performance measurement company, large CDN company and/or large load balancer company) that the gold standard for measurement is a graph based on synthetic backbone tests, which looks something like this:


If you've ever had a similar conversation with one of your performance vendors, I'm going to bet that he or she has told you some or all of the following things:

  • This graph represents the home page performance of your site.
  • The test is a representative average of many geographical locations.
  • The test methodology is rarely exposed but, when dissected, the vendor tells you it is the industry standard, tested using real modern browsers.
  • The vendor assures you that this is the "safe island" upon which all companies measure performance.

When pushed, the vendor may also show you a waterfall that looks something like this:


I routinely encounter customers that have been led, by the very experts they trust, into believing that their site performance can be measured by tools like this. It can't.

Three reasons why you can't always believe the experts you trust

1. INDUSTRY BENCHMARKS ARE WRONG.

I've written about this here, so I won't rehash it all again. Short version: Benchmarks are based on backbone tests, which only tell you how fast your site loads at major internet hubs. Because you're skipping the "last mile" between the server and the user's browser, you're not seeing how your site actually performs in the real world. As I noted in my earlier post, your backbone test may tell you that your site loads in 1.3 seconds, when in the real world it actually takes anywhere from 3 to 10 seconds. That's a huge discrepancy.

2. MAJOR PERFORMANCE VENDORS HAVE CONVINCED SITE OWNERS TO TIE BONUSES FOR KEY EMPLOYEES TO BACKBONE TEST RESULTS.

This practice is so widespread and accepted that I have spoken to employees at big companies who tell me that they don't even care what their real-world performance is anymore, because the bonus they get only depends on how their backbone test performance compares to the industry benchmarks.

3. THESE SAME PERFORMANCE VENDORS CONTINUE TO CLING TO THESE TEST RESULTS BECAUSE IT JUSTIFIES THEIR HIGH MARGINS.

This is the saddest part. I have spoken at length to large companies — trusted performance experts — who know that these tests are a worthless measure of real-world performance. But they continue to sell them as the gospel truth because, as I've been told, "This is the only safe island we have. Without a standard, no matter how untrustworthy, we could not command high margins for our products."

Diabolical scam? Or good intentions gone wrong? A bit of both?

I want to be clear about this: I'm not criticizing the companies like Gomez that make these tests. I am, however, extremely critical of performance vendors who use these tests as a means of communicating the value of their product.

Backbone tests were originally developed to do two things:

  • Monitor uptime/downtime – Telling you if your site is up and running.
  • Spot performance trends – If you see a spike in your graph, then something's probably wrong somewhere, especially if the spike lasts a while.

Monitoring downtime and spotting performance trends remain valid use cases for backbone tests. Measuring actual page speed, however? Not a valid use case. So why do some performance vendors pretend different?

In recent years, interest in front-end performance has boomed. At the same time, there was a void in measurement tools, and backbone tests filled it. They were able to do this because, as long as some basic assumptions went unquestioned, they felt close enough that people accepted them without question.

Why backbone tests are deeply flawed measures of real-world performance

Backbone tests are flawed performance indicators because they rely on several basic assumptions that simply are not true:

Latency
  • Why it matters: In the real world, we know that most people are, at best, 20-30 ms from the closest server.
  • What SHOULD be tested: Synthetic clients should be exposed to last-mile latency that mimics real-world users.
  • What IS tested: Synthetic clients sit at the elbow of the internet and experience almost no latency. (The waterfall above shows the first asset coming down in 0.002 s. Ha!)

Bandwidth
  • Why it matters: In the real world, most users have limited bandwidth (2-5 Mbps) and are on cable/DSL/ADSL lines.
  • What SHOULD be tested: Synthetic clients should be exposed to last-mile bandwidth limitations, just like real-world users.
  • What IS tested: Synthetic clients enjoy warm, cozy homes in plush tier 1 datacenters with nearly unlimited bandwidth to the world.

Stopping the clock
  • Why it matters: We need to know when to stop measuring, so we know how fast our pages are.
  • What SHOULD be tested: Actual browser events are the closest indicators to page speed. Pages are finished loading when the onload event fires, which is often way before the last resource is served. In fact, from a user's perspective, pages are often finished loading much earlier than the onload event.
  • What IS tested: Pages are finished loading when the last resource is served. (You know all those JavaScripts everyone keeps telling you to defer? Totally immaterial if you test like this!)

Where the resources are
  • Why it matters: Pages often have resources all over the place, near and far, particularly for long-tail or low-hit-rate content.
  • What SHOULD be tested: A simulation of what a real user would experience. Resources that are likely to be near the user should be near, and those likely to be far should be far.
  • What IS tested: For CDN customers, the resources are conveniently located and always available in the rack next to the test box. (Some CDNs have gamed this system and ensured that their PoPs are conveniently located next to the servers that perform these types of tests.)

Browsers
  • Why it matters: There are many browsers out there, and the modern ones are iterating pretty quickly.
  • What SHOULD be tested: The browsers available for synthetic testing should be close to the browsers that are used in the real world. And they should keep up.
  • What IS tested: We still see many IE6 agents. Many of the agents that report as IE8 actually work as IE7 agents.

JavaScript
  • Why it matters: JavaScript is a big part of the web. Its effect on the browser and page speed can sometimes be significant.
  • What SHOULD be tested: Synthetic clients should run client-side code exactly like a normal browser would and report back on the client-side processing impact and its effect on the overall speed of the page.
  • What IS tested: Many tests we see don't run JavaScript properly.
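To see why these assumptions matter, here's a deliberately crude back-of-the-envelope model in Python. All of the constants are assumptions chosen only to illustrate the gap between a backbone agent and a real last-mile connection; the model ignores parallel downloads and plenty of other real-world detail.

    PAGE_KB = 800       # assumed total page payload
    ROUND_TRIPS = 30    # assumed request/response round trips (connections, objects, redirects)
    SERVER_MS = 500     # assumed back-end think time, identical in both cases

    def load_time_s(rtt_ms: float, bandwidth_mbps: float) -> float:
        transfer = (PAGE_KB * 8) / (bandwidth_mbps * 1000)   # seconds spent moving bytes
        waiting = ROUND_TRIPS * rtt_ms / 1000                # seconds spent waiting on round trips
        return SERVER_MS / 1000 + transfer + waiting

    print(f"backbone agent (2 ms RTT, ~1 Gbps): {load_time_s(2, 1000):.2f} s")
    print(f"last-mile user (40 ms RTT, 3 Mbps): {load_time_s(40, 3):.2f} s")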

So how do you get a true measure of your site's speed?

Many different solutions exist to help you see the truth about exactly what is going on in the real world — from free tools like WebPagetest to real end user monitoring tools. (There are too many to discuss here, but you can find them by Googling "real user monitoring". When you're analyzing tools, be sure to check out how each one measures the last mile, the trip between the server and the browser).

While there is no absolutely perfect tool out there (and when I find one, I'll be sure to let you know about it), I can assure you that these tools are better than the backbone tests you've been fed so far.


WPO? WCO? FEO? Unmuddying the web performance waters


With Velocity coming up fast, now seems like a good time to write a post I've been meaning to write for a while, clarifying some of the language we use in our industry.

But first, a little context:

A couple of weeks ago, I was in London, talking with the Web Performance Meetup crew there about the history and taxonomy of the performance space. Stephen Thair has done an awesome job of summarizing that aspect of the session (which you can read about on his blog, so I won't go into it in detail here), but suffice it to say for my purposes today that performance solutions have, historically, fallen into two major groups:

  • Delivery solutions, such as content delivery networks, dynamic site accelerators, and load balancers, which focus on delivering the exact content the server produces from the server to the browser as quickly as possible.
  • Transformation solutions, which change the content itself, optimizing it so that it renders faster in the user's browser.

For a long time, from 1993 till about 2006, delivery solutions ruled the performance marketplace. However, when Steve Souders announced four years ago, in his book High Performance Web Sites, that up to 85% of performance problems happen at the front end — at the browser level — it opened the door for transformation solutions. And this is when the waters started to get a bit muddy.

Hopefully, these definitions will unmuddy them:

Web performance optimization (WPO)

First public appearance: May 7, 2010

This is a catch-all term that applies to all performance solutions, whether delivery or transformation based. Steve Souders coined this term just over a year ago, and because Steve works in the front-end space, those of us who also work in this space quickly picked it up and ran with it. But as my colleague Fred Beringer pointed out recently, "WPO" reads just as easily as "website performance optimization."

Fair enough. There have been efforts to introduce more specific terminology into the transformation industry, such as…

Front-end optimization (FEO)

First public appearance: January 10, 2011

Dan Rayburn wrote a really excellent post in which he defines front-end optimization:

FEO might sound similar to another subject I have written about lately, dynamic site acceleration (DSA), but it's very different. DSA's focus is to bring network resources closer to the user by prefetching or caching files. FEO makes the content itself faster. DSA makes page resources download faster. FEO reduces the number of page resources required to download a given page and makes the browser process the page faster. For example, analysis shows that popular sites like CNN, who already use a CDN, can double current performance by implementing FEO.

In other words, FEO is a transformation solution, as I described near the top of this post. However, different people have different interpretations of what constitutes the "front end", so even this term might not be clear enough.

Web content optimization (WCO)

First public appearance: January 18, 2011

This is the term we here at Strangeloop have settled on (for now, anyway) as the clearest, most specific definition of what we do. We coined it with Akamai when we announced our partnership at the beginning of the year, and it still feels like a pretty good fit:

Web Content Optimization technology takes HTML that has been optimized for readability, supportability and maintainability and, while retaining these benefits, transforms it to HTML that is optimized for fast page rendering. This involves implementing numerous best practices such as rewriting object names, re-ordering when and how objects are rendered, re-ordering when scripts are executed, and optimizing content based on the requesting browser.

It may not be a perfect term — we don't accelerate content such as video or Flash, for example — but it's the best we've found so far. (Alternately, we've also bandied about the term "web code optimization" to see how it feels, but it strikes us as perhaps too limiting.)
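For a concrete (if toy) sense of the transformations described above, here's a small Python sketch that defers external scripts and versions object names so they can be cached aggressively. Real WCO/FEO products are far more sophisticated and browser-aware; the version string and regexes here are illustrative only.

    import re

    def defer_scripts(html: str) -> str:
        """Add a defer attribute to external <script> tags that lack one."""
        return re.sub(r'<script (?![^>]*\bdefer\b)([^>]*src=[^>]*)>',
                      r'<script defer \1>', html)

    def version_assets(html: str, version: str = "v42") -> str:
        """Rewrite object names with a hypothetical cache-busting version."""
        return re.sub(r'\.(css|js|png|jpg)"', rf'.{version}.\1"', html)

    page = '<script src="app.js"></script><img src="logo.png">'
    print(version_assets(defer_scripts(page)))
    # -> <script defer src="app.v42.js"></script><img src="logo.v42.png">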

Just looking back at the time frame over which these terms have emerged makes it clear that our industry is moving fast, and our terminology is still in flux. (Though some of us have probably used our pet terms so much in our writing that we're pretty attached to them for SEO purposes. :) ) But at the end of the day, we'll serve ourselves and our customers better if we can give the clearest definition possible for what we do.


How much performance value do ADCs/load balancers actually provide in the real world?


In the past, I've singled out CDNs as targets for discussing what performance value they provide. But when I talk to companies about web performance, they don't just talk about what their CDN is doing for them: they talk up their ADC, too.

To clarify: ADC (aka application delivery controller) is a blanket term coined by Gartner to include the entire family of load balancing products and services. Companies use ADCs for several reasons: scale, security, availability. In recent years, with the emergence of site speed as a priority, ADC providers have begun to tout the performance gains they provide.

There's no doubt that ADC providers offer value in scale, security and availability. But it recently struck me that no one has taken a close look at how well ADC players actually deliver web performance. Today is a good day to start.

The Gartner Magic Quadrant for ADCs is coming out shortly. In honour of that prestigious list, let's take the ten companies from last year's list — along with their own featured ADC case studies, taken straight from their sites — and see how they stack up.

Methodology

The home page for each site was tested on DSL for IE 7, primarily using WebPagetest's server location in Dulles, VA, except where noted otherwise. (If you're wondering "Why IE 7?" please read this.) I ran three tests per site. The median results are presented here.
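For anyone repeating this at home, taking the median per metric is a one-liner; the three runs below are made-up numbers standing in for three WebPagetest results for a single site.

    import statistics

    # Three hypothetical runs: (first-view load, first-view start render,
    # repeat-view load, repeat-view start render), all in seconds.
    runs = [
        (3.61, 0.91, 1.95, 0.52),
        (3.46, 0.85, 1.90, 0.47),
        (3.99, 0.88, 2.12, 0.49),
    ]

    metrics = ["first-view load", "first-view start render",
               "repeat-view load", "repeat-view start render"]
    for name, values in zip(metrics, zip(*runs)):
        print(f"{name}: median {statistics.median(values):.3f} s over {len(values)} runs")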

For the purposes of this project, I focused on these performance criteria:

  • Start render and load times for first and repeat views
  • Keep-alive score
  • Compress text score

I wasn't able to find appropriate case studies for Cisco and Array, so I was confined to testing their own sites.

And it goes without saying that this is by no means a formal study. Consider it more of a revealing bird's eye view.


Vendor | Customer | First view: load time (s) | First view: start render (s) | Repeat view: load time (s) | Repeat view: start render (s) | Page Speed: keep-alive | Page Speed: compress text
A10 | (own site) | 3.464 | 0.852 | 1.901 | 0.471 | A | F
    | GenieKnows | 7.682 | 2.046 | 4.337 | 1.458 | F | F
    | Shopzilla | 1.075 | 0.664 | 0.626 | 0.415 | A | A
    | University of the Arts London* | 2.747 | 2.296 | 1.433 | 1.166 | A | F
Array | (own site) | 17.991 | 7.967 | 11.937 | 4.522 | A | F
Barracuda | (own site) | 17.875 | 9.963 | 11.391 | 5.938 | F | F
    | Royal College of Physicians* | 3.898 | 2.502 | 1.99 | 1.553 | A | C
Brocade | (own site) | 22.033 | 8.393 | 15.186 | 5.841 | A | F
    | Petroleum Institute** | 8.243 | 3.557 | 7.234 | 3.201 | A | F
    | San Diego County Credit Union | 6.154 | 2.179 | 3.782 | 1.733 | A | F
    | Limelight | 12.635 | 0.765 | 3.39 | 0.33 | C | D
Cisco | (own site) | 9.683 | 3.536 | 2.83 | 0.625 | F | F
Citrix | (own site) | 3.788 | 2.66 | 2.569 | 2.136 | A | A
    | Live Nation | 11.067 | 5.223 | 3.051 | 1.449 | A | F
    | MRMV | 9.304 | 1.942 | 4.284 | 1.619 | A | F
Crescendo | (own site) | 14.372 | 6.576 | 9.248 | 6.464 | A | F
    | Carfax | 5.081 | 1.798 | 3.599 | 1.22 | B | D
    | Peirce College | 5.527 | 1.376 | 2.823 | 1.029 | A | A
F5 | (own site) | 5.084 | 3.109 | 1.258 | 0.879 | A | A
    | Epson | 8.941 | 3.329 | 2.082 | 0.854 | A | B
    | Transplace | 3.501 | 2.267 | 4.402 | 2.432 | A | F
    | Averitt | 3.861 | 2.18 | 1.352 | 0.896 | A | A
Radware | (own site) | 4.097 | 1.295 | 1.879 | 0.327 | A | A
    | AccuWeather | 8.592 | 2.474 | 6.422 | 1.327 | A | A
    | Ace Hardware | 12.311 | 2.178 | 3.11 | 2.035 | A | D
    | Computershare | 2.208 | 0.866 | 1.155 | 0.851 | A | F
    | Play65 | 3.926 | 2.116 | 3.013 | 2.116 | A | F
Zeus | (own site) | 3.307 | 2.059 | 2.218 | 0.502 | A | F
    | Gilt Groupe | 3.11 | 0.673 | 0.946 | 0.164 | A | A
    | STA Travel* | 10.937 | 0.996 | 4.462 | 0.745 | A | A
    | Triboo*** | 6.96 | 2.791 | 5.148 | 2.942 | A | A
Averages | | 7.724 | 2.923 | 4.163 | 1.846 | |

Observations

  • Page load and start render times were all over the map. The average load time for first-time visitors was 7.724 seconds, which is pretty mediocre.
  • Surprisingly slow page load times (9+ seconds) from half the providers tested: Cisco, Brocade, Array, Crescendo and Barracuda.
  • Only four of the providers got As in both keep-alives and compress text, two of the easiest performance gains.
  • Two of the providers, Cisco and Barracuda, both got Fs in keep-alives and compress text.
  • Overall, half the sites failed to enable keep-alives.

Some mitigating factors

ADC vendors can't force their customers to use all the product's features. If a customer has turned off compression and keep-alives, the vendor can't be held totally accountable for this. Though if compression and keep-alives were never turned on — something we can't know unless the customer tells us — to me this is a vendor mistake. I also consider it quite telling if a vendor hasn't turned on these features for their own sites.
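Checking the two easy wins on any site takes only a few lines; the sketch below (Python standard library, placeholder host) reads the same response headers the letter grades above are based on. It's approximate: under HTTP/1.1, connections are persistent unless the server explicitly answers "Connection: close".

    import http.client

    HOST = "www.example.com"   # placeholder: substitute the site you want to grade

    conn = http.client.HTTPSConnection(HOST)
    conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
    resp = conn.getresponse()
    resp.read()

    encoding = resp.getheader("Content-Encoding", "(none)")
    connection = resp.getheader("Connection", "(not set)")
    print(f"status: {resp.status}")
    print(f"Content-Encoding: {encoding}    <- want gzip or deflate")
    print(f"Connection: {connection}        <- 'close' means keep-alive is off;")
    print("                                   unset usually means persistent under HTTP/1.1")
    conn.close()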

Conclusions

There were no perfect winners. Cisco comes across as the poorest performer: its own site is slow and doesn't have keep-alives and compression enabled, and it presents no ADC case studies to prove that its own performance is a fluke. Array is next lowest, for similar reasons. Then Barracuda (whose customer site, interestingly, performs much better than its own), and then Brocade.

Zeus comes off as the best of the lot: decent load times, plus it was the only provider whose case studies all passed both keep-alives and compression. However, it failed to enable compression on its own site, which seems like an incredible oversight. F5, Citrix and Radware are middling, followed by A10.

This begs several questions

Why spend so much money on a device and then not use the basic features to enable keep-alives and compression? Are the products too complicated? Are customers not getting the support they need to use them properly?

If we're seeing results like these, but these sites are being touted as success stories, what's the disconnect? Are there different definitions of performance in play? If so, what are they?

A great big fat caveat

As I mentioned at the top of this post, companies use ADCs for several reasons not related to performance: scale, security, and availability. It's overly simplistic to suggest that you choose your ADC provider based solely on its ability to provide web performance. You need to take a lot of other things into consideration: core functionality, usability, support, price.

All I'm proposing is that when you're creating your ADC scorecard, add "web performance" to your list of criteria and make sure you ask whether or not they eat their own dogfood.

*Test conducted from Gloucester, UK.
**Test conducted from Delhi, India.
***Test conducted from Paris, FR.
