Doesn’t save bandwidth but might make users happy

Posted: November 18th, 2004
Filed under: Web Speed and Performance


As you likely know, we here at Port80 are very concerned with improving Web site performance. We are constantly considering new code optimization techniques for w3compiler, or improvements to our server-side products, that address the mantra: deliver less, less often. Recently, however, some in the group have been questioning that mantra by deliberately sending more than is strictly necessary, using the idea of prefetching.

The basic idea of prefetching is to use the idle time a user spends looking at a page to fetch the objects they are likely to need next.  The prefetched items are placed in the local disk cache, so if the user does make the expected choice, the next page pops right up.  Of course, pre-seeding the cache is a dangerous game, because the user may not make the choice you expect.  In that case the bad guess doesn't necessarily cost the user anything (unless they pay by bandwidth used), since they probably never noticed the background download; your site and server, however, did extra work and consumed bandwidth that is never actually used.  Numerous research projects have attempted to determine automatically what to pre-seed a cache with by analyzing a site's access logs and adding prefetch hints dynamically, but there will always be users who don't fit the normal browsing pattern, so you are bound to download content you don't need in order to get the usability gain of improved perceived site speed.

Rather than worry too much about implementation details for now, take a look at prefetching for yourself.  Probably the easiest way to do this is with Mozilla; see its Link Prefetching FAQ.  We'd be interested in hearing any thoughts customers have on prefetching technology, as we have been toying with the idea at Port80 off and on for some time.
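If you want to see what this looks like in practice, the hint Mozilla acts on is a simple piece of markup in the page's head (the URL below is just a placeholder for whichever next page you expect the user to choose):

    <!-- placed in the <head> of the current page -->
    <link rel="prefetch" href="/next-page.html">

Mozilla fetches the hinted resource during idle time and stores it in the cache, so a later navigation to that URL is served locally.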


My Compressed Pages are Stale on IIS

Posted: November 15th, 2004
Filed under: Web Speed and Performance


The HcCacheControl metabase setting is a sneaky one. Here's why: it was recently discovered that when a Windows server was upgraded from Windows 2000 to Server 2003 (rather than given a fresh install), HcCacheControl was mysteriously assigned a value of "max-age=86400" when HTTP compression was enabled. This caused some weird behavior: enabling compression appeared to make the server serve stale pages, as changes to recently updated pages were not showing up when requested.

But the source of this problem has nothing to do with compression. In fact, it doesn't even lie inside the server: it's the browser, and the browser is just doing its job. With this metabase setting in place, IIS was sending compressed pages with a Cache-Control: max-age=86400 header, which told the browser not to make another request to the server for 24 hours. Forcing a Ctrl+F5 refresh fetches fresh data; otherwise the page comes straight from the browser cache.
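For reference, the response causing the behavior looks roughly like this (the URL and most other headers are omitted; the Cache-Control value is the one the upgraded setting produced):

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip
    Cache-Control: max-age=86400

With max-age=86400 the browser is allowed to reuse its cached copy for a full day, 86,400 seconds, without revalidating against the server.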

Done on purpose as part of a site-wide caching policy, this could be a very smart move: taking control of these cache-related headers yields big performance gains and saves server resources. Having them changed without your knowledge, however, is sure to lead to confusion.

If you notice weirdness like this, always check your headers. It can save a lot of troubleshooting time.
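One quick way to do that, assuming you have a command-line client such as curl available (the URL is just a placeholder), is to dump the response headers for a compressed request:

    curl -s -D - -o NUL -H "Accept-Encoding: gzip" http://www.example.com/page.asp

The -D - option prints the response headers to the console, so any unexpected Cache-Control or Expires value shows up immediately.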


A Very Brief History of Internal IIS Compression

Posted: November 12th, 2004
Filed under: Web Speed and Performance


Internal compression in IIS 5 has been plagued by both server-side and client-side bugs.

The server-side bugs that make internal IIS compression unusable in most real-world environments include:

  • The 2048 Bytes bug. (The way this bug is described is misleading, because it categorizes the problem as a browser issue when in fact it can be corrected on the server, as compression filters do.)
  • To a lesser degree, bugs like the Uppercase Characters bug.

There are also numerous client-side bugs that make internal IIS compression, as shipped, practically unusable in most environments. Andy King outlines the nature of these bugs in his short article on HTTP compression.

The root of all of these browser bugs is that browsers sometimes lie: they claim (in their Accept-Encoding header) to handle gzip decompression in all cases, when in practice they choke in a limited number of situations. For example, a browser may break when trying to decompress a particular file type or, even weirder, mistake a cached compressed file for a decompressed one and try to render the compressed content. In addition to the longstanding, well-known lying-browser bugs, new ones continue to crop up, although they are becoming more and more obscure.
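To be concrete about what that claim looks like, the promise is carried in the request itself; a browser sending the header below is asserting it can decompress a gzip response body for any resource it asks for (the request line here is just illustrative):

    GET /styles/site.css HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip, deflate

The lying-browser bugs arise when a browser that sends this header nonetheless fails to handle the compressed response it receives for certain content types.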

The server-side bugs were fixed in IIS 6. Lying browsers, however, are still widespread, so the key to implementing compression on IIS 6 is using a product that includes browser sensing. A product like ZipEnable provides this preconfigured browser sensing, allowing IIS 6 compression to work in the real world.

Another impediment to implementing compression on IIS has been an inadequate GUI. The ability to configure at multiple levels becomes crucial with Web applications of any complexity; being able to easily include or exclude specific files, directories, and sites often makes the difference between using compression and not using it at all. The built-in GUI for IIS compression has historically made this so difficult (actually impossible) that the user's only choice was an all-or-nothing one. Users would often encounter one of these many bugs, shut off compression, and write it off altogether, forfeiting the many benefits it offers. In short, the combination of server-side bugs, client-side bugs, and an inadequate GUI has sabotaged many efforts to implement internal IIS compression. These conditions are what have given rise to third-party compression products like httpZip for IIS 4/5 and ZipEnable for IIS 6.
