There are three initial questions most users have when evaluating an HTTP compression tool like httpZip:
- When should I expect compression to happen?
- How can I tell when compression is happening?
- How can I tell how much compression I am getting?
When should I expect compression to happen?
In order to receive compressed content, the browser must inform the server that it is capable of decoding such content. It does this by sending the Accept-Encoding HTTP request header, specifying which compression algorithms it can handle:
Accept-Encoding: gzip, deflate
As a general rule, httpZip looks for this header in the request and will not compress the response unless it is present. The value can be gzip, deflate, or both.
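This negotiation can be sketched as a small function. The sketch below illustrates the general rule only; the function and its header parsing are our own illustration, not httpZip's internal logic:

```python
def accepted_encodings(headers):
    """Return the compression algorithms listed in a request's
    Accept-Encoding header (empty set if the header is absent)."""
    value = headers.get("Accept-Encoding", "")
    return {token.strip() for token in value.split(",") if token.strip()}

# A request that carries the header is a candidate for compression:
print(sorted(accepted_encodings({"Accept-Encoding": "gzip, deflate"})))  # ['deflate', 'gzip']
# A request without it should always receive an uncompressed response:
print(sorted(accepted_encodings({})))  # []
```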
When the Accept-Encoding header is present, the response becomes a candidate for compression. However, depending on a variety of other conditions, it still might not be safe or worthwhile to compress this particular response, or to do so at the maximum level. Therefore, serious compression tools like httpZip employ several additional tests. In the case of httpZip, these tests are based on a variety of configuration settings:
- The MIME type of the content being returned to the browser: Certain MIME types do not benefit from compression at all.
- Known browser limitations: While a given type of content may be compressible, certain browsers might have difficulty decompressing it, and so there need to be exceptions to the MIME type settings for these browsers.
- Static versus dynamic files: The added processing burden of dynamic page generation technologies like ASP leaves fewer resources available to do compression in a timely fashion on the resulting content. This can be partly offset by using a lower compression level for file extensions associated with dynamic content, effectively trading some potential size savings for faster processing.
- Size of the file being requested: If a file is very small or very large, the cost in processor time to compress it can exceed the benefit of any reduction in size.
httpZip goes through each of the above tests to make certain that a request should be compressed and, if so, at what level. Obviously, one should expect compression (at a given level) only if the request passes these tests. For example, content of type text/css is by default not compressed for Internet Explorer, while ASP pages are compressed at a reduced level for all clients.
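The chain of tests might be sketched as follows. Every setting here (the MIME table, the browser exception, the levels, and the size limits) is a made-up illustration of the kind of decision httpZip makes, not its actual configuration or defaults:

```python
# Hypothetical settings loosely mirroring the tests described above.
COMPRESSIBLE_TYPES = {"text/html": 9, "text/css": 9, "text/plain": 9}  # type -> level
BROWSER_EXCEPTIONS = {("MSIE", "text/css")}   # e.g., skip CSS for Internet Explorer
DYNAMIC_LEVEL = 4                             # reduced level for dynamic content (ASP)
MIN_SIZE, MAX_SIZE = 256, 50_000_000          # size window, in bytes

def compression_level(mime_type, browser, is_dynamic, size):
    """Return a compression level (1-9), or None if the response
    should be sent uncompressed."""
    if mime_type not in COMPRESSIBLE_TYPES:
        return None                            # MIME type does not benefit
    if (browser, mime_type) in BROWSER_EXCEPTIONS:
        return None                            # known browser limitation
    if not MIN_SIZE <= size <= MAX_SIZE:
        return None                            # too small or too large to pay off
    return DYNAMIC_LEVEL if is_dynamic else COMPRESSIBLE_TYPES[mime_type]

print(compression_level("text/css", "MSIE", False, 4096))      # None
print(compression_level("text/html", "Mozilla", True, 4096))   # 4
print(compression_level("text/html", "Mozilla", False, 4096))  # 9
```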
All of these httpZip configuration settings can be inspected or changed in the Advanced Configuration Preferences section of the httpZip property sheet.
How can I tell when compression is happening?
Our free online Compress Check tool is a good place to start. While you could trust httpZip's (or another product's) own reporting mechanism, a better way to answer both this question and the next is to look at the conversation between browser and server at the HTTP level. Since browsers generally do not display protocol-level data, you will need a tool that lets you see what the browser usually keeps hidden. Fortunately, there are a number of options:
- Use a protocol analyzer. Tools such as Trace Plus from WebDetective are specialized network monitors that focus on specific applications and protocols. WebDetective lets you trace the HTTP requests from, and responses to, your browser, providing much the same view as a full-blown packet sniffer while running on the client side and automatically filtering out irrelevant network traffic.
- Use a browser extension such as LiveHTTPHeaders for Firefox or Chrome. These extensions provide functionality similar to WebDetective's, but reside directly in the browser, which makes them probably the most convenient way to inspect response headers as you navigate a site. Here is a 200 OK blog post on other HTTP inspection tools of note.
- Use an HTTP troubleshooting tool such as the WFetch utility from Microsoft to simulate a browser's interactions with the server. In this case, you will have to add the necessary HTTP request header (Accept-Encoding: gzip, deflate) in order to trigger compression on the server. On the other hand, this makes it easy to toggle the client's acceptance of compression and see whether the server responds accordingly: simply make some requests that include the header and some that do not. Microsoft also offers WCAT (the Web Capacity Analysis Tool) and the Web Application Stress Tool, among others, in the IIS 6.0 Resource Kit for HTTP load and stress testing.
- Use a network packet sniffer. If you are running Windows NT 4.0 or 2000 Server, you can use the Network Monitor tool (netmon.exe) which ships with these systems as an optional component. Network packet sniffers allow one to inspect network traffic at the protocol and data levels. By focusing on the HTTP portion of this traffic, one can examine the relevant headers and data as they are exchanged between browser and server in the course of compression testing.
Once you have decided on a tool, the next step is to use it to trace or issue a request for a compressible resource and inspect the response to see if it was in fact compressed -- and, if so, how many bytes were saved by compression.
In common with other compression tools, httpZip adds a Content-Encoding header to compressed responses, which lets the browser know that it must decompress what follows using a particular algorithm:
Content-Encoding: gzip
In general, the presence of this Content-Encoding: gzip header (or Content-Encoding: deflate, depending on how you have httpZip configured) is a reliable indicator that the response was compressed. In addition, some tools (such as netmon) allow you to see the raw (that is, still-compressed) data in the entity body of the response (the part following the headers), making it obvious when compression is happening: the data shown by the sniffer will not be the human-readable ASCII text you are used to seeing when you "view source" on a page, but a series of illegible characters.
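You can reproduce this effect locally with Python's standard gzip module (the HTML payload here is an arbitrary stand-in for a real response body):

```python
import gzip

body = b"<html><body>Hello, compressed world!</body></html>"
compressed = gzip.compress(body)

# The compressed entity body is binary, not readable ASCII. A sniffer shows
# it verbatim; gzip streams always begin with the magic bytes 1f 8b.
print(compressed[:2].hex())             # 1f8b
print(compressed.startswith(b"<html"))  # False
# Decompressing recovers the original -- this is, in effect, the browser's job.
assert gzip.decompress(compressed) == body
```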
How can I tell how much compression I am getting?
While httpZip and some of its competitors have logging and reporting mechanisms that will give this information, it is always a good idea to be able to verify gains from compression independently. Fortunately, this is easy to do once you are able to inspect the HTTP traffic between client and server.
The Content-Length header gives the length of the HTTP message's entity body in bytes. It is always present in compressed responses, and is usually present in uncompressed responses as well. In a compressed response, it should have a smaller value than when the same file was requested without compression (either because compression was not enabled on the server side, or because the Accept-Encoding header was not sent in that particular request).
Given the length of the original entity body without compression, it is trivial to calculate the percentage of bytes saved. Suppose, for example, that a response whose entity body was 20,000 bytes long prior to compression drops to 5,000 bytes with compression:
(20000 - 5000) / 20000 = 0.75
The compression savings in this case is 75%. This savings can now be compared to that achieved with other HTTP compression tools to estimate which tool will give greater bandwidth savings over time.
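The same arithmetic works for any pair of Content-Length values; a one-line helper (the function name is ours) makes it repeatable across test runs:

```python
def savings_percent(uncompressed_len, compressed_len):
    """Percentage of entity-body bytes saved, computed from the
    Content-Length values of uncompressed and compressed responses."""
    return 100 * (uncompressed_len - compressed_len) / uncompressed_len

print(savings_percent(20000, 5000))  # 75.0
```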
Naturally, for a complete evaluation one would also want to test the behavior of different HTTP compression tools under load and stress, in order to determine whether the compression savings seen in single-request testing like this would hold up when the Web server is handling the traffic expected in the production environment. We hope that your evaluation of httpZip, the leading compression tool for Microsoft IIS Web servers, goes very well -- let us know where we can help!