HTTP compression solves Microsoft IIS Web server performance issues by sending less data to the browser in every response, speeding up applications while reducing bandwidth.
Understanding HTTP Compression
Settings, Migration, File Formats, Scripting, and Programming Languages
Optional HTML/CSS Code Optimization
Servers, Browsers and Other Important Topics
Results and Reporting
Compatibility with Other Port80 and Third Party Products
What about decompression?
Any modern browser that supports compression will automatically handle the decompression. Most modern browsers, and all of the most common ones (Internet Explorer, Mozilla, Firefox, Netscape, Safari), support both GZip and Deflate encoding.
httpZip also includes a rich set of exclusions to handle known instances of poor browser decompression implementations, so httpZip will only compress content for browsers that can decompress the response.
Does httpZip recompress every page upon request?
Not unless you want it to. Otherwise, httpZip serves pre-compressed files from its built-in cache directory and compresses content only when it changes. This significantly improves server performance over other comparable products that do not have a built-in cache for compressed pages. The value of using such a cache is enhanced transmission speeds and reduced server loads.
It is worth adding that httpZip features an in-memory compressed content cache, which provides faster access to cached content than a disk cache would.
httpZip's compression cache also has an option for caching dynamically generated compressed files such as ASP, ASP.NET, ColdFusion, Perl, PHP, and JSP. If your dynamic pages are uniquely identified by query strings or cookie values, you will be able to safely cache compressed versions of these files. If they are not, do not use this option. You will find these options under the Caching tab of the httpZip Settings Manager.
httpZip also provides an option to cache streamed or chunked content. This option is located in the Settings Manager under the Caching tab. Select "Enable caching of streamed files" under Other Options, and press Apply. This will buffer the data and essentially undo the streaming/chunked transfer encoding, but the benefit is that the content is then compressed and cached in that form, avoiding unnecessary processing on subsequent requests.
What type of compression is used in httpZip?
There are two basic types of compression encodings that most modern browsers accept: GZip and Deflate. httpZip, ZipEnable, IIS built-in compression, and in fact any origin server software or hardware HTTP compression solution all apply these two encodings.
GZip (GNU zip) and Deflate, from the HTTP specification's point of view, are two very similar lossless compression encodings. Both wrap data compressed with the Deflate algorithm, a combination of LZ77 and Huffman coding, which is usually and confusingly referred to as "raw Deflate" or simply Deflate-encoded data. Deflate (at the HTTP level) is sometimes considered the better encoding because it uses a more compact data integrity check, but developers have tended to favor GZip over Deflate, given the confusion caused by the raw compressed data also being called "raw Deflate" or simply Deflate.
You don't have to worry about these details because httpZip checks the HTTP headers sent in each request to see which type of compression the requesting browser will accept. If the browser accepts both forms of compression, httpZip will always select GZip compression over Deflate. Older browsers only accept Deflate, and httpZip will use this compression if GZip is not accepted by a browser.
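The relationship between the two encodings can be seen with Python's standard library, where `gzip` produces the GZip wrapper and `zlib` produces the Deflate (zlib) wrapper around the same compressed data. This is an illustrative sketch, not httpZip code:

```python
import gzip
import zlib

payload = b"<html><body>" + b"Hello, compression! " * 200 + b"</body></html>"

# GZip: Deflate-compressed data wrapped in a gzip header with a CRC-32 check.
gzipped = gzip.compress(payload, compresslevel=6)

# HTTP "Deflate": a zlib stream -- the same compressed data with a more
# compact header and an Adler-32 integrity check.
deflated = zlib.compress(payload, level=6)

print(len(payload), len(gzipped), len(deflated))

# Both encodings are lossless and round-trip perfectly.
assert gzip.decompress(gzipped) == payload
assert zlib.decompress(deflated) == payload
```

At identical compression levels the compressed payload bytes are the same; only the wrapper differs, which is why the two encodings are so often confused.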
If you want to know all the details, please contact us, and we can answer any question on HTTP compression you might have.
Why is there a Deflate-only option in httpZip?
httpZip will compress responses for browsers requesting GZip or Deflate encoded content. httpZip now has a Deflate-only option because Port80 Software customers have requested the ability to limit responses to Deflate when they can control which browsers are used to consume their particular application -- and, in a small sense, Deflate is more efficient than GZip. Deflate-only is a good option if you find yourself in this situation.
When would I want to use httpZip instead of IIS built-in compression and ZipEnable?
In general, you should use the integrated compression functionality built into IIS on Windows Server 2003, safely deployed and managed by ZipEnable. However, there are some important cases in which you will not be able to get the job done with IIS's native compression engine -- and where you should use httpZip and ISAPI-based compression on IIS:
- Controlling compression timing collisions with other ISAPI filters or DLLs: You cannot adjust when IIS built-in compression takes place in the request/response life cycle. As such, a third party ISAPI filter operating on a request may collide with IIS native compression to produce a broken or uncompressed page. Because httpZip is itself an ISAPI filter, its compression timing can be adjusted (in IIS Manager, Properties, ISAPI Filters) to correct this issue. This will affect you if you are running Akamai's Content Distribution Network (CDN), third party URL rewriters or other ISAPI filters, Java platforms such as BEA WebLogic, IBM WebSphere, or Tomcat, or any other application server or program that modifies URLs with a DLL or ISAPI filter.
- Caching of dynamic compressed files: httpZip's compression cache avoids needless recompression to conserve CPU resources and serve previously compressed pages faster. You can do this for static files on IIS, but there is no compression cache for dynamic files like ASP, ASP.NET, ColdFusion, Perl, PHP, and JSP. If you have dynamic files with unique query strings or cookies and infrequently updated content (like a press release or text in a content management system), take advantage of httpZip's compression cache for dynamic files. If your dynamic pages do not have unique query string or cookie values, do not use this option.
- Reporting: httpZip provides a complete view into compression and bandwidth savings via HTML reports, whereas there is no reporting in IIS native compression. Here's a tip to do reporting manually -- add an IIS log entry for Bytes Sent (in IIS Manager, Properties, Web Site, Enable logging, Properties, Advanced, then enable logging for Bytes Sent (sc-bytes)), then compare this data against the original file sizes for ad hoc reporting.
- Controlling compression by MIME type: IIS built-in compression uses the file extension and/or location to determine whether a file should be compressed. If you need to control HTTP compression by MIME type (for example, if you have an application file like ASP or an ISAPI filter that outputs content with different MIME types), httpZip's granular controls for compression by MIME type are the only way to accomplish this on IIS.
- Easy migration and settings transfers (to new server or between virtual servers): IIS built-in compression does not allow you to import or export settings from one Web server to another, or to copy settings from one tuned virtual server or Web site to another for compression. This can be easily accomplished with httpZip.
- Optional code optimization for better savings: httpZip offers optional HTML and CSS code optimization, which can reduce file sizes by an additional 5% to 10% prior to compression, based on w3compiler technology. IIS native compression and other Microsoft products do not offer this technology.
- Support for decompression on clients running Symantec's Norton Internet Security software: httpZip is one of the few compression solutions on the market that can handle browsers running in conjunction with Norton Internet Security. IIS will not compress requests from browsers on clients also running Norton Internet Security, so httpZip is the ideal solution here. Learn more.
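The manual reporting tip above boils down to simple arithmetic: compare logged sc-bytes values against the original file sizes. A sketch with hypothetical numbers (real values would come from your IIS logs and file system):

```python
# Each row: (path, original file size in bytes, logged sc-bytes sent).
# These numbers are illustrative examples, not measured data.
rows = [
    ("/index.html", 48200, 9400),
    ("/styles.css", 21500, 4800),
    ("/app.js",     96300, 27100),
]

total_orig = sum(r[1] for r in rows)
total_sent = sum(r[2] for r in rows)
savings = 1 - total_sent / total_orig

for path, orig, sent in rows:
    print(f"{path:15s} {orig:>8d} -> {sent:>8d}  ({1 - sent / orig:.0%} saved)")
print(f"Overall savings: {savings:.0%}")
```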
What is the difference between httpZip and ZipEnable?
httpZip provides ISAPI-based HTTP compression for IIS 6, 7, and 7.5 on Windows servers. ZipEnable is a configuration tool to manage and safely deploy the built-in compression engine in IIS on Windows Server 2003. Put simply, httpZip is a stand-alone compression engine for use with IIS, whereas ZipEnable offers better control of the compression engine built into IIS.
There are many flavors of IIS, and Port80 Software covers HTTP compression on all Microsoft Web servers.
With the launch of Windows Server 2003 and IIS 6.0, Port80 Software carefully evaluated IIS native compression and recognized that Microsoft had made compression functionality a priority. IIS ships with a compression engine built into the core Web server code, allowing for safe and fast HTTP compression with almost zero resource penalties. As part of the core Web server code, IIS native compression processes files faster than any ISAPI filter, making it the fastest HTTP compression solution available on any Windows platform.
Port80 Software recommends IIS native compression managed with our ZipEnable tool, the only solution for browser compatibility and configuration of IIS built-in compression. Browser compatibility sensing is vital -- without it, you may send compressed content to a browser that cannot decompress it, generating an error for the end user in rendering the file or possibly crashing their browser.
The idea of a GUI to manage IIS's "under the hood" metabase settings evolved out of what httpZip customers appreciate most: configuration. While you can control IIS compression settings programmatically by editing the intricate metabase, there is no detailed graphical user interface for IIS compression configuration, nor a wizard to help you get started. Port80 Software, leveraging its experience with compression and IIS, built ZipEnable as a low cost tool for safe deployment and management of IIS built-in compression.
Sometimes, you will need to use httpZip on IIS for HTTP compression, as the built-in compression engine in Windows Server 2003 does not cover every scenario or have all the features you may want in a compression solution. Learn more about when to leverage httpZip compression to accelerate IIS Web servers.
On IIS, does httpZip work in IIS 5.0 Isolation Mode (or backwards compatibility mode)?
No. We have left that work to ZipEnable and IIS native compression for the time being.
If you were to clear the "IIS 5 Isolation Mode" checkbox in IIS, a new folder ("Application Pools") would appear in the ISM at the same level as the Web Sites and Web Service Extensions folders. In native mode, IIS sites always run inside an Application Pool, and each App Pool is serviced by one or more worker processes (instances of w3wp.exe). This is in contrast to everything running inside inetinfo.exe and/or dllhost.exe. These worker processes provide applications with true process isolation (a crash in one won't destabilize the others), resource quotas, health monitoring, recycling and failover protection, and so on.
Unfortunately, all that added sophistication required a total rewrite of the IIS internals. This broke backwards compatibility for certain old-style COM and ISAPI applications. Even more unfortunately, IIS 5 Isolation Mode is not really just the old IIS 5.0 code paths (the changes were too radical for that), but rather an emulation mode with its own quirks. So, even though httpZip runs just fine on IIS 5.0 (it always has), it does not run properly in this emulation mode, in which IIS 6 pretends to be IIS 5.
If this feature is very important in your estimation, please let us know, and our development team will definitely consider it for a future version.
How does httpZip determine what files to safely compress?
There are all sorts of references out there to "smart compression" technologies -- our process is very simple and logical. httpZip will only compress what can or should be compressed, based on a series of tests:
- httpZip scans for the Accept-Encoding header in the user's request; its value must include GZip, Deflate, or both. If so, the process continues.
- Top-level MIME type of the would-be response is checked. Is this a top-level MIME type that httpZip is configured to compress? If so, the process continues.
- The browser list for the requesting User Agent is reviewed. If there is a match, httpZip checks to see if the browser is listed as being able to handle the would-be response's top-level MIME type. If yes, the process continues.
- The previous two steps are repeated for the MIME sub-type.
- If all of these checks have been passed, and there are no exclusions for the request, then httpZip will compress the Web server's response.
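The test sequence above can be sketched in a few lines of Python. The configuration tables here are hypothetical stand-ins for httpZip's actual Settings Manager data:

```python
# Hypothetical configuration: which top-level MIME types are compressible,
# and which MIME types each known browser can safely decompress.
COMPRESSIBLE_TOP_TYPES = {"text", "application"}
BROWSER_ALLOWED = {
    "Mozilla/5.0": {"text": {"html", "css", "plain"}, "application": {"javascript"}},
}

def should_compress(accept_encoding, user_agent, mime_type):
    # 1. The request must accept gzip or deflate encoding.
    encodings = {e.strip().lower() for e in accept_encoding.split(",") if e.strip()}
    if not encodings & {"gzip", "deflate"}:
        return False
    top, _, sub = mime_type.partition("/")
    # 2. The top-level MIME type must be configured for compression.
    if top not in COMPRESSIBLE_TOP_TYPES:
        return False
    # 3-4. The browser must be known to handle this top-level type and sub-type.
    allowed = BROWSER_ALLOWED.get(user_agent)
    if allowed is None or top not in allowed or sub not in allowed[top]:
        return False
    # 5. No exclusions matched, so compress the response.
    return True

print(should_compress("gzip, deflate", "Mozilla/5.0", "text/html"))  # True
print(should_compress("identity", "Mozilla/5.0", "text/html"))       # False
print(should_compress("gzip", "Mozilla/5.0", "image/gif"))           # False
```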
The compression settings for top-level and sub-level MIME types, as well as browser exclusions/exceptions can be seen clearly under the Compression tab of the Settings Manager.
I'm a hosting provider. Is compression realistic for all the different sites I host?
As bandwidth expenses are of particular concern to hosting providers, HTTP compression is especially valuable to ISPs and Web hosters. However, hosting providers must be able to configure compression for any number of different sites, with different content, using different technologies, and to be able to add new sites with a minimum of hassle.
httpZip includes a set of compression defaults designed with hosting providers in mind. The conservative default settings used in httpZip will allow you to confidently and quickly set up compression for new sites on your servers.
And, of course, httpZip licenses cover a single IIS Web server, unlimited virtual servers/sites. So you can add compression for as many new clients as your servers can handle.
How can I import and export httpZip settings from one server to another?
httpZip can import and export settings for an entire Web server with multiple sites or virtual servers using .zeg files (Port80 Registry Files), which are text files with registry key settings. A .zeg file exported from httpZip will include all compression attributes for the Default Settings in httpZip for that Web server and the compression attributes for any virtual server or Web site that you see listed in the httpZip Settings Manager. These files can be used to migrate settings from one httpZipped server to another or to load a previously exported .zeg file.
To import settings, open the httpZip Settings Manager. Select File, then Import Settings. A dialog will pop up for you to navigate to your .zeg file. Click Open, and you will receive a confirmation dialog that the import was successful. The Default Settings attributes and attributes for any Web sites or virtual servers with the same names as those listed in your .zeg file will be imported and applied to httpZip. If you have sites with names other than those listed in the .zeg file, httpZip settings attributes will be removed for those additional sites, essentially erasing any tuning you have done for that site, so be careful.
To export settings, similarly open the httpZip Settings Manager. Select File, then Export Settings. A dialog will pop up for you to name and place your .zeg file in a particular location. Click Save, and you will receive a confirmation dialog that the export was successful.
Note: Licensing, system state information and master installation directory will not be transferred by an import or an export of a .zeg file.
Can I copy httpZip settings from one site to another on the same server?
Open the httpZip Settings Manager. Select the site that you want to copy settings from (a green arrow will appear next to that site). Hold the CTRL key, highlight one or more other sites, then press the Apply button. All settings attributes from the first site you selected will be copied to the site(s) selected.
Does httpZip work with scripting languages like ASP, ASP.NET, ColdFusion, Perl, PHP, and JSP?
Yes, httpZip will compress scripted pages transparently for ASP, ASP.NET, ColdFusion, Perl, PHP, and JSP.
Does httpZip work with other application servers?
httpZip has been tested with IIS running in conjunction with IBM WebSphere, BEA WebLogic, and Tomcat.
Should I configure httpZip to cache dynamic pages such as ASP, ASP.NET, and CFM?
It depends on the role of these pages in your site. The operation of the httpZip cache is very similar to that of any other HTTP caching mechanism, and, as a general rule, dynamic pages make such caching more difficult. This is because HTTP caching is date-sensitive, while the content of a dynamic page can change without changing the file's modification date -- for example through a database query or server-side include.
If you have dynamic pages that vary based on unique query string parameters or cookie values, you can enable httpZip caching for pages of that type, and then enable "dynamic caching," which will cache a unique copy of the page each time it is requested with a new set of query parameters. The Cache dynamic content feature can be found under the Caching tab of the Settings Manager.
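Conceptually, dynamic caching means the cache key must incorporate the query string (and, where configured, cookie values) rather than the path alone. A hypothetical sketch, not httpZip's actual implementation:

```python
import hashlib
from urllib.parse import parse_qsl, urlsplit

def cache_key(url, cookies=None, vary_cookies=()):
    parts = urlsplit(url)
    # Normalize query parameter order so logically equal URLs share one entry.
    query = sorted(parse_qsl(parts.query))
    # Include only the cookies the cache is configured to vary on.
    cookie_part = sorted((k, (cookies or {}).get(k, "")) for k in vary_cookies)
    raw = repr((parts.path, query, cookie_part)).encode()
    return hashlib.sha256(raw).hexdigest()

a = cache_key("/press.asp?id=7&lang=en")
b = cache_key("/press.asp?lang=en&id=7")  # same parameters, different order
c = cache_key("/press.asp?id=8&lang=en")  # different content
assert a == b and a != c
```

This also shows why the option is unsafe for pages that change without a distinguishing query string or cookie: two different responses would collide on the same key.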
Keep in mind that the httpZip cache is a server-side cache for compressed files; for browser and proxy cache control management, you will want to check out CacheRight.
httpZip now compresses Bitmap and PNG files by default. What about other images like GIFs and JPEGs?
httpZip will compress any type of file that you like -- it is the most configurable of HTTP compression tools for IIS -- but we have very safe settings for image compression to avoid known browser decompression issues. You will notice in the main Compression tab that we do compress two image file types: BMP and PNG (in the case of Internet Explorer 7). These files are not usually compressed when created, and so they benefit from compression.
Port80 did not add GIF and JPEG files to default compression in httpZip for a good reason -- these files are already compressed with lossless compression when they are created in an image editor (like Photoshop or Acrobat). You can compress JPEG files by adding the MIME sub-type "jpeg" under the top-level MIME type "image" under the MIME Types window on the Compression tab of the Settings Manager (and similarly add GIF files for compression in httpZip), but the files may actually increase in size or the images may be distorted from duplicative compression. Our default file types were chosen for safety of compression and for maximum performance of the Web server, so we excluded the problematic GIF and JPEG files from compression. Feel free to test these files for httpZip compression in your own application environment, especially if you can control the type of browser requesting the application (usually in a business-to-business context) and have tested GIF or JPEG compression yourself.
Other companies claiming to compress GIF or JPEG images on the fly are dealing in lossy compression, not the lossless compression used by httpZip -- the images will be smaller, but you lose data (as the name "lossy" implies), and the images can become distorted. We recommend that developers and designers optimize their GIF and JPEG images as they create them in image editors -- the only way to truly optimize an image file for transmission over the Web.
httpZip now compresses PDF files by default. Are there any limitations to PDF compression with httpZip?
After reviewing the browser issues with PDF compression (Portable Document Format or .PDF files), Port80 decided that there was a viable possible solution and added default support for PDF compression in httpZip. The only limitation to consider is that httpZip will only compress PDFs over 15,000 bytes or 14.65 kb in size. This limitation was put in place due to irresolvable browser decompression bugs for files smaller than this size. Otherwise, you should now see some compression benefits for speed and bandwidth with httpZip's default PDF compression support.
Can compression help speed up and ease the strain of my RSS blog feeds?
Yes, compression can be a big help in mitigating the strain from traffic surges caused by RSS content updates.
To further reduce file sizes, httpZip includes a built-in parser for HTML/CSS Code Optimization that eliminates white space and comments and, in the case of HTML, converts tag names to lower case. Removing this unnecessary data reduces file size by roughly 10% on average before compression; HTTP compression then accounts for the majority of the overall size reduction. The unique combination of HTTP compression and optional HTML/CSS Code Optimization makes httpZip the leader in bandwidth savings on Microsoft IIS platforms.
We recommend testing the HTML/CSS compression to see how your pages handle white space and comments removal before moving to a live serving environment. The integrated white space remover can degrade throughput (requests per second) substantially, especially on large pages. HTML/CSS Code Optimization is not enabled by default, and we strongly recommend that it be used only for pages that are also being cached by httpZip.
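For a rough idea of what white space and comment removal involves, here is a deliberately simplified sketch (httpZip's real parser is far more conservative; the regular expressions below are illustrative only):

```python
import re

def optimize_html(source):
    # Strip HTML comments, but spare conditional comments ("<!--[if ...]"),
    # since older Internet Explorer versions rely on them.
    source = re.sub(r"<!--(?!\[if).*?-->", "", source, flags=re.DOTALL)
    # Collapse runs of white space between tags.
    source = re.sub(r">\s+<", "><", source)
    return source.strip()

html = """
<html>
  <!-- internal note: remove before release -->
  <body>
    <p>Hello</p>
  </body>
</html>
"""
slim = optimize_html(html)
print(len(html), "->", len(slim))
```

Note how easily a naive approach can go wrong (for example, white space inside `<pre>` blocks is significant), which is exactly why testing before live deployment is recommended.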
To enable a file for optional HTML/CSS compression on httpZip, follow these steps:
- The file must be in HTML or CSS format.
- Open httpZip Settings Manager.
- Under the Optimization tab of the Settings Manager, select Enable Code Optimization.
- You may then select white space, comments, and lower case tag names individually for HTML and CSS.
- We recommend that httpZip's compression cache is turned on when you use this feature to minimize processing penalties after the first request (select Caching, then Enable Caching of httpZip Compressed Content, and then OK). HTML and CSS are included on the cacheable file types list under default settings.
- Press Apply on the main interface to apply your changes to httpZip. HTML and CSS files will now be optimized and compressed on the first request and then cached in the compression cache.
For a complete code optimization solution for developers (code optimization is best done offline, before pages are served), check out w3compiler.
Won't removing comments and white space from HTML/CSS potentially break my pages?
Many HTML reduction methods break pages, but we have made very conservative assumptions about what can be reduced and what cannot be. Optional HTML/CSS Code Optimization testing on complex and even broken HTML shows this type of compression is safe unless you have extremely malformed HTML. In these cases, you will more likely get no savings rather than a ruined page. However, despite significant testing, there may be edge cases in which HTML/CSS Code Optimization can break pages.
Again, we recommend testing the HTML/CSS compression to see how your pages handle white space and comment removal before moving to a live serving environment. If you have a troublesome candidate for examination, submit it to email@example.com, and we'll take a look at it right away.
Why should I bother with HTML/CSS Code Optimization if there is even a slight chance of a problem?
We believe that transporting white space and other non-functional items across the Web is pointless. Letting your end users see a nice "View Source" doesn't improve your site. Furthermore, comments and other information leakage in your code should be removed as they can sometimes assist potential hackers -- source sifting is a common technique used by hackers. Use the Optional HTML/CSS Code Optimization in httpZip to obscure any sensitive information -- keep your "comments" to yourself for your benefit, not an intruder's.
Please try out the w3compiler.
I use XHTML and strict validation. Will HTML/CSS Code Optimization ruin that?
No, optional HTML/CSS Code Optimization is designed to preserve standards-oriented pages perfectly, and pages should validate before and after white space and comments removal.
Will httpZip slow my server or use more system resources?
The current generation of httpZip is the most efficient ISAPI HTTP compression solution available today and will make compression easier to deploy, tune, and manage.
httpZip has a small memory footprint and uses only a minor amount of processing time to compress pages, so the vast majority of users will not see a noticeable degradation of throughput from compression tasks. That being said, there is an inevitable trade-off between compression savings and processor utilization. As one attempts to compress larger and larger files, at some point the cost in processor utilization will overwhelm any benefit from smaller payloads. Similarly, if an extreme number of users simultaneously request content that needs to be compressed, the processor can become overwhelmed.
However, significant degradation of throughput will occur only under extreme traffic and request loads outside of what your Web server likely sees. That being said, as HTTP compression interacts with numerous other server-side processes and is located at a critical juncture in the request-response cycle, we also saw isolated cases in which memory and CPU usage did become a serious concern. In response, we have optimized code paths within httpZip to provide maximum compression performance with minimal resource penalty. As most properly configured servers have more than enough CPU processing power, the point at which compression noticeably degrades throughput has been pushed from unusual traffic loads to only the most extreme conditions like a DoS attack.
Beyond code path optimizations, httpZip includes other features which help ease server strain. The built-in compression cache eliminates redundant compression tasks for static and many dynamic files, freeing up the server to handle even more requests. Adjustable compression levels ensure that, especially for dynamic content, compression tasks are always balanced against processor utilization. As a final safeguard against overtaxing the processor, httpZip also includes minimum and maximum file size thresholds. This is because the application of the compression algorithms (GZip or Deflate) to increasingly large or small entities does not scale in a linear fashion. As discussed below, these file size limits allow httpZip to intelligently choose which otherwise compressible files are not worth the processor time.
httpZip doesn't compress very small or very large files by default. Why is this?
This feature (labeled Response Size Limits under the Exclusions tab), is related to the above question regarding processor utilization. There is an inevitable trade-off between compression savings and processor utilization, since the application of the compression algorithms (GZip or Deflate) to increasingly large entities does not scale in a linear fashion. In short, as one attempts to compress larger and larger files (under a given load), at some point the cost in processor utilization will overwhelm any benefit from smaller payloads.
To deal with this fact, httpZip has minimum and maximum file size settings beyond which it will not attempt compression (by default 500 bytes and 1MB, respectively). Like every other parameter in httpZip, this one is configurable by the user so that it can be adjusted to their server conditions. Lowering the maximum file size will change the trade-off in the direction of lower overall compression and a reduced burden on the processor(s).
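The threshold check itself is trivial; what matters is the reasoning behind it. A sketch using the default limits quoted above:

```python
# Defaults from the text: responses under 500 bytes or over 1 MB are skipped.
MIN_BYTES = 500
MAX_BYTES = 1 * 1024 * 1024

def worth_compressing(body: bytes) -> bool:
    # Tiny responses gain little (encoding overhead can exceed the savings);
    # very large ones can cost more CPU than the bandwidth saved under load.
    return MIN_BYTES <= len(body) <= MAX_BYTES

assert not worth_compressing(b"x" * 100)                # below minimum
assert worth_compressing(b"x" * 50_000)                 # in range
assert not worth_compressing(b"x" * (2 * 1024 * 1024))  # above maximum
```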
Similarly, under the Compression tab, httpZip affords the ability to adjust compression level by both MIME type and, for dynamic content, by file extension. While content such as text/html for example is normally compressed at maximum levels, in the case in which that content is dynamically generated, the compression level is by default adjusted downward, on the assumption that the processor is already bearing an additional burden of helping to generate the page.
Have you tested httpZip for stress and load?
Yes, we have. Using a script that makes requests for a special set of both static and dynamic pages of sizes varying from 10kb to over 100kb, stress tests are run on a server with httpZip enabled and on the same server with httpZip uninstalled. The most important results to Port80 are the average requests per second and the bytes received rate. This provides a comparison of performance and bandwidth usage between a site running httpZip and a site running without httpZip.
Repeated testing reveals that, during tests of equal duration, throughput (the average number of requests handled by the server per second) decreases only under the most extreme traffic and load conditions, in which CPU resources become exhausted. The bytes received rate (average KB/sec) decreases by a factor of 10, indicating substantial bandwidth savings even as throughput is maintained. Naturally, at equal throughput levels, bandwidth savings would be greater still.
Load testing has included scripts run on suites of both dynamic and static test pages, and on local testing copies of commercially deployed Web sites. Scripts are run for durations of between 10 and 24 hours. The bytes received rates for these tests are compared against the stress test result values to check for degradation. These values have remained consistent for tests of long duration. A requests per second rate is generally maintained at 30+ during these tests, representing a load of over 2.5 million requests a day. Test servers are monitored for stability throughout these long-duration tests and are checked for excess resource use over time (memory leaks, unreleased file handles, etc.).
Does httpZip compress over HTTPS (SSL) connections?
Yes. httpZip compresses files over both HTTP and HTTPS (SSL), and decompression is safe over HTTPS as well.
Does httpZip work with caching technologies like proxy servers and content distribution networks like Akamai?
Yes, httpZip supports the use of the Vary header and will work flawlessly with standards-compliant proxy servers and CDN systems.
For background, there is a built-in tension (though not an irreconcilable conflict) between the two ways of improving performance: (shared) caching and compression. Because compression requires content negotiation (that is, because the origin server has to make sure the client can decompress compressed content), a shared cache like a proxy server or edge servers used by CDNs cannot simply assume that compressed content is cacheable. A user agent or browser incapable of decompressing the resource might get the compressed version. This is no different in principle from the problems caused by dynamic content; in a shared cache's ideal world, all resources (or all URL names) have a single representation only, and in the past, this made compression and shared caching incompatible.
Fortunately, there is a mechanism in the HTTP 1.1 standard to help shared caches deal intelligently with negotiated content (such as compressed content) -- namely the Vary header. Vary allows any process on the origin server to tell the proxy cache a) that a given representation of a resource was the result of negotiation and b) which HTTP header(s) was used to do the negotiation. The proxy can then serve the cached result only to those subsequent requests whose relevant headers match those of the request that drove the original negotiation.
Learn more about Akamai and httpZip.
What is "Chunked Transfer Encoding" and what does it have to do with compression?
Chunked Transfer Encoding is a feature in HTTP 1.1 which "modifies the body of a message in order to transfer it as a series of chunks, each with its own size indicator, followed by an OPTIONAL trailer containing entity-header fields
." This improves the Time to First Byte, or perceived page load speed, particularly in the case of large file sizes delivered to low-bandwidth connections.
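The wire format is simple: each chunk is its size in hexadecimal, a CRLF, the chunk data, and another CRLF, terminated by a zero-length chunk. A minimal sketch:

```python
def encode_chunked(parts):
    # Each chunk: hex size, CRLF, data, CRLF; a zero-size chunk ends the body.
    out = b""
    for part in parts:
        out += f"{len(part):x}".encode() + b"\r\n" + part + b"\r\n"
    return out + b"0\r\n\r\n"

body = encode_chunked([b"Hello, ", b"world!"])
print(body)
```

Because chunk sizes are only known as the data is produced, a compressor must either compress each chunk as it passes through or buffer the whole body first, which is the distinction drawn below.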
Not surprisingly, delivering content incrementally makes compression more difficult. Several of our competitors do offer a buffering option which emulates true compression. However, httpZip and IIS native compression are the only two solutions on the market which offer true, transparent Chunked Transfer Encoding for streamed content.
You may select to Compress, or Buffer then Compress chunks, under the Advanced tab of the Settings Manager.
httpZip also provides an option to cache streamed or chunked content. This option is located under the Caching tab of the Settings Manager: select "Enable caching of streamed files" under Other Options, and press Apply. This buffers the data and essentially undoes the streaming/chunked transfer encoding, but the benefit is that the content is then compressed and cached in that form, avoiding unnecessary processing on subsequent requests.
What about browser compatibility? Does httpZip work with all browsers?
Most modern browsers support HTTP/1.1 and content encoding. If a browser requests a page without indicating that it supports these features, httpZip will not apply compression, and the normal, uncompressed page will be delivered. Furthermore, we have added rules to work around known browser-related problems with certain content so that, upon proper installation, httpZip will work without incident in any Web browser.
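The "does the browser indicate support" decision turns on the request's Accept-Encoding header, which may carry q-values. A minimal parser, purely for illustration (httpZip's internal logic is certainly more involved), might look like:

```python
def parse_accept_encoding(header: str) -> dict:
    """Parse an Accept-Encoding header into {encoding: q-value}.
    A q-value of 0 means the client refuses that encoding;
    absent q defaults to 1.0. (Illustrative sketch only.)"""
    prefs = {}
    for item in header.split(","):
        token, _, params = item.strip().partition(";")
        q = 1.0
        for p in params.split(";"):
            p = p.strip()
            if p.startswith("q="):
                q = float(p[2:])
        if token:
            prefs[token.strip().lower()] = q
    return prefs
```

A server would then compress only when, say, `gzip` or `deflate` appears with a nonzero q-value.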
What are Browser Exceptions? You just said httpZip worked with all browsers.
Visitors to your Web site are using a wide variety of different browsers and versions, and it is your responsibility to make sure that you send each user content that his/her particular browser can handle. While most browsers can handle compressed content for most file types, there are always exceptions.
Some competing products claim to use "smart" compression that does all of the browser sensing and excluding for the administrator. The problem with this approach is that there are many additional optimizations that are missed when browser exclusions cannot be configured by the administrator. With httpZip, you decide what is appropriate and safe for your Web applications and users.
In addition to a rich set of exclusions, httpZip makes it easy to edit browser compatibility settings. Known browser exceptions can be seen for each top- and sub-level MIME type under the MIME types menu on the Compression tab of the Settings Manager.
I heard that Symantec's Norton Internet Security interferes with decompression on the client/browser side. Does httpZip handle this?
Many compression solutions, both software-based like httpZip and hardware appliances, have had issues with browser decompression on Windows clients when certain Symantec software tools are running, most notably Norton Internet Security (essentially a software firewall product).
We have developed a unique server-side solution in httpZip to avoid this decompression bug in Symantec's software, so that you can safely send compressed content to compatible browsers running on a client with Norton Internet Security enabled. This feature works in the default version of httpZip and incurs only a very minor performance penalty. If you would like to disable the Norton Internet Security client-side fix in httpZip, please contact Support.
Will HTTP compression hurt my search engine rankings in any way?
No, compression is completely compatible with search engines and their bots -- you have nothing to worry about here.
In fact, compression increases throughput, making it more likely that a search engine robot will receive complete responses when your Web site is spidered, and some smarter search engines like Google even prefer compressed content, as it is less data (and therefore cheaper) for them to download and index.
What kind of compression and performance improvement should I expect from httpZip?
Depending on your page content, your page size may be reduced to as little as 2% of the original size (a 98% reduction). Of course, this depends greatly upon how your content is authored, so actual compression may vary. Using httpZip's optional HTML/CSS Code Optimization for white space and comment removal will further reduce page content, bolstering compression ratios by up to 10%. Our clients have found that they also get 5% to 15% better compression rates from httpZip than from comparable competitors' software that they tested.
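You can get a rough feel for these numbers by gzipping sample markup yourself. In the sketch below, the regex-based comment and whitespace stripping is a crude stand-in for httpZip's HTML/CSS Code Optimization, not its actual algorithm:

```python
import gzip
import re

def compression_report(html: str, optimize: bool = False):
    """Return (raw_bytes, gzipped_bytes, percent_saved) for a page.
    With optimize=True, crudely strip HTML comments and inter-tag
    whitespace first -- a loose imitation of code optimization,
    illustrative only."""
    if optimize:
        html = re.sub(r"<!--.*?-->", "", html, flags=re.S)  # drop comments
        html = re.sub(r">\s+<", "><", html)  # collapse whitespace between tags
    raw = html.encode("utf-8")
    packed = gzip.compress(raw)
    return len(raw), len(packed), round(100 * (1 - len(packed) / len(raw)), 1)
```

Highly repetitive markup (navigation, tables, inline styles) compresses best, which is why real-world ratios vary so widely with authoring style.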
Our online Compression Check tool will give you an estimate of your potential savings with HTTP compression.
How do I know if I am getting any compression savings from httpZip? What if I have trouble with httpZip compression reporting?
httpZip reporting is provided in HTML format and displays file size reductions and other useful information related to HTTP compression. Under the Reporting tab of the Settings Manager, you can view the report for each virtual server in a Web browser and also save a snapshot of the Web-based report in HTML. Actual reports are stored in memory and displayed when a request is made following this convention: http://hostname/httpzipreport/report.htm; the snapshot may be useful for management or archival purposes. On the Reporting tab, you can also control access to the Web-based report via IP restrictions, clear the log history, and set a period at which to refresh the logs.
Troubleshooting httpZip Reports Issues:
If you have trouble accessing the httpZip reports, or the reports seem to contain only partial data, please make sure you had administrator-level credentials when you installed httpZip and that IIS logging itself is turned on (httpZip relies on this IIS functionality). Also, if the correct local IP-to-domain-name mappings are not accessible, you may have trouble reaching the reports from within the httpZip Settings Manager's Reporting tab. If this happens, simply navigate to the reports directly in a Web browser by following the http://hostname/httpzipreport/report.htm convention.
You should also review our httpZip Evaluation Guide for third-party tools that will independently confirm compression is taking place correctly.
How can I prevent outsiders from accessing httpZip's online reporting or using it to fingerprint my server?
There is an IP address list accessible from the Reporting tab that will allow you to restrict access to the online reporting mechanism and, more generally, to prevent that mechanism from disclosing the fact that you are running httpZip or, by extension, IIS.
By default, any browser can access the report for a site with httpZip turned on by following this convention: http://hostname/httpzipreport/report.htm. You can control who accesses the reports by entering allowable IP addresses on the Reporting tab. Once the "Permit All" checkbox has been disabled on this tab, anyone who attempts to access the online reports from an IP address not on the allowed list will receive a 404 error. This is intentional: it makes it appear as though httpZip is not running on your Web server at all, which matters because the reports divulge the directory/file structure of your site. Please consider this feature carefully when configuring httpZip for the security of your Web site or application.
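The access-control behavior described above amounts to an allowlist check that fails closed with a 404 rather than a 403. A sketch of that logic (the function name and network-list representation are our own, not httpZip's configuration format):

```python
import ipaddress

def report_status(client_ip: str, allowed_networks, permit_all: bool = False) -> int:
    """Return the HTTP status for a report request: 200 if the client is
    permitted, otherwise 404 -- not 403 -- so the very existence of the
    report (and of httpZip) is not disclosed to outsiders.
    (Illustrative sketch of the documented behavior.)"""
    if permit_all:
        return 200
    ip = ipaddress.ip_address(client_ip)
    if any(ip in ipaddress.ip_network(net) for net in allowed_networks):
        return 200
    return 404
```

Returning 404 instead of 403 is the design choice that prevents fingerprinting: a 403 would confirm the resource exists, while a 404 is indistinguishable from a server that never had the report at all.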
httpZip may be causing trouble or is incompatible with something on my Web server. How can I troubleshoot the issue beyond httpZip standard reports and Port80's online compression tool?
To improve our ability to diagnose problems, we have included a debug mode in httpZip with a diagnostic report. Please contact Port80 Software support to access the debugging facility and receive any help you may need.
Does httpZip work with other Port80 Software products?
Yes, it does. CacheRight is an ideal companion to httpZip if you are interested in additional bandwidth and speed improvements. For a complete code optimization solution, check out w3compiler.
I am testing a few different compression tools at once. Is there any issue to be aware of here?
Many compression tools, both hardware and software, do not play well together and can cause collisions and other issues when used simultaneously to compress a response.
Make sure to disable compression (at every site level at which it has been turned on in the past), or even uninstall any other compression tools, before you evaluate httpZip. You may also want to visit the default IIS compression interfaces and turn everything off before testing httpZip. If you decide to stop testing httpZip and move to another tool, such as ZipEnable for IIS compression management, make sure everything has been disabled first, or you may find that you cannot get compression to work, or you may even crash your Web server. Please contact Support if you have any questions related to evaluating HTTP compression.
Does httpZip work with Akamai products?
Akamai has developed the world's largest content distribution network, essentially offloading requests to a local server for bandwidth-intensive files like images and videos (or any file the Akamai client wants to offload to their network).
For example, a user/browser in Singapore requesting content from an origin server in San Diego has to traverse many hops on the Internet to download the content for that site. However, if the San Diego Web server is on Akamai's network, the request can be answered by a closer Akamaized server in Singapore that has essentially pre-downloaded the content from the origin server in San Diego. The result? The Singapore browser gets that page much faster than before, when the request had to be routed all the way to San Diego.
Many of the Internet's largest sites rely on Akamai for speed, but there has always been a catch. Akamai's network has not been compatible with origin server HTTP compression. Port80 Software has invested the time to solve this problem with httpZip. Our ISAPI compression solution for IIS plays well with the Akamaizer filter that is required on their clients' origin IIS servers - the result is that you can now compress the text output, dynamic and static files, that Akamai's network usually does not optimize and still leverage the offloading benefits of their network.
IIS is included in or runs in conjunction with many Microsoft products. Will httpZip work with SharePoint, ISA, OWA and versions of IIS in other Microsoft products?
Yes, we support SharePoint, Outlook Web Access on Exchange 2000, and versions of IIS embedded in products like Small Business Server.
Microsoft's Internet Security and Acceleration Server (ISA) running as a gateway to httpZip compressed IIS servers requires a bit of setup, but you can make this work easily.
Does httpZip interfere with other third party IIS products, or vice versa?
httpZip has been tested extensively for interference with other IIS enhancements, and particularly given the memory and CPU utilization improvements introduced in version 3.0, you should be able to deploy httpZip regardless of what other third party tools are also on your IIS servers.
However, since every environment is different and every software interaction is impossible to predict, httpZip and all Port80 Software products come with technical support. Should you encounter problems, we will work with you to find a solution.
I am having trouble using httpZip with another ISAPI filter that rewrites URLs (like ISAPI Rewrite). Is there a way to make these two filters interoperate successfully?
In most cases, yes. Both commercial and custom-coded ISAPI filters that do URL rewriting can interact badly with httpZip, but the problem is almost always attributable to the order in which the two filters operate on the request. If, like httpZip, the URL rewriting filter has a "High" priority, the problem can often be solved by placing the URL rewriting filter above httpZip in the Internet Services Manager's ISAPI Filters tab and restarting IIS. In the case of a custom-coded URL rewriting filter, it may sometimes be necessary to change that filter's code to alter its priority, or even to move its decision logic to an earlier event notification. However, the problem nearly always has a straightforward solution -- please contact us if you need help finding it.