httpZip 3.8.4 Now Available

Posted: March 6th, 2008
Filed under: Uncategorized

Minor releases rarely get headlines, but why not? They are important, too!

This interim release of httpZip brings important changes for improved reporting and swats a few bugs, so if you are a Zip customer or just checking out HTTP compression, download the new bits. And get ready for a major httpZip upgrade in the future, as 64-bit support is on the way. We are working on getting the code out to our beta testers as soon as we can.

So, if you aren’t on the Zip bandwagon yet, remember that even with massive broadband penetration, your Web users always want more speed.

More to come,


New Web Security Tool: ServerDefender AI

Posted: October 22nd, 2007
Filed under: Uncategorized

Hello there,

We have been mentioning a new Web app firewall since this spring at TechEd, and we are happy to announce that the new ServerDefender Artificial Intelligence (or ServerDefender AI for short) Web application firewall for Microsoft IIS Web servers (and the app layer) is ready for your review at our site!

Building on the security layers of defense from ServerMask and LinkDeny, ServerDefender AI offers solid attack signatures from SANS, OWASP and our own research to protect against the myriad attacks we have all come to know and loathe: SQL injection, buffer overflows, cross-site scripting (XSS), cross-site request forgery (CSRF), directory traversal, zero-day, brute force, dictionary, denial of service and others (here is our new Port80 review of Web app attacks and countermeasures).  ServerDefender AI also employs a learning engine that maps your normal traffic and then uses that data to detect and block anomalies. You can get involved in this training process or set the AI to train without your supervision. This Web app firewall not only protects IIS, but also your app server layer(s) like .NET, ASP, PHP, ColdFusion, Java server pages, etc., and of course your database’s precious gems as well.

Other key features include request throttling (controlling frequency of requests to a given page to block bots and other attacks), network-layer IP blocking (threats never even get to IIS), threat management options (a configurable framework to customize Web app sec), notification alert management (get paged, texted or even stop IIS in the event of an attack), detailed logging (you will be surprised how many hacker requests your site is getting right now), and much more.
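The request throttling idea is simple enough to sketch. Here is a minimal sliding-window limiter in Python (our own illustration with made-up limits, not ServerDefender AI's actual implementation):

```python
import time
from collections import defaultdict, deque

class RequestThrottle:
    """Sliding-window throttle: allow at most `limit` requests to a
    given page per `window` seconds from a single client."""

    def __init__(self, limit=5, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # (client_ip, path) -> timestamps

    def allow(self, client_ip, path, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[(client_ip, path)]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # looks like a bot hammering this page
        q.append(now)
        return True
```

With, say, `limit=3, window=1.0`, a fourth request to the same path inside one second is denied, while a request arriving after the window has passed is allowed again.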

We kept the price low, at $649.95 per IIS instance, for a Web app firewall that works, and this is the first tool from Port80 Software to offer centralized deployment (install/activate) and management from a single console.  ServerDefender AI will even stop IIS in the event of a severe attack (based on your preferences). Check out ServerDefender AI today, and let us know what you like and what we can change to make this security solution more robust.

At the same time, we are working on a sister Web app firewall, ServerDefender Vulnerability Protection (VP), that will focus on input sanitization and attack vectors like error information leakage — adding effective Web app security with fewer frills than ServerDefender AI, but at a lower price (we are targeting $349.95/server for ServerDefender VP).

We appreciate your interest and feedback on ServerDefender AI and IIS security. Let’s make it a bad day for hackers, indeed.

Port80 Software

P.S. We also have a current promotion ($600 down from $850) on the ServerMask ip100 security appliances for intrusion defense and anti-reconnaissance.  Check out this deal for better network-edge defense that works in tandem with Port80 origin server security tools (and your existing hardware firewall and IPS solutions)!


Prognostications Galore: Web 3.0 and the Rise of an Interplanetary Internet

Posted: October 22nd, 2007
Filed under: Uncategorized

We are all for futurism here at Port80. 

It is important to look ahead to what is coming, and to make dreams become reality.  So, when Vint Cerf says we will have an Internet capability in space by 2010, it is worth noticing…  This type of vision makes sense, given the success and number of unmanned, robotic experiments taking place in our solar system right now.  Pretty cool stuff.
But, when we start to hear “Web 3.0,” we have to laugh.  Ask 10 smart folks in our industry about Web 2.0, and you are liable to get 10 different definitions.  It is less than a decade since the golden days of the Dot Com MOAB, but with all the real work we have to do to serve complex Web apps today, do we need more snake oil sales blabber?

Have a good one,


Web Server Wars Religious, Not Productive

Posted: August 22nd, 2007
Filed under: Uncategorized

Last week, we released our updated surveys of Fortune 1000 Web sites and the Web technology they use to deliver their sites.

Port80 Software has been conducting this survey since 2003, when we felt there was too much negativity out there about Microsoft IIS.  Apache, long loved in the open source community and more widely used than IIS, and the Apachephiles themselves were always kicking sand in IIS’ face.  So, IIS’ ongoing lead in our Fortune 1000 survey as the Web server of choice (it still leads with 55% share as of July 2007) has been a kind of counterbalance to Netcraft’s surveys, which promoted the idea that Apache is so much more widely used on the Internet than IIS — so much so that you needed to have your head examined if you were still running IIS.  Or so the headlines told us…

Recently, IIS has been adding sites in Netcraft’s survey relative to Apache, gaining on the open source superstar, and this has upset folks in that camp.  Having blasted IIS for years, it must sting a bit to have the tables turned.  Our latest survey only strengthens the argument that IIS is on the rise, and the much-anticipated IIS7 release in Windows Server 2008 probably won’t help the numbers for Apache in the future.

But this story is really all old news.  True technologists know that it is not so much the platform you are building upon, but rather the Web site or application you are building, that makes the difference.  Yes, we are an IIS shop, and yes, we have stoked the fires of this somewhat religious battle for Web server supremacy, but at heart we know that IIS and Apache offer two different ways to serve Web content, two different ways to skin the Net cat, so to speak. Also, it is important to note the small but steady rise of alternate Web servers in the Fortune 1000 survey, which demonstrates that IIS and Apache are not the only players here.

Port80 Software was recently quoted as saying that IIS is more difficult to tune and manage than Apache — we do not believe that.  Rather, it is a common perception among those unfamiliar with IIS, or already in the Apache camp, that “Apache is more secure” or easier to administer.  From Port80’s perspective, it is not what you serve with, but how well you configure it, add on new functionality, and work to solve tech and business problems.

We will keep producing our survey, as long as Netcraft is out there, to provide an alternative perspective.  But that’s all it is — one more slice of a very complicated Internet, one more story of technology in use, one more stat in your inbox.

Back to work,



LinkDeny 1.1 Released with Some Fixes and Features

Posted: August 22nd, 2007
Filed under: Uncategorized

We released LinkDeny 1.1 last week with some fixes and new features worth the mention.  For customers and test downloaders, this is a freebie update:

Changes in LinkDeny version 1.1 (8/2007):

1. Updated GeoIP.dat file for most current IP address and geographic data.
2. Changed the type of HTTP redirection used by the Redirect action from temporary redirects (302) to permanent redirects (301).
3. Added a feature to update IIS logs with the proper HTTP status code when a LinkDeny action is taken, such as a 404 error response or a 301 redirection response.
4. Fixed various bugs in the Time Limit Test, both functional and UI.
5. Fixed UI bug that caused crash when many LinkDeny rules were loaded in the Settings Manager.

Also, we are working hard to release ServerDefender Artificial Intelligence (AI) in the next few weeks, the first of our two new upcoming Web app firewalls…

More to come,


ISAPI Install and Uninstall Help

Posted: July 2nd, 2007
Filed under: Uncategorized

Sometimes, customers ask us, “Why do you spend so much time explaining installation?”  It’s a valid question.

Most software companies do not focus on installation as much as we do at Port80 Software, but then again most software in the world does not plug into the IIS Web service…

You know we are the biggest fans of IIS around, but the 2.0 ISAPI interface to the 6.0 IIS Web server software has its limitations. Installation in the face of multiple third party filters and gracefully stopping IIS to make sure an ISAPI can even be installed are chief among them…

Long story short, there are many ways to fail in any ISAPI installation.  Our new Install Notes page has some updates to make it easier to install and even uninstall Port80 Software tools, but these tips should come in handy for anyone trying to install any ISAPI filter.

We look forward to your feedback, and hope these tips make any ISAPI installation a bit smoother for you!



Web Security Threats, General to Specific (personally, we like the details)

Posted: June 29th, 2007
Filed under: Uncategorized

We came across a business-focused article on Web security today.

Here’s an excerpt that caught our attention:

Pete Boden, senior director for MSN and Windows Live security, echoes the views of many longtime executives. He argues that a lot of application security problems boil down to the same fundamental source: data input; that is, what people type into an application. Tightly control what can or can’t be entered–or “validate” in industry parlance–and you can eliminate the major access point for security breaches.

“If you classified Web vulnerabilities and took out all of those that are related in some form to input validation, I think you’d have a very small number of vulnerabilities left,” he said. “I contend that 80 percent of the vulnerabilities that we see are input validation errors.”


The Microsoft answer (better development tools) and the industry-standard answer from the article (better industry cooperation, better-trained developers) are all well and good, but while we wait for those utopias to arrive, there is a rapidly growing amount of vulnerable “Web 2.0” code getting deployed.  This is where the upcoming ServerDefender Web app firewalls from Port80 Software will help — one of their key features is input sanitization, covering for Web developers who should be focused on functionality (you guys have enough to worry about) — and keeping out the hackers looking for a way in at the same time…
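Boden's "validate" point is worth making concrete. A minimal allowlist-validation sketch in Python (the field names and patterns are hypothetical, and a real firewall like ServerDefender does far more):

```python
import re

# Allowlist patterns per form field. Anything that does not match is
# rejected outright, rather than trying to strip out "bad" characters.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$"),
}

def validate(field, value):
    """Return True only if the value matches the field's allowlist."""
    rule = FIELD_RULES.get(field)
    if rule is None:
        return False  # unknown fields fail closed
    return bool(rule.fullmatch(value))
```

The key design choice is the allowlist: instead of enumerating attack strings (an endless game), you describe the small set of inputs that are legitimate and fail everything else closed.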

A more general critique of the article (and of articles of this type) would be that the Web 2.0 talk seems pretty airy and uninformed.  For example, there is no mention of security issues with popular Ajax libraries, issues that affect many sites specifically, but for which there are current solutions as well (see this 200 OK post).

Instead, we get a lot of business-analyst-speak about whether MS or Google or Yahoo will do the right thing.

Do we have time to wait for Web security to standardize from the top players down?  Or should we fight the good fight now with the tools on market, and those to come soon like ServerDefender?  What do you think?

More to come,


Port80 at TechEd 2007: The IIS7 Cometh (In Force)

Posted: June 15th, 2007
Filed under: Uncategorized

One of the coolest new features coming in Windows Server 2008 (formerly Longhorn Server) isn’t really a feature — it’s a whole new version of Windows.

The feature is called “Server Core”, and it will only take one-sixth of the disk space of a normal Windows 2008 installation. Designed to not need as many patches or hot fixes, “it’s a version of Windows that does not, in fact, use windows”, but rather leverages the command line for rapid administration and management.

When Port80 Software arrived at Microsoft’s TechEd conference last week, we had no idea that IIS7 was going to become the lucky number seventh “Server Core” installation option in the upcoming Windows Server 2008 operating system. This designation finally puts IIS on a level with Windows server core features like Terminal Services, Network Access Protection, Virtualization, Server Management and Backup, and Server Core/BitLocker, and is designed to get Internet Information Services/IIS Web servers up and running quickly and securely in a command-line-only environment.  This was announced in the TechEd keynote address by Microsoft’s senior vice president Bob Muglia, drawing intense applause from the crowd.

Some of the Port80 Software team in the audience fainted…

Microsoft IIS, long the “red-headed stepchild” of Windows, has informally become one of the most popular and widely deployed Web servers delivering the World Wide Web.  Now, with IIS7 formally becoming a Server Core player, the news could not be better for customers. IIS Product Manager Brian Goldfarb said at TechEd that this will effectively cement IIS as a principal feature of Windows Server into the foreseeable future.

Projects like Windows Communications Foundation (Indigo) and lingering bad religious wars over the Apache vs. IIS choice had left the impression in some folks’ minds that IIS may go the way of the dodo bird and be replaced by other systems. This ain’t happening, and as IIS’ Brian Goldfarb said, “You have to have Port80.”

Of course, Brian:  If you are on Microsoft IIS Web servers, you got to have Port80 Software!

Actually, Brian Goldfarb said, more precisely, “You have to have port 80,” and IIS is HTTP/HTTPS on Windows.

We at Port80 are excited that IIS is here to stay and has been elevated from that Web server “everyone gets for free” in Windows to the world class, core server role that we all know Internet Information Services/IIS is in our real-world deployments of Web sites and applications.

Kudos to Microsoft for boldly keeping a product brand that has often been incorrectly attacked in the press and maligned by zealous Apache-philes, yet remains the product most corporations rely on to deliver Web content every day.  With new extensibility and modularity features baked into IIS 7, it is only going to get better and better.

Port80 already demoed ServerDefender, the first IIS 7 Web application firewall, at TechEd 2007, and we will have our current tools ported to IIS 7 and Windows Server 2008 by early 2008, just in time for the next Windows server OS.

200 OK to that!

More to come,
Port80 Software


Port80 at TechEd 2007: Drum Roll for the XBOX 360 Winner!

Posted: June 15th, 2007
Filed under: Uncategorized

Thanks to everyone who stopped by Port80 Software’s booth at TechEd 2007 in Orlando last week.

We delayed our drawing on-site, as we had so many people leave business cards at the booth, and we wanted to get everyone into the drawing.


Drum roll…. and the winner is:

Jason Cornellier
Ford Motor Company


Jason, we will be sending your brand new XBOX 360 to you next week!

Everyone else, thanks so much for visiting Port80, and make sure to take advantage of your 20% TechEd discount on Port80 Software tools.  Also, let us know where we can help you directly or indirectly with Microsoft IIS Web servers and HTTP solutions!

Best regards,
Port80 Software


Scaring Up Some Traffic

Posted: May 15th, 2007
Filed under: Uncategorized

Sensationalism gets readers and “clicks” no matter the medium, but it can be highly dangerous… Yesterday, we saw a wonderful example of what not to do from Ryan Naraine at ZDNet.

Now it appears Ryan has “uncovered” what headers and JavaScript can do and declared it to be trouble with a capital T.  You see he cites a “hacker” named “RSnake” who released a funny little tool called “Mr-T” which uses JavaScript to look at your browser’s basic characteristics and a server-side program to look at your HTTP request headers and IP address.

Holy information leakage, Batman!  BAMM!  They know your IP address!  POW! They know your screen resolution!

OK, please forgive the tiniest bit of sarcasm there, coming from people who actually believe in stemming UNNECESSARY information disclosure, but the stuff being disclosed here is actually the most innocuous stuff: information disclosure that is fundamentally NECESSARY to run a modern Web site.  Imagine the users out there who resort to feature removal within the browser or, even worse, who are frightened into buying “tools” to turn off needed HTTP headers and JavaScript. Sheesh. The effects of this OVERreaction will be staggering — watch your site’s caching go away, compression stop working, Flash support detection fail, and on and on — all from some unsubstantiated and misdirected FUD (fear, uncertainty, and doubt – so glad we took that business jargon class).

The worst part about this ZDNet article is that there IS ACTUALLY legitimate information leakage happening, but it is only ever so briefly discussed: sniffing your history (like what sites you have visited)… The idea was originally promoted by our pal Jeremiah Grossman at WhiteHat Security.  RSnake had done some hot new research here showing the use of CSS to do the same thing, and that is the news of the article. But the reality of general browser information disclosure has been known for years, and folks out there on them Internets have provided much better and more comprehensive examples of what you can do.  Some of that stuff is definitely not good to disclose…

More fascinating to us is that this idea is a decade old… it is called browser sniffing and capability detection.  In fact, one U.S. company makes a product that uses this data for legitimate means, and thousands of Web professionals have done this themselves for years (Port80 folks included).  And PS: how do you think we know whether to compress, cache or block a request, anyway?  A big part of it is browser detection, plain and simple.
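Capability detection in this benign sense is tiny. A hypothetical sketch (our illustration, not httpZip's actual rules) of deciding whether to compress a response from the very request headers the article frets about:

```python
def should_gzip(headers):
    """Decide whether to gzip a response based only on what the
    client itself sends -- the ordinary use of 'leaked' request data."""
    accept = headers.get("Accept-Encoding", "")
    ua = headers.get("User-Agent", "")
    if "gzip" not in accept.lower():
        return False
    # Old Netscape 4.x builds advertised gzip but handled it poorly,
    # a classic reason for User-Agent-based exceptions. (IE sends a
    # Mozilla/4.x UA string too, so it is excluded from the exception.)
    if "Mozilla/4." in ua and "MSIE" not in ua:
        return False
    return True
```

Turn off those headers client-side and rules like this degrade to "never compress": exactly the caching-and-compression breakage described above.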

We love ZDNet, we really do, but lots of mainstream tech sites and even print magazines really seem to like to go to these security or hacker conferences and report on things without any verification.  They just don’t seem to get that up-and-coming “hackers” (or folks who want to start security consultancies) go to these conferences and say all sorts of wild things to get press and street cred (it’s the way of the jungle). We seem to remember other self-serving, fear-mongering information being fed to journalists in the mainstream press a few years ago, journalists who didn’t follow up on the facts for a different reason… Hmmm, can’t quite remember the circumstances and outcome of that, oh well…   : )

The unfortunate part of ZDNet’s reporting here is that there really is great stuff going on that isn’t reported from these conferences because it is too difficult for a person to understand in five seconds or is apparently unlikely to happen.  Pssst… this is the stuff that ends up being the cause of real security problems, but then again reporting on that might not sell too many clicks because it just ain’t scary enough!

Enjoy good health,


Great Minds Cache Alike

Posted: May 4th, 2007
Filed under: Uncategorized

Over at his very interesting blog, Yahoo! Web services guru Mark Nottingham has a terrific presentation he did at XTech 2006 called Web 2.0 on Speed [PDF].  He summarizes his basic insights as follows:

  • Static Web servers are fast.
  • Web Caches can help distribute applications.
  • A little bit of AJAX spice goes a long way.

Well, we at Port80 couldn’t agree more! — especially about the wisdom of techniques like publish-to-static and using HTTP’s built-in expiration-based caching.
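Expiration-based caching boils down to telling clients and intermediaries how long a response may be reused without revalidating. A minimal sketch of the relevant headers (illustrative values, not any product's defaults):

```python
from datetime import datetime, timedelta, timezone

def expiration_headers(max_age_seconds):
    """Response headers for HTTP expiration-based caching: the client
    (and any intermediary cache) may reuse the response without a
    round trip to the origin until the expiry passes."""
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age_seconds)
    return {
        "Cache-Control": f"public, max-age={max_age_seconds}",
        # Expires is the HTTP/1.0 equivalent, kept for older caches.
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }
```

For a published-to-static page that changes at most hourly, `expiration_headers(3600)` means repeat visitors and shared caches stop hitting your origin server at all for that hour.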

Of course, back when we were saying these things several years ago, in various fora, they seemed to carry a little less weight, and we recall getting quite a few blank stares from supposedly seasoned Web pros.  But then, we aren’t responsible for writing Web 2.0 specs and making Yahoo! Web services run right, so it’s understandable that the same or very similar ideas might not have had the same mojo.  We don’t really mind…

All in all, it’s just nice to have our point of view about Web performance confirmed by such a distinguished authority.

Have an excellent weekend!



Using LinkDeny for Powerful Web Link Redirection Management

Posted: May 3rd, 2007
Filed under: Uncategorized

If you have a Web site, you use Web redirects or redirection (HTTP 301 and 302) to save dead links, recapture and redirect traffic, and make URLs easy to type (while still getting browsers to deeper, long-URL content).  Most of the time, this is managed in source code by a developer (scope out the dead page, put in a redirect to the new link), and newer content management systems provide some basic redirection features.

So, where is the LinkDeny connection? You may not at first think that a security tool designed to protect against image and file leeching (and provide access control for sites) could ever be useful to manage traffic via redirects or add any value to the redirects you already have – the tool denies links, doesn’t it?

Well, if you look at how LinkDeny works, you will find that the granular access control of IIS Web server content, managed by details in the client or browser request (like IP address, referring URL, country or geographic location, length of user session, type of Web browser, existence of a cookie, or the HTTP request header type or its content), can be used to make redirection of traffic more intelligent than a simple code redirect or a CMS-based redirect.

Here’s how you can do redirections with LinkDeny — and really add some power, logic, and delegation to your redirection efforts. Our goal in the steps below is to redirect the example link to (these are not real sites or links but are just to be used for an example — we will show you a live example afterwards).  Let’s take a look at the steps and options to turn LinkDeny into a redirection Swiss Army knife:

1) After installation of LinkDeny, go to the LinkDeny Settings Manager, and clear all of the default rules, as they are security focused — you can keep them safely, but if you only care to redirect links, they are not necessary.

2) Add a new rule.  In the rule Add/Edit dialog, choose the selector type of Path -> Exact Match -> URL. Type in your match string (this is the part of the URL after your domain, after the first /, where you ID the link you want to redirect: /foo in this example). You can also use Wildcard to have one rule deal with match string cases like and; your match string for that would be /foo* in the LinkDeny rule. 

3) Set the Rule type as Deny Requests. Yes, it is counter-intuitive, but here we want to deny that traffic from getting to /foo and do something else with it…  redirect it!

4) Select the User-Agent test -> Edit Details -> Add.  We are using the simplest method to capture all traffic by selecting the User-Agent or Web browsers test, basically applying this rule to all requests for /foo. You could also use the Referer test to manage traffic redirects based on the referring site that sent the traffic, or even do HTTP Request Conformance Test to look for the presence of a specific header to judge what to do with the traffic.  That is one unique benefit of LinkDeny redirection: you can add some user-driven context to a redirection, because you may not want all users to be redirected to the same place.  In the general case, however, the User-Agent test is a good, all-purpose LinkDeny “test” for redirection.

5) Choose the Wildcard match option in the User-Agent test dialog, and type in “*”, without the quotes.  Here, we are telling LinkDeny to apply this rule for all browsers and search engines (engines do use the User-Agent header in search bot requests, so there is no search or SEO downside).

6) Select the Action Taken On Denial as Redirect, and in this case add in the target URL for the redirection (the link can be on your own site, or you could redirect anywhere with an HTTP or HTTPS link – your choice).

7) Save and Apply your new settings in LinkDeny.

8) Now, test the redirect link, which should make a request to the example redirect to…


At the time of writing this blog post, we wanted to provide a client example using LinkDeny for redirection management in production — from the National Council of Teachers of Mathematics’ Web site.  A new site design left some older links out in the cold, and they were generating errors (and hindering users from getting to the right place).  With LinkDeny on the job, those legacy links now go to the right pages.  Redirection accomplished!

Once you have set up one rule, from there you can either continue to use the Settings Manager GUI in LinkDeny to add new rules for each URL you want to redirect, or you can edit the Rules.ld file in the Web root of the site/virtual server with the redirection link… Just copy an old rule for the syntax for each URL you want to redirect, changing the match string and the target URL in the new rules.  This can be done remotely, just by editing the Rules.ld file, so there is no need to terminal/remote desktop into the server to change a rule — just edit it in the Web root or use FTP to swap out the Rules.ld file.  If you trust the marketing folks, you can even let them manage their own redirects (yeah, right: keep marketing folks off the server and code if you can, but it is nice to have the option to delegate control and let them manage redirection on their own; again, it is nice to have choices!).

For more help with LinkDeny, and specifically what all the codes in the Rules.ld file mean, please see the LinkDeny documentation or contact us.

There is only one current downside: all LinkDeny redirects are currently 302 redirects, which means they are temporary.  Best practices in SEO say that redirects meant to be permanent should be 301 redirects.  An update to LinkDeny will soon allow you to control this by rule…  For now, the redirects are fine as 302s, and search engines will follow the redirect, but Port80 will add 301 support to LinkDeny soon so that you have the control you need, long-term.

We did not think that many folks would want this type of granular control over redirection specifically, but we are happy to report LinkDeny is on the redirection job.  Please contact us if you have any questions on LinkDeny or anything else IIS and HTTP!

Best regards,


Replace GIFs with PNGs for Image Serving Performance

Posted: April 3rd, 2007
Filed under: Uncategorized

Andy King has developed a pretty convincing case for using PNG (Portable Network Graphics) images over GIFs to boost Web site performance.  They maintain the clarity of GIFs with smaller file sizes and have a number of other benefits.  Check out Andy’s complete analysis.

One other benefit of PNG files is that they can be HTTP compressed (or compressed further) at the Web server, with none of the issues associated with larger file sizes when you try to compress a GIF or JPEG.  Given browser issues, we have revised httpZip’s default settings to include compression of PNG responses for Internet Explorer 7 clients…  they can handle compression flawlessly for PNGs, per our testing at Port80 Labs.  Bitmaps have also always been good HTTP compression candidates…
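If you want to measure what further HTTP compression would buy for your own files, here is a quick sketch using Python's gzip module (your Web server would do the equivalent on the wire via Content-Encoding; results vary widely by file type, so it is worth testing your actual content):

```python
import gzip

def gzip_savings(data: bytes) -> float:
    """Fraction of bytes saved by gzip at its default level.
    Can be negative if the input is already fully compressed."""
    compressed = gzip.compress(data)
    return 1.0 - len(compressed) / len(data)

# Highly repetitive data, like markup, compresses very well;
# data that has already been compressed once gains far less.
html = b"<div class='row'></div>" * 500
```

Running `gzip_savings` on real files from your site (markup, scripts, images, bitmaps) gives you hard numbers before you turn compression on for a given content type.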

Have a good one,


I’m not down, you are!

Posted: April 2nd, 2007
Filed under: Uncategorized

Uptime is king. We at Port80 agree 100%. If we were professional sports players or coaches, we’d even agree 110% (if that makes any sense at all).  However, what exactly is Internet “uptime”, in the perfect sense? Can your server be “up”, and yet your Web site or domain be considered “down”?  Of course, any hiccup any ol’ place along the network between you and the observer or client makes you look down. Then consider that people may hit your site from all over the world — add it all up, and that’s a whole lot of potential for instability or perceived instability.

Of course, most site uptime observation (that you pay attention to) is done by your own monitoring system or maybe from fairly stable monitoring points on the Internet at large (if you employ some third party service).  Now, even in the latter case, you really don’t see what everyone sees: it is better than just pinging yourself, but even having outsourced monitoring doesn’t cover everyone out there who could access your site.  You can’t reliably see what some random pocket of some ISP or phone company is doing to its customers in Akron, Ohio because some of their routers went crazy, unless you are in Akron. What about the DNS server that crashes at a local big company?  Maybe there’s a misconfigured proxy at the apartment building with the shared T1? Or your neighbor’s “borrowed” wi-fi goes offline mid-request? On and on, the possibilities add up, leading to an old saying in a new context: “Never will there be a day on the Internet that there won’t be some problem some place.” Of course, this is a safe statement, as you could say the same for electrical power. It really does go off (not just when there is a solar storm) and much more often in some parts of the world than you might think.
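The "whose perspective?" problem is easy to see in miniature. A toy sketch (our own illustration, not any monitoring service's methodology): the same probe results yield different availability numbers depending on the vantage point you tally.

```python
from collections import defaultdict

def availability(probes):
    """probes: (vantage, ok) pairs from distributed monitors.
    Returns per-vantage availability plus the overall fraction --
    the same site can look 'up' from one vantage and 'down' from another."""
    per = defaultdict(lambda: [0, 0])   # vantage -> [successes, attempts]
    for vantage, ok in probes:
        per[vantage][0] += int(ok)
        per[vantage][1] += 1
    total_ok = sum(up for up, _ in per.values())
    total = sum(n for _, n in per.values())
    return {v: up / n for v, (up, n) in per.items()}, total_ok / total
```

From a backbone monitor the site may be 100% up while a troubled regional ISP sees 50%, and the "overall" number lands somewhere in between, which is exactly why the vantage mix matters.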

Now, why all this ranting and raving about uptime, and trying to understand from whose perspective we should measure this stuff? Well, a relatively new social monitoring service for servers recently released a report about popular site downtime that we found quite interesting.  As you will see, even the big guys have a few hours of downtime, at least from the point of view of the service’s members’ monitoring.  Our question would be what a less distributed environment says about downtime, as well as what the monitored sites themselves have to say about it.  We hazard a guess that the numbers won’t be quite the same from each network perspective.  We also hope that the distributed service isn’t divergent from backbone and local monitors because, if so, the “I’m not down, you are!” concept really does have merit.

All kidding aside, it is clear that end-user perspective matters, but how many users “down” makes a site really “down”?  We guess it depends on who and where those users are versus what is happening to those users’ network routes during the failure… or something like that.  Let’s hope that Pingdom can pick up a large and wide user base, because this really could help show the stability/instability of the Web to everyone interested. That’d be us.

– Port80


John McCain Gets “Link Denied”!

Posted: March 28th, 2007
Filed under: Uncategorized

Well, we exaggerate just a little… Presidential hopeful John McCain didn’t purchase a copy of Port80 Software’s LinkDeny product, but he did encounter firsthand what leeching and anti-leeching are all about this week.

Mr. McCain (or, more likely, his staffers) attempted to reach out to the Internet youth of the country by setting up a MySpace page for the candidate.  They decided to use one of the many templates provided by other MySpacers to make his page cool (or is that kewl?).

No matter.  What McCain’s team didn’t do was set up the borrowed MySpace template properly, or with the proper credit to the template’s creator (thanks to Mike for educating the masses while he link denied McCain!).  Instead of copying the files used in the template to one of McCain’s own servers, his people linked directly to the files on the template creator’s site.  Whoops.  Now, at that point, all of Mr. McCain’s new MySpace friends began leeching bandwidth from the template creator’s site. Eventually, the designer figured out what was going on and decided to make an example out of the politician, changing some “straight talk” into some “straight egg on the face” by modifying the images that were being leeched, hotlinked, etc. by McCain’s new MySpace page.

The basic replacement was similar to how LinkDeny works: the offended MySpace template designer simply served a different image to people who hotlinked directly to his server via McCain’s MySpace page (and no laws were broken here: your images are your images, even if they change while someone is borrowing or stealing them). 

If you understand this, you know that John McCain didn’t really get “HACKED” here… McCain was really “LEECHING” or “HOTLINKING,” and he got caught with his hand in the bandwidth cookie jar… naughty, naughty, Mr. Senator.  Anyway, in honor of Mr. John McCain’s likely unintentional gaffe, Port80 is offering deep discounts to both the DNC and RNC on LinkDeny site licenses… just in time for the political season. 

We are non-partisan here at Port80 Software, except when it comes to Web servers!

Port80 Software 

No Comments »

Shared Hosting’s Death? A bit premature, but…

Posted: March 26th, 2007
Filed under: Uncategorized

We just came across a nice discussion of the shared hosting world, and how long it may last.  The author may be aggressive in calling for the death of all shared hosting in the near-ish future, but we have been facing a unique shared hosting challenge related to his pain points ever since we started developing ISAPI filters for Microsoft IIS Web servers. 

In the past year, Port80 has been working on .NET module ports (IHttpModule and IHttpHandler implementations) of various Port80 products.  With ASP.NET on current IIS 4/5/6 Web server deployments, these .NET modules will be handy for deploying Port80 Web solutions in shared hosting environments.  One limitation of our current products, deployed as ISAPI filters, is that they must be installed with admin access, and most of the tools are installed at the global “Web Sites” level, above the individual sites/virtual servers in IIS.  From a practical business point of view, this makes installation nearly impossible without special help from the ISP, as hosters would in most cases rather not bother with one-off components on one server for a single shared hosting client, especially if that component could, even remotely, affect other clients’ sites on that box.  And yes, like it or not, most hosts FEAR the ISAPI.  They should learn to embrace and extend, no?  Well, they won’t have to worry with future Port80 Web solution options, as there will be .NET modules that bring 80-90% of the functionality of most current Port80 tools into a shared hosting context, so long as ASP.NET is working on the server.

The only real loss here is that these .NET modules will not see every Web request that comes into an ASP.NET site on IIS 4/5/6; they will only be able to operate on file types mapped through the ASP.NET interpreter.  That can be a performance hit when you consider Port80 tool ports like compression and cache control, but isn’t life about such trade-offs? : )

Microsoft is aware of this limitation, and the IIS 7/Longhorn Web server will take .NET modules to a new level, replacing ISAPI with IHttpModule and IHttpHandler components that can be deployed easily per site and are modular, supposedly “safer” additions to the Web server (ISAPI is safe if you know what you are coding and spend some quality time in this Microsoft API backwater).  Of course, Longhorn Server will have to ship first (2008-ish) before these tools see wide use outside of the test lab, but it is coming, and Port80 Software will offer most of its current tools, plus a few new surprises, for IIS 7. 

More to come on .NET and IIS 7, stay tuned to this Bat Channel…

Happy Monday,

No Comments »

Ajax Security: A Server-Side Solution?

Posted: March 15th, 2007
Filed under: Uncategorized

There has been a lot of discussion lately in the Ajax community about the security implications of using raw JavaScript or JSON as the response in an Ajax application.  The primary concern is that, in some instances, third-party sites can gain access to private data in this format using a technique called Cross-Site Request Forgery (CSRF).

CSRF is not a new idea, nor is it unique to Ajax, but let’s briefly summarize how the technique may be used against an Ajax application.  Say a user logs into a private site and authenticates, which issues a cookie allowing them to fetch content from various protected URLs.  Later, the user unknowingly visits an evil site.  This evil site uses a <SCRIPT> tag to include a JavaScript-format resource of interest from the private site.  Because the include runs in the user’s browser, and the user is still authenticated against the private site, it has no problem reaching the URL of interest, since the access cookie is sent along with the rogue request.  Once the evil page has received the data of interest, it makes a silent request back to its own server, typically using an XMLHttpRequest (XHR) object, and passes along the stolen data: mischief accomplished, victory for the evil site!  Booooo.  Hiss.

Now, given this high-level overview, let’s look at the approach a bit more deeply.  First, we note that the evil page must make use of the <SCRIPT> tag to steal the data.  Why not use the XHR object instead?  Well, as even neophyte Ajax programmers know, the XHR object must obey the same-origin policy, meaning that it cannot access resources on domains other than the one from which it was served.

It should be noted that there are limitations to what can be stolen easily via the <SCRIPT> tag because any data that is included this way must be something that will run on its own when evaluated by the browser’s JavaScript interpreter. For example, if the requested URL on the secured site returns:

var credentials = {"username":"testuser", "password":"testpassword"};

then the evil script can easily parse credentials and send the relevant data wherever it wants. In addition, if the file returns

processCredentials({"username":"testuser", "password":"testpassword"});

then the evil script can create a function called “processCredentials” and again easily have access to the data.

Given that the response must be executable, it has been suggested that if a site returns JSON instead of script like so:

{"username":"testuser", "password":"testpassword"}

it will then be safe, because nothing useful happens when the response is evaluated.  However, Joe Walker disputes this and suggests how one might still access the data in this scenario.

So, how do we really solve this problem, and why does Port80 Software think a server product like LinkDeny has anything to do with Ajax security issues? (Yes, we had to go there.) Well, it turns out that the problem the Ajax folks are encountering is, at its heart, a broad-based one: access limitation without authentication. The initial aims of LinkDeny were to stop image leechers and to give Web administrators finer-grained access control over various objects on their Microsoft IIS Web server(s). The Ajax security problem we just described boils down to the fact that the evil site should not be able to access the file that produces the JavaScript response, or indeed any JavaScript resource, directly. The bad guys should be blocked because they are not accessing the resource from an allowed URL. This idea is no different from a user not being allowed to hotlink directly to an image or other object on your server in an attempt to steal your bandwidth or content…

The primary mechanism to thwart direct access to objects, including scripts, on a server is to check the HTTP Referer header. Basically, in order to be allowed to fetch certain objects, a request must carry a valid Referer header matching the serving domain. With such a restriction in place on the secured server, the evil site’s page would have to forge the header in order to conduct the nasty CSRF attack. With plain <SCRIPT> tag includes (as opposed to XHRs), this header cannot be modified from JavaScript, and thus the bad guys are foiled.
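As a sketch of the concept (not LinkDeny’s actual implementation or API), the Referer check boils down to comparing the header’s host against an allow list. The host names and the strict “no Referer means deny” policy below are illustrative choices:

```javascript
// Minimal Referer-based access control check for script/JSON endpoints.
const ALLOWED_HOSTS = ['example.com', 'www.example.com']; // placeholders

function refererAllowed(refererHeader) {
  if (!refererHeader) return false;    // no Referer at all: deny (strict)
  try {
    const host = new URL(refererHeader).hostname;
    return ALLOWED_HOSTS.includes(host);
  } catch (e) {
    return false;                      // malformed header: deny
  }
}

console.log(refererAllowed('https://example.com/app.html'));    // allowed
console.log(refererAllowed('https://evil.example.net/trap'));   // denied
```

A real deployment would also decide how to treat requests with no Referer at all (some proxies strip it), which is a policy trade-off between security and reach.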

Now, even if you could somehow spoof this header (as our image-leeching friends do occasionally), there are plenty of other countermeasures you can deploy. You can issue special time-sensitive access cookies, or use short-lived unique URLs that must be presented when requesting the objects. Such measures make it much more difficult for the evil site to lay a trap that stays active long enough to get what it wants…

So, to help plug this script-related security hole at the server level, don’t let any domain link to objects such as .js files or URLs that generate JSON or JavaScript unless it is your own domain or one that is explicitly allowed. This is easily accomplished using Referer header checking (and other access control techniques with teeth), either with a product like LinkDeny for IIS or mod_rewrite for Apache.
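For the Apache side, the classic mod_rewrite recipe looks something like this sketch (example.com is a placeholder for your own domain; adapt the file extensions to your setup):

```apache
# Deny hotlinking of script/JSON resources unless the Referer is our own site.
# Note: this also blocks requests with no Referer at all; add
# "RewriteCond %{HTTP_REFERER} !^$" above the HTTP_REFERER condition
# if you want to permit those.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(js|json)$ - [F]
```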

Port80 Ajaxians

No Comments »

LinkDeny: Results for IIS anti-leeching and access control

Posted: February 26th, 2007
Filed under: Uncategorized

LinkDeny has been out on the Net for a few weeks now, and Port80 Software is seeing some interesting uses of the tool in real Microsoft IIS Web server deployments.  Why are admins and Web developers turning to LinkDeny?  Let’s check the top few reasons…

#1 People steal your bandwidth…  Jerks!

You (and/or your team) work hard to build a site or application, and the network infrastructure to support your traffic.  You have success in whatever you are serving (hey, good job!).  Then, seemingly at random, your bandwidth usage shoots through the roof, but you are not seeing any benefit from the increased traffic.  Curious, you start to examine logs and then discover the culprit:  someone has hotlinked the image for your product, and you are paying for the images they serve to get their orders.  You have been leeched, but how do you solve the problem?  Legitimate users need those images, but you want to stop serving the freeloader… what is the play?

#2 People slam your site with hotlink requests, creating a DoS-style attack… For shame!

OK, so you have been hotlinked by that guy, but the bandwidth cost is low, so no big deal.  A few days later, bandwidth goes up even more, and you simultaneously start getting warnings that your site is down or takes forever to load.  Nothing changed on your site, so what is going on?  Back in the server logs, you realize that you are getting bursts of requests for another image, this time from a blog post.  You Google around and discover the blog post is hotlinking to another of your images, but this time the requests are being pooled through an RSS feed and fed out to other sites.  Now you are not only paying for that bandwidth, but you have to increase your bandwidth limits with your ISP just to keep the site open for legitimate users, as these hotlinkers are hogging up the line.  This is getting annoying…

#3 People just bother you for no good reason… Sigh.

So, you upped the bandwidth limit, because you had to keep serving legitimate users those images – you need to be online for your business.  Still, you know those guys are hogging your bandwidth with hotlinks (and they never respond when you email or call with a “cease and desist” warning), so you let it go…  until your site is hacked, customer data is stolen, and you are in trouble.  After the dust settles, you discover that some kid in Russia got in through some cross-site scripting magic.  Damn, you don’t have any clients in Russia – why did these guys have to find your site and hack away? 

LinkDeny in the real world

How can you make sure this does not happen again?  Obviously, it is time to lock down the Web server – and to make sure that precious bandwidth dollars are being spent on legitimate requests, not on someone else’s HTML experience built with your served images.  This is why Port80 Software built LinkDeny: to defeat nefarious hotlinking cold, and to give you a bouncer at the HTTP layer who blocks the requests you don’t want (or need) while establishing access control for your IIS server.  Complementing firewalls and IDS security systems, the software creates an access control bubble around IIS – or, really, around an individual site – to allow fine-grained access control by request attributes like these:

  • IP address

  • Referring URL

  • Country or geographic location

  • Demographics

  • Length of user session

  • Type of Web browser

  • Existence of cookie

  • HTTP request header type and content

The flexible, rules-based system allows you to dream up just about any method to allow or deny access to content resources, making LinkDeny a powerful security tool for Microsoft IIS Web site administrators and management.
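To make the idea concrete, a rules-based allow/deny pass over request attributes might look like this toy evaluator. The rule shapes, names, and hosts are ours, invented for illustration, not LinkDeny’s actual rule format:

```javascript
// A toy first-match-wins rule evaluator over request attributes.
const rules = [
  { name: 'block-leech-referrer',
    match: req => req.referer && !req.referer.includes('mysite.example'),
    action: 'deny' },
  { name: 'block-images-without-user-agent',
    match: req => /\.(gif|jpe?g|png)$/i.test(req.path) && !req.userAgent,
    action: 'deny' },
];

function evaluate(req) {
  for (const rule of rules) {
    if (rule.match(req)) return rule.action; // first matching rule wins
  }
  return 'allow';                            // default: allow the request
}

console.log(evaluate({ path: '/logo.png',
                       referer: 'https://thief.example/page',
                       userAgent: 'UA' })); // a hotlinker gets denied
```

A production tool would of course evaluate many more attributes (IP ranges, geography, cookies, session length) and support log-only, redirect, and forced-404 actions, but the first-match-wins rule chain is the core shape.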

One client evaluating LinkDeny for bandwidth control on a medium-traffic site was able to block 2,600 requests using a combination of LinkDeny’s default rules… all in 24 hours!  The nice thing is that he was able to test LinkDeny in production without messing with real traffic by putting the rules into a “log only” mode…  a good approach when you are just starting to test anti-leeching and access control with LinkDeny…

Check out some of these default rules that ship with LinkDeny (this is not the full list, but you can get the idea for this feature: whitelist or blacklist whom you like, log it, redirect them, or drop the session with a forced 404):

  • Top Countries with Most Internet Users

  • Top Countries with Longest Surfing Internet Users

  • Top Countries with High Risk for Hacking or Fraud

  • Top Countries with High Risk for Phishing Attacks

  • U.S. Embargoed Countries (think Terrorist Watch List)

  • Top Blog Hosts (Blogspot, LiveJournal, etc.)

  • Top Social Networking Sites (MySpace, etc.)

  • Top Auction Sites (eBay, etc.)

  • Top Anonymous Surfing Sites (Proxify, The-Cloak, etc.)

  • Top Search Engines (Google, Yahoo!, etc.)

  • Top News Sites – General (NYTimes, Slate etc.)

  • Top News Sites – Local and TV (NYPost, Telemundo, etc.)

  • Top News Sites – Financial (SmartMoney, Bloomberg, etc.)

  • Top News Sites – Tech (Slashdot, InfoWorld, etc.)

  • Top Internet Portals (MSN, Amazon, etc.)

  • Allow common Web browsers only rule template

  • Deny images or video without user-agent header present rule template

  • Deny image or video without referrer present rule template

Another client is not concerned with blocking any one thing – rather, they want to block everything coming into a server except requests from one very specific IP…  Their firewall needs to be a bit more open, so they are using LinkDeny to allow traffic from their own network to reach the site, an extranet app, while blocking everything else…

Finally, here is an example of a client with no anti-leeching or access control issues who is using LinkDeny to redirect traffic from dead links, selectively sending the incoming traffic to different new pages based on the HTTP referring site…  very cool.
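That last use case amounts to a lookup from referring host to landing page. A minimal sketch of the idea, with an invented mapping and function name (not LinkDeny’s actual configuration), might look like this:

```javascript
// Pick a landing page for a dead link based on where the visitor came from.
const redirectsByReferrerHost = {
  'partner.example': '/landing/partner.html',
  'news.example':    '/landing/press.html',
};

function pickRedirect(refererHeader, fallback = '/index.html') {
  try {
    const host = new URL(refererHeader).hostname;
    return redirectsByReferrerHost[host] || fallback;
  } catch (e) {
    return fallback; // missing or malformed Referer: generic landing page
  }
}

console.log(pickRedirect('https://partner.example/article')); // partner page
```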

What do you think about LinkDeny, hotlinking, and IIS access control?  We look forward to hearing your comments.  If you have a chance to check out the evaluation guide and the documentation – and something does not make sense – just drop us a line so we can get your LinkDeny rules up and running on your Web server soon.


No Comments »

IT Cheat Sheets collection (and Happy 2007)

Posted: January 25th, 2007
Filed under: Uncategorized

Hello folks,

We have been out of commission on the 200 OK blog for a few weeks during the holidays, while we were preparing LinkDeny for advanced IIS anti-leeching (don’t be stealin’ my bandwidth) and access control (imagine a bouncer at the Web site/app layer way tougher than that guy at Studio 54).  This new IIS security tool has been baking for a few years, and at a San Diego Windows Server 2003 User Group meeting last week, we got some excellent feedback from So Cal’s finest admins: they loved the idea of blocking hacker requests and bots, controlling traffic, and protecting site content with one simple-to-use tool.  Check out LinkDeny this week or early next.

Oh, we almost forgot: here is a comprehensive list of IT cheat sheets, the product of some good Googling, with tips running from twisted-pair cabling Cliff’s Notes to character entities.  Enjoy!

Finally, we hope that you had an excellent holiday season (if you had the time to enjoy it!), and we look forward to working with you on Microsoft IIS and HTTP solutions in Lucky Number 2007.


No Comments »

New httpZip 3.7 with PDF and PNG Compression, Ajax Tweaks, and More Speed

Posted: November 30th, 2006
Filed under: Uncategorized

We have been finishing up work on the last update to httpZip for this year — version 3.7 — and the new httpZip advanced IIS compression software is now available for full download and for patching.

What’s New in httpZip 3.7?

1. Added default support for PDF (learn more) and PNG (learn more) file compression.
2. Added optimization to handle small and large Ajax files for better performance.
3. Added compatibility with the Windows Server 2003/IIS 6 kernel-mode cache (ISAPI filter will not interfere with kernel-mode cache operation).
4. Fixed various HTML-based reporting bugs.
5. Enhanced core ISAPI code for improved efficiency in CPU utilization and memory management.

You, our favorite users out there, have been requesting PDF and image compression, and we are happy to now support PDF and limited PNG compression.  These should be very safe to deploy (but will only show up in a full install – the patch will keep your former httpZip settings as is).  If you like, you can try adding “jpeg” to the image MIME types to compress, but our research shows that JPEG files usually get larger with HTTP compression (as do GIF files, for sure), so test JPEG compression before deployment.

We had some issues with small and large Ajax files and compression in previous httpZip versions – 3.7 resolves these issues.

In the past, if you were using IIS 6’s kernel-mode cache, httpZip would subvert this feature.  httpZip has been modified to be kernel-cache friendly, which should mean a speed gain for those using kernel-mode caching on Windows Server 2003.

Finally, we cleaned up a few bugs.  Let us know what you think, and please do remember to stop IIS before installing or patching.  The InstallCheck tool is also good to have around during your installation. 

Where can we help with IIS compression and httpZip?  We look forward to your comments, and we hope to release a few more HTTP compression surprises in 2007…

Best regards,

No Comments »