LinkDeny FAQ


I want to block access to the "images" directory from the entire web so that only my local sites can access it. How can I do this?

LinkDeny provides a number of ways to prevent "hot-linking" or "leeching" of your content, such as images. Which method you choose, and how many of them you employ, will depend on how strictly you want to enforce this content security policy.

The simplest way is to use a single LinkDeny rule containing a Referer test. Our online evaluation guide contains a fairly straightforward example of how to implement such a rule for all images on a given Web site: Evaluation Guide

This type of test is sufficient to make a LinkDeny rule that will block most casual hot-linking or leeching of your content. It is not, however, foolproof, which is why LinkDeny also provides more advanced functionality to enforce the desired policy more strictly.
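
To make the idea concrete, here is a rough sketch, in Python, of the kind of check a Referer test performs: protected content is served only when the Referer header points back at one of your own sites. This is an illustration of the concept only, not LinkDeny's implementation or rule syntax, and the host names are placeholders.

    # Conceptual sketch only; not LinkDeny's implementation or configuration
    # syntax. It shows what a Referer test effectively checks: requests for
    # protected content are allowed only when the Referer header points at one
    # of your own (placeholder) sites.
    from urllib.parse import urlparse

    ALLOWED_REFERER_HOSTS = {"www.example.com", "example.com"}  # placeholder local sites

    def referer_allows_request(headers):
        """Return True if the request's Referer host is one of our own sites."""
        referer = headers.get("Referer", "")
        if not referer:
            return False  # no Referer at all, so treat it as hot-linking
        host = (urlparse(referer).hostname or "").lower()
        return host in ALLOWED_REFERER_HOSTS

    # A request hot-linked from another site is rejected; one from our own pages passes.
    print(referer_allows_request({"Referer": "http://other-site.example/page.html"}))  # False
    print(referer_allows_request({"Referer": "http://www.example.com/gallery.html"}))  # True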

Beyond the simple Referer check illustrated by this example, you can apply even stricter enforcement of your policy by adding a Time Limit test to your rule. Time Limit tests come in two varieties: cookie-based and URL-based. The cookie-based test is less strict but requires no changes to your source code. The two types of Time Limit test can be used independently or together. For a detailed discussion of Time Limit tests, go to Using the Settings Manager and scroll down to the box entitled "Time Limit Test".
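
For background, the general idea behind any URL-based time limit is an expiring link that the server can verify. The sketch below shows one common, generic way such a link can be built and checked (a signed expiry timestamp); it is not LinkDeny's actual token format or mechanism, and the secret and lifetime values are placeholders.

    # Generic illustration of an expiring, verifiable link; not LinkDeny's
    # actual token format or parameters. A signed expiry timestamp is appended
    # to the URL when it is generated and checked again when it is requested.
    import hashlib, hmac, time

    SECRET = b"placeholder-shared-secret"  # placeholder value

    def make_time_limited_url(path, lifetime_seconds=300):
        expires = int(time.time()) + lifetime_seconds
        sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
        return f"{path}?expires={expires}&sig={sig}"

    def url_is_still_valid(path, expires, sig):
        if time.time() > expires:
            return False  # the link has expired
        expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)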


When uninstalling and reinstalling, are the rules we have set up preserved, or are they destroyed in the process?

The LinkDeny rules for a given site are persisted in an XML file called rules.ld, which is stored in the site's document root directory. The LinkDeny uninstaller will not remove this file. This means that you can completely uninstall LinkDeny and then reinstall it, and the prior state of your LinkDeny rules will be preserved.


We are attempting to stop several site scrapers that are using predictable methods to steal site content, and we need to create a rule to block them. How can we block these invalid bots without affecting valid ones such as Google and Yahoo?

Often the best way to use LinkDeny to protect content from undesired bots is to create an Allow rule that uses an "HTTP Request Conformance" test. This type of test lets you define what a valid HTTP request looks like in terms of one or more characteristics. For instance, you can require only certain HTTP verbs (e.g., GET and POST only) or HTTP versions (e.g., HTTP 1.1). But probably the most useful requirement you can apply is that a particular set of HTTP headers be present in the request.

You could, for instance, define an HTTP Request Conformance test that requires all requests to have non-empty User-Agent, Accept, Host, and Date headers. Naturally, the particular headers you choose to require should be based on the HTTP request characteristics of the bots you wish to block. The goal is to find a set of request headers that the "undesirable" bots do not supply but that normal browsers and "friendly" bots do supply. If you can find such a pattern in your IIS logs, you can then use LinkDeny to enforce it as a requirement for access to particular files.
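
As a conceptual illustration of what such a conformance requirement amounts to, the Python sketch below allows a request only if every header in a required set is present and non-empty. The header list is an example only, not LinkDeny's rule syntax; base your own list on what your IIS logs show the unwanted bots failing to send.

    # Conceptual sketch of an "all of these headers must be present and non-empty"
    # requirement. The header set is an example only; choose headers based on
    # what your IIS logs show the unwanted bots failing to send.
    REQUIRED_HEADERS = ["User-Agent", "Accept", "Host"]

    def request_conforms(headers):
        """Allow the request only if every required header is present and non-empty."""
        return all(headers.get(name, "").strip() for name in REQUIRED_HEADERS)

    # A bare-bones scraper request missing User-Agent and Accept fails the test:
    print(request_conforms({"Host": "www.example.com"}))  # False
    print(request_conforms({"Host": "www.example.com",
                            "User-Agent": "Mozilla/5.0",
                            "Accept": "*/*"}))            # True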

Another option, besides building a complete HTTP Request Conformance test, is to focus on the User-Agent header alone. An Allow rule can be constructed with a User-Agent test that not only requires the presence of this particular header (as in the HTTP Request Conformance test) but also requires it to match certain patterns, such as simple wildcard expressions or even complex regular expressions. In this case, you would examine your IIS logs to see if the misbehaving bots are failing to use sufficiently "normal" User-Agent headers (e.g., ones containing common sub-strings like "Mozilla", "MSIE", "Firefox", "google", and so on). An Allow rule with a User-Agent test requiring the User-Agent to match one of these well-known substrings might be sufficient to shut out those unwanted bots. Alternatively, if the bots in question are reliably sending a distinctive User-Agent header, you can easily block them with a Deny rule using a User-Agent test, where this time the pattern is designed to match the header sent by the unwanted bot.
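
Conceptually, the pattern-matching side of a User-Agent test works along these lines; the sketch below shows both a Deny-style check (the header matches a known offender) and an Allow-style check (the header must match at least one "normal" pattern). The patterns and bot names are illustrative placeholders, not LinkDeny's configuration.

    # Sketch of User-Agent pattern matching: a Deny-style check for known bad
    # bots, then an Allow-style check that the header looks "normal". All of
    # the patterns below are illustrative placeholders.
    import re

    ALLOW_PATTERNS = [r"Mozilla", r"MSIE", r"Firefox", r"Googlebot"]
    DENY_PATTERNS = [r"BadScraperBot", r"DataHarvester"]  # hypothetical offenders

    def user_agent_allowed(user_agent):
        if any(re.search(p, user_agent, re.IGNORECASE) for p in DENY_PATTERNS):
            return False  # matches a known bad bot (Deny-rule behavior)
        return any(re.search(p, user_agent, re.IGNORECASE) for p in ALLOW_PATTERNS)

    print(user_agent_allowed("Mozilla/5.0 (Windows NT 10.0) Firefox/115.0"))  # True
    print(user_agent_allowed("BadScraperBot/1.2"))                            # False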

In addition to these measures, bad bots can sometimes be controlled more effectively by combining several different kinds of tests. Two others that can be useful are the simple Referer test and the IP Address test. If, under normal circumstances, the content you wish to protect can be counted on to be requested with an HTTP Referer header, then you can require that header using a Referer test in an Allow rule. If the "bad" bots fail to provide the Referer, that is one more basis on which they can be denied access. Lastly, if these bots are coming from a particular IP address range, you can add an IP Address test to a Deny rule to explicitly block their access.
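
As a final conceptual sketch, an IP Address test used in a Deny rule amounts to checking whether the client address falls within a blocked range. The CIDR blocks below are documentation-only placeholder ranges, and the code illustrates the idea rather than LinkDeny's implementation.

    # Sketch of an IP Address test used Deny-style: block any request whose
    # source address falls inside a listed range. The CIDR blocks below are
    # documentation-only placeholder ranges.
    import ipaddress

    BLOCKED_RANGES = [ipaddress.ip_network("203.0.113.0/24"),
                      ipaddress.ip_network("198.51.100.0/24")]

    def ip_is_blocked(remote_addr):
        addr = ipaddress.ip_address(remote_addr)
        return any(addr in net for net in BLOCKED_RANGES)

    print(ip_is_blocked("203.0.113.42"))  # True  -> deny
    print(ip_is_blocked("192.0.2.10"))    # False -> allow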