AP Twitter Hack: Lessons and Prevention

Posted: April 25th, 2013
Filed under: IIS & HTTP


Earlier this week, the Associated Press’s Twitter account sent out a tweet announcing a terrifying attack on the White House. The tweet quickly spread throughout Twitter and sent the stock market spiraling.

However, as we now know, the tweet was fake, the result of the @AP account being hacked. As web security professionals, what we’re most concerned about is: a) how did the account get hacked, and b) how do we prevent it from happening again?



No Universal Solution to BYOD: Develop Your Own Strategy

Posted: April 4th, 2013
Filed under: IIS & HTTP


Ask any CISO about his or her topmost concern right now, and 7 out of 10 will tell you it’s bring your own device (BYOD). More than 70% of IT executives believe that companies without BYOD will be at a competitive disadvantage.

The importance of this topic can be gauged by the fact that in the last week alone, we have received calls from five client CISOs asking for our opinion on the subject. During the same period we conducted a security review of two organizations on behalf of a client, and both organizations had a BYOD policy.

BYOD surely has its advantages. Employees are happy because it gives them the freedom to use their own devices, which increases flexibility, convenience, and productivity. Companies are happy because it cuts the cost of deploying and managing sometimes hundreds of devices. It’s not surprising, therefore, that BYOD has become a natural favorite amongst both employees and employers. In fact, our President Hiren Shah was a visionary in this regard: he implemented BYOD way back in 2006, when he released a policy allowing select models of mobile phones to access corporate e-mail.


Two-Factor Authentication: Enough for Your Application and User Security?

Posted: February 28th, 2013
Filed under: IIS & HTTP


From our partners at Net-Square

Two-Factor Authentication, as the name suggests, refers to a form of authentication that requires a user to present two or more of the three authentication factors: “something the user knows” (e.g. password, PIN), “something the user has” (e.g. ATM card, smart card), and “something the user is” (e.g. a biometric characteristic such as a fingerprint). This is not a new concept for the masses in this era of online banking and internet trading. Two-factor authentication has become increasingly important for online applications because of ever-increasing hacking attacks.

Recently, a US-based company offering free cloud storage services was hacked: a few accounts were broken into, including one employee account containing user data files. This sounds dangerous, as hackers may get confidential data from account owners. Since the service was used worldwide, with companies storing important data online, the company decided to implement two-factor authentication to help prevent such attacks.
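As an aside on how the “something the user has” factor typically works: most services hand the user a shared secret, and an authenticator device derives short-lived codes from it. Below is a minimal sketch of the time-based one-time password scheme (TOTP, RFC 6238) that authenticator apps commonly use; the Base32 secret is a placeholder, and a real deployment should rely on a vetted library rather than hand-rolled crypto code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive the current time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret shared between the server and the user's device;
# both sides derive the same 6-digit code for the current 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```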


Application Security: Blacklisting or Whitelisting?

Posted: December 12th, 2012
Filed under: IIS & HTTP

Does the Blacklist Approach Work?

Traditionally, IT security is thought of from a threat perspective. It brings into focus thoughts of protecting applications, systems, and infrastructure from viruses, malware, and other threats posed to IT assets. Therefore one is always focused on identifying new threats and making sure they get integrated into the “Blacklist,” an “allow all, except” list that is maintained to protect one’s assets. This is the same principle on which many anti-virus, anti-malware, and other security product providers work. You update the signatures, the blacklist is updated, and you are protected from a certain threat which, by the way, is out there in the open, known to everyone. While we have our thoughts on whether this approach is truly effective in protecting against viruses and malware, our views on application security are very clear. The blacklist approach doesn’t work, especially not today, when attacks have become very sophisticated.


The Problem(s) with Blacklists…

For one, we are fast reaching a saturation point for the blacklist approach’s effectiveness, as the volume of blacklists that need to be maintained is large and ever growing. As one senior IT manager at a client organization once put it to us, “How much will I filter? There is no end to it.” This is not the first time we have come across this frustration. We recognize this challenge for the drivers of IT in an organization, as their core function is to improve productivity and drive innovation.

And second, because attack vectors have become complex and attackers more innovative and skillful in evading detection, the “Blacklist” approach will not work. I grappled with this challenge personally when we were putting together an anti-spam solution in an earlier stint. The sheer number of spam messages meant that some of them would inevitably slip through. Unfortunately, the same scenario is now playing out in the application vulnerability space, but with potentially disastrous implications.

So You’re Saying I should Whitelist?

So then what is the answer? Take the “Whitelist” approach. With the whitelist approach you structure the application to accept only legitimate functionality and stop everything else. Some simplistically put it as the diametric opposite of blacklisting, i.e. the “deny all, except” philosophy. In the past this approach faced a roadblock because nobody wanted to take the chance of blocking a legitimate transaction. Recognizing this challenge, we are now helping our customers design applications by integrating the whitelist approach. What we do here is sit with the architecture or development team, review the business case for each user input, and then work out different ways of applying a whitelist to those inputs. We believe this approach works best because only legitimate functionality is allowed to execute. What forms does this whitelist approach take? Many different ones, like filtering input characters against an array of allowable characters or comparing input values against legitimate values from the database.
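To make those two forms concrete, here is a minimal sketch; the field rules and the country list are hypothetical stand-ins for whatever the business case for each input actually allows.

```python
# Form 1: filter input characters against an array of allowable characters.
ALLOWED_NAME_CHARS = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'- "
)

def valid_name(value):
    return 0 < len(value) <= 60 and set(value) <= ALLOWED_NAME_CHARS

# Form 2: compare the input against legitimate values from the database.
def valid_country(value, legitimate_countries):
    # legitimate_countries stands in for a lookup of allowed DB values
    return value in legitimate_countries

assert valid_name("O'Brien")
assert not valid_name("<script>alert(1)</script>")
assert valid_country("India", {"India", "USA", "UK"})
```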

Using the blacklist approach is like chasing your tail. How long can you do it before you exhaust yourself?

Until next time, stay safe!

– Hiren


From 0-Day to Fix

Posted: October 19th, 2012
Filed under: IIS & HTTP


From our partners at Net-Square


One of the ongoing jokes in the information security industry is that “every day is a 0-day.” New bugs are discovered daily. The recent Java 7 exploit (CVE-2012-4681) found in the wild has captured everyone’s attention. This bug has the potential to cause widespread damage. As I write this, almost a week after its discovery, organizations and individuals are still being targeted by this exploit. Governments, intelligence agencies, underground outfits, and script kiddies are having a field day “owning” your systems. The logical question that follows is, “So, how do I prevent myself from being owned?” Vendors have always pursued the noble goal of creating bug-free software. What many vendors miss is the “TIME TO FIX” factor. Time-to-fix is the time taken from a bug’s first discovery to a patch or update being made available to the customer. If you are building software or are responsible for the security of your organization’s applications, ask yourself the following questions: “How fast is my bug-fixing process?”, “What is the average turn-around time of all bugs fixed in 2012?”, “What has been my fastest time-to-fix? My slowest?”

Scoring guide:

Less than 48hrs – Excellent.

48-72hrs – Fair.

More than 72hrs – …I don’t want to be impolite.
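Putting numbers behind those questions takes only a few lines. In this sketch the bug log is invented; in practice the (reported, fixed) timestamp pairs would come straight out of your bug tracker.

```python
from datetime import datetime

# Hypothetical bug log: (reported, patch_shipped) timestamps for one product.
bugs_2012 = [
    (datetime(2012, 1, 3, 9), datetime(2012, 1, 5, 17)),
    (datetime(2012, 4, 17, 8), datetime(2012, 4, 18, 12)),
    (datetime(2012, 8, 28, 14), datetime(2012, 9, 14, 10)),
]

# Time-to-fix per bug, in hours.
hours = [(fixed - found).total_seconds() / 3600 for found, fixed in bugs_2012]
print(f"fastest: {min(hours):.0f}h  slowest: {max(hours):.0f}h  "
      f"average: {sum(hours) / len(hours):.0f}h")
```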

Chris Evans, who heads the Google Chrome security program, has set an aggressive time-to-fix goal of less than 24 hours. And his team has made good on this promise. The Pwnium contest at CanSecWest 2012 in March saw two Chrome bugs fixed and turned around in less than 24 hours. Users worldwide had a Chrome patch before the conference was over. Google paid USD 120,000 in bug bounties as appreciation for the bugs turned in. Pwnium 2 will be held at Hack in the Box, Kuala Lumpur in October (where yours truly is speaking as well as training). I am told that Google is bringing a treasure chest of 2 million dollars to be paid out in bug bounties. This is solid security!

I must also mention the herculean efforts made by Microsoft. “Patch Tuesday” is an important monthly event for IT professionals worldwide. Microsoft releases patches for its entire product line on the second Tuesday of every calendar month. Sometimes there is an extraordinary Patch Tuesday in the same month to address critical fixes. Although a monthly update cycle seems slow in light of my personal 72-hour benchmark, it is a mammoth task to ensure the stability of a bug fix across a product line as vast and diverse as Microsoft’s. Vendors of other high-exposure software are not as mature as Microsoft or Google. Adobe, Apple, and Oracle are learning this lesson the hard way. They have been struggling to bring in a strict time-to-fix regimen.

The speed of bug fixing is something every organization should pay serious attention to. We have found this to be the Achilles’ heel for many of our clients. They struggle to remediate their vulnerabilities simply because their outsourced software vendors don’t have their act together and take months to fix them. In the past we have helped our clients with remediation through our training programs. This week we are piloting a novel approach to solving our clients’ problem through an onsite service. A Net-Square analyst will work onsite with the internal development team or the software vendor to ensure the vulnerabilities are fixed, sitting next to them, helping them understand what the issue is, and helping them literally code the fix. Incidentally, until Java 7’s latest bug, CVE-2012-4681, is fixed, I have uninstalled Java browser plug-ins from all my systems. I intend to keep it that way, too.

Saumil Shah


New Features for SDVP IIS Web Application Firewall

Posted: August 20th, 2012
Filed under: IIS & HTTP


We recently launched the newest version of our advanced Web application firewall for IIS, ServerDefender VP. The update includes security enhancements that will allow users to block IP addresses by geographical location and improvements to the user interface.

Of course, there are plenty of great features already built into SDVP, but since country blocking is one of the newest and most powerful, we wanted to show it off a bit. Be warned: geo-targeted traffic blocking with SDVP is easier than you may think. Below, we also take a look at a few other new features.

NEW FEATURES

Geo-targeted IP Blocking

[Screenshot: country blocking configuration in ServerDefender VP]

ServerDefender VP’s newest feature allows users to prevent users in select countries from accessing a site by blacklisting nations from a drop-down menu. Geographically targeted IP blocking provides added security for organizations that experience repeated attacks or heavy amounts of malicious or bot traffic from specific countries. We actually use this functionality on our own site, as we have experienced heavy amounts of malicious traffic from certain regions of the world for some time now. Trust us, it’s helped a lot.

The ability to block traffic by country had previously been available in Port80’s anti-leeching software, LinkDeny. We decided it made a lot of sense to have that functionality in a Web application firewall, so we added it. The above image shows all the work required to block a country: select a blacklist or whitelist model, choose the countries you’d like to block, add them to the list, and click apply. That’s all the rule configuration required.

Informational Threat Severity

[Screenshot: an Informational-severity event in the ServerDefender VP LogViewer]

ServerDefender VP’s LogViewer gains an additional threat severity level called “Informational,” which is used to report threats that have been deemed harmless by creating an exception. We built this feature to improve the signal-to-noise ratio in our LogViewer. In other words, when an event occurs that has an exception created for it, SDVP won’t put it in big bold red letters like this:

[Screenshot: a high-severity attack displayed in bold red in the LogViewer]

Instead it’s grayed out, so real attacks (they’re the ones in red) stand out.

Filter By Severity

[Screenshot: filtering logs by attack severity in the LogViewer]

ServerDefender VP’s LogViewer will now allow users to filter logs by severity of attack (e.g. high severity, moderate severity, low severity). This is useful if you only want to see the most severe attacks or check what you have marked as an exception. It also lets users perform the action they want quickly, without having to look through every log for the correct one.

What do you think?

Have thoughts for other features? Hate the new ones? Send us an email and let us know what you think: info@port80software.com


Dependent on Automated Web App Scanners? You may be missing vital clues!

Posted: August 9th, 2012
Filed under: IIS & HTTP


From our friends at Net-Square

A variety of automated web application scanning tools are available today that can perform a vulnerability analysis of web applications quickly and give you a list of detected vulnerabilities. This certainly saves a lot of time for your infosec team, which is already loaded with security issues. The team can now certify and approve deployment of applications to production after getting a clean report. But how good are these automated tools? Can we really rely on them?

Three security experts, Adam Doupe, Marco Cova, and Giovanni Vigna of the University of California, Santa Barbara, put the best automated and semi-automated scanners to the test and concluded that “while certain kinds of vulnerabilities are well-established and seem to work reliably, there are whole classes of vulnerabilities that are not well-understood and cannot be detected by the state-of-the-art scanners”.

Their study, aptly titled “Why Johnny Can’t Pentest,” demonstrates how even well-regarded automated scanners miss as many as 60% of the findings. And this is really not surprising. Even the best scanners are limited in application “coverage”: what you don’t see, you don’t report. Net-Square has a history of building some of the best automated scanners, so we are well aware of their problems. The second fundamental problem is that automated scanners can never perform vulnerability chaining. Critical findings discovered by Net-Square analysts sometimes take more than two or three bugs to exploit.

The other problem is an operational one: customers and firms undertaking application testing are so focused on reducing the number of false positives that they weaken the actual premise of testing. When I talked to a client about Net-Square’s automated scanner, NS-Webscan, the first question was about its rate of false positives. But wait, aren’t we supposed to care about what it finds in the first place? Many firms performing application testing use automated scanners to generate the reports and have their analysts filter out the false positives. How did we find this out? While interviewing candidates from these firms.

Automated scanners aren’t entirely useless. They are best utilized in reducing the load for manual testing. Obvious vulnerabilities get detected right away. Not all customers can engage a team of sharp penetration testers throughout the year. Automated scanners, like NS-Webscan, provide an intermediate solution for Infosec Management to certify rollout of minor releases and changes between two cycles of manual penetration testing.  However, being entirely dependent upon automated testing is like an ostrich sticking its head into the sand.

We use NS-Webscan to initially check to see how vulnerable the application is. If we find many issues in the first round of testing then our analysts know that they have a long road ahead on that particular application test, with many interesting vulnerabilities to be discovered. The bottom line is, no amount of automation can match the skill and cunning of a hacker’s brain.


How Popular are Microsoft IIS Servers Around the World?

Posted: July 30th, 2012
Filed under: IIS & HTTP


Microsoft’s IIS web servers have been among the most popular in the world for years, historically battling Apache, and the increasingly popular nginx, for supremacy. IIS remains strong in many market segments throughout the world, and with data gathered by W3Techs, we’ve compiled some breakdowns of IIS use around the world. Numbers represent IIS use per top-level domain (TLD).

[Interactive map: IIS usage per top-level domain]

Below is an enlarged view of European IIS usage. This bar graph is ordered by IIS popularity among European top-level domains.

[Bar graph: IIS usage among European top-level domains]

Below, IIS usage is broken down by generic top-level domains. IIS is strong among governmental domains like .gov and .mil, which were both over 50% IIS usage. IIS also proves popular with educational organizations, accounting for nearly half of all .edu domains.

[Bar graph: IIS usage among generic top-level domains]

While IIS was reportedly passed by nginx for overall popularity earlier this year, it maintains a strong hold in several countries and industries, especially at the governmental level. This should be no surprise: as we have seen, IIS remains popular within many Fortune 1000 corporations.

What do you think IIS’s future holds? Let us know in the comments below.

Data source: W3Techs http://w3techs.com/technologies/breakdown/ws-microsoftiis/top_level_domain


Denial of Service by Regular Expressions – ReDOS!

Posted: July 9th, 2012
Filed under: IIS & HTTP


From our friends at Net-Square solutions:

Regular expressions, or “regexes,” are heavily used in pattern matching. Applications use regular expressions to verify inputs of all sorts: dates, zip codes, email addresses, transaction amounts, etc. Less well known are the dangers of poorly implemented regular expressions. Regex search patterns, if not implemented properly, can cause havoc. That havoc is named ReDOS!

The intrinsic nature of regular expressions involves finding the best and tightest match. This involves recursion, a nested operation that can be time consuming if not handled properly.

Example: the regex “^(\d+)+$” matches any group of digits. An input of “1234” matches the regex on the first pass. An input of “123X” will fail on the first pass. Since the regex involves grouping, the engine will try another pass to see if a different grouping works. Before it declares the input non-matching, the regex engine will have tried 2^4 = 16 passes. An input of “123456789X” will require 2^10, i.e. 1024, passes before the match fails. Each character added to the input doubles the number of passes required. In our tests, evaluating a 10-character mismatched input took 0.001 seconds, whereas a 25-character mismatched input took 15 seconds.

Adding more characters in the evaluation string increases the number of paths evaluated exponentially and forces the regex engine to evaluate millions of paths, eating up CPU time and leading to ReDOS.
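You can reproduce the blow-up in a few lines of Python, whose re module uses a backtracking engine. Exact timings will vary by machine, and pushing the input length much past 25 characters can hang the process:

```python
import re
import time

# Nested quantifier: a classic catastrophic-backtracking pattern.
EVIL = re.compile(r"^(\d+)+$")

for n in (10, 15, 20, 25):
    payload = "1" * n + "X"      # almost matches, then fails on the last char
    start = time.perf_counter()
    EVIL.match(payload)
    print(f"{n + 1} chars: {time.perf_counter() - start:.3f}s")
```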

At the heart of a regex engine is a finite automata machine. Some regex engines use an NFA, or nondeterministic finite automaton.

Usually, when matching search patterns, either an NFA engine, which involves backtracking on mismatches, or a DFA engine is used. Backtracking matches positive input fairly quickly; it is negative input that takes longer. The NFA must confirm that none of the possible paths through the input string match the regex, which means that all paths have to be tested.

ReDOS is more common than you think. Constructing a regex is a complex task, and if it isn’t thought through properly, you have a ReDOS situation in your application.


Learn more about ReDoS

-by Mayur Singru, Net-Square Solutions


Microsoft Reveals Four Windows Server 2012 Editions

Posted: July 6th, 2012
Filed under: IIS & HTTP


More details are in on Windows Server 2012. This week Microsoft announced the four versions of its much-reduced server platform lineup. The editions now consist of Datacenter, Standard, Essentials, and Foundation. Microsoft says the new grading simplifies the licensing experience, making it easier to select the appropriate edition. We’d say so, judging by previous edition guides.

Take a look at the new and simplified editions and pricing below:

[Table: Windows Server 2012 editions and pricing]

From Microsoft.com

A Windows Server 2012 release candidate is available for download now. Thirsty for more? Check out some of the new security and performance features in IIS 8 and Windows Server 2012.


6 Useful IIS Resources

Posted: July 2nd, 2012
Filed under: IIS & HTTP


As developers of Microsoft IIS performance and security tools, we frequently seek out resources dedicated to IIS. Sure, the Web is full of blogs, articles, and forums touching on IIS, but finding the best of the best can be a bit tricky. So, rather than have you scour Google results for quality resources, we’ve put together a list of some of the most useful IIS sites and resources from around the Web.

6. Scott Forsyth’s Blog

What’s good about it? Scott Forsyth has worked (up until recently) for two web hosting companies, and it shows in his blog. Scott’s blog features many useful technical pieces on IIS, ASP.NET, and web farms. IIS-specific posts include troubleshooting guides, video tutorials, and IIS news.

Read Scott’s blog.


5. dotnetslackers.com – Mastering IIS

What’s good about it? Videos. Lots of videos. 46 to be precise, all of them created to help you master IIS and Web technology. These ~10-20 minute videos provide tutorials and guides on topics such as securing IIS, setting up IIS shared configuration, and IIS FTP. Though the videos are created by Scott Forsyth, this site acts as an organized home for the series.

Watch lots of IIS videos.

4. Steve Evans Site

What’s good about it?  Steve Evans is a Microsoft Most Valuable Professional (MVP) who has worked in IT for over 12 years.  Steve’s site features a number of slidedecks and video presentations on IIS, like “IIS 7.5 for IT Pros” and “IIS for Developers.”

Learn from the MVP.


3. IIS Answers

What’s good about it? IIS Answers is an IIS-exclusive Q&A board, though one limited in usability by its lack of a search feature. While there are much more robust and user-friendly forums out there, this one is exclusive to IIS, so, if you look carefully, you may be able to find an answer to a question you have or, if you ask a new question, have it answered by a member of the community.

2. ServerFault

What’s good about it?  Serverfault has a large community and a robust tagging and voting system.  With tagged posts, IIS relevant questions are easy to find and answer.  IIS and Windows Server rank among the most tagged questions, so there is plenty of useful info for IIS users – and active users to provide answers if you have new questions.

Find IIS questions and answers.


1. IIS.net

What’s good about it? This is the home of IIS. The forums are rich with IIS users of varied levels of expertise, there is a massive supply of blog posts from the IIS team, and community-built tools are available for download. Not to mention the resource center, which contains documentation from Microsoft on all things IIS: planning IIS architecture, installing and configuring IIS, deploying websites on IIS, and much more. Think of this as the encyclopedia of IIS.

Explore the home of IIS.


Not enough resources? Well, there’s always our blog [200 OK], where you can find IIS-specific pieces as well as pieces touching on the broader worlds of information security and Web performance.


Building Secure Web Applications (Infographic)

Posted: June 13th, 2012
Filed under: IIS & HTTP


Oftentimes, Web app firewalls are deployed as a corrective action for flaws in software that could have been avoided with better development practices. Veracode has put together an infographic of some common attacks Web apps are susceptible to, and provided a quick checklist for building more secure Web apps. Take a look:

[Infographic: Building Secure Web Applications]

Infographic by Veracode Application Security


How to Use a Web Application Firewall (The Right Way)

Posted: May 8th, 2012
Filed under: IIS & HTTP, Web and Application Security


The Perceptions of Web Application Firewalls

Difficult to configure. Confusing to use. Time-consuming to manage. Set-it-and-forget-it security. These are some of the perceptions of Web application firewalls that can be, in many cases, dangerous to the security health of your organization. Like physical exercise, exercising good security practices requires effort and commitment, but at the end of the day, the benefits far outweigh the costs. This may be news to some, but a Web app firewall is not a set-it-and-forget-it security crutch. Rather, a Web app firewall is a security tool that requires dedicated use. Most importantly, the Web application firewall isn’t a sentient being, it’s a device that, like many of man’s creations, is only as good as the person or people wielding it.

A State of Mind

Security is never a feature you can outright purchase. It’s not a box you can check, or a test you can pass and be done with. This rationale applies to security whether it be for a car, a home, or your Web apps. This is a concept most people can grasp in more physical, trivial day-to-day tasks, but it is sometimes lost when it comes to the Web. For example, driving to and from work is potentially very dangerous. However, if you practice, get your license, and drive with caution, while avoiding activities like speeding and texting while driving, you will, in all likelihood, be safe. No one would get into the driver’s seat of a car for the first time and expect passive safety features like a seat belt and airbag to make up for their lack of experience. The same applies to Web app firewalls: there are measures of practice that must be applied in order to achieve security; one cannot rely solely upon passive features to do all the work. Web app firewall users need to be active drivers, not passive ones. Here are some tips to help you stay in an information security state of mind and become an active Web app firewall driver.

Think in Layers

Don’t: Put up only a single obstacle to prevent vulnerability exploits.
Do: Use a Web app firewall as one layer in a multi-layered wall.
Why: Most application code isn’t perfect (really, none of it is, and even if it were, attackers could still find legitimate paths of entry using valid authorization information). This means there are flaws in it that can be exploited by attackers. However, flaws and vulnerabilities often cannot be easily fixed or even recognized. Security should be thought of in layers, with each layer serving its own purpose and no layer being responsible for the entire load. Think of home security: you may have a fence, locks on your doors, and an alarm. A WAF should constitute only a single layer in a larger defense scheme.
How: A Defense in Depth strategy. This type of strategy aims to create a series of backup measures in case one layer fails, which allows each security tool to perform within its functional specifications and not put the entire job of security on one layer. This strategy can also be thought of as a funnel, with broader measures at the top leading down to the most specific security functionality at the base, in the form of a Web app firewall.

Firewall

Block traffic going to all ports other than Ports 80 and 443. This will funnel traffic into those two ports for inspection by the other layers of security behind it. Wendy Nather wrote a piece on why we still need firewalls that nicely explains their usefulness.

IDS

Detection is needed to alert a system admin if and when an intruder is recognized in the application. Being aware of an intruder’s existence inside the app allows a potential attack to be identified before it has commenced. This gives a system admin time to take additional preventive measures, such as blocking the IP or logging off a user account if it’s been taken over. An IDS provides a more general detection solution, alerting when intrusions occur at the network and system levels rather than specifically within an application, as an app firewall would.

IPS

An intrusion prevention system will prevent an attacker from gathering information on your app and server. It’s essential to prevent hacker reconnaissance by obfuscating information like server type, file extensions, and application or site errors so it is not easily accessible to hackers. This is, again, a more system- and network-level approach.

Application firewall

Finally, your applications will require an app firewall to secure them specifically, as they are valuable centers of information with large amounts of traffic going in and out. A Web app firewall can monitor, detect, and prevent malicious traffic accessing applications. With active usage, a Web app firewall will act as a powerful last line of defense for your Web apps against attacks.

Shrink Wrap Your Security

Don’t: Expect a WAF to be correctly configured for your site out of the box.
Do: Set up your Web application firewall for your specific needs.
Why: Shrink-wrapping a Web app firewall’s security to fit tightly around your specific requirements leaves less room for error. Simply using a blacklist of attack signatures is a good way to get hit by a zero-day, and careless whitelisting could lead to false positives.
How: Set app specific policies that only allow an app to be used the way it was intended. Any vulnerabilities found through penetration testing that cannot be easily remediated should have app firewall policies put in place to protect until corrections can be made to the vulnerable code. Setting some of the below policies in your Web app firewall is a good start to shrink wrapping security.

Input validation. Configuring a Web app firewall’s input validation policies will help protect against attacks like SQL injection and cross-site scripting (XSS). Platform-specific exploits can use complex URL strings to gain access to a shell or a Common Gateway Interface (CGI), from which a hacker can easily get a directory listing revealing file structure. Input sanitizing prevents harmful scripts from being injected into your app through URLs or form fields and should be enabled in an app firewall. It should be configured according to the characters permitted and needed in each individual field.
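As a sketch of what a per-field input validation policy boils down to (the field names and patterns here are invented; a real WAF exposes this as configuration rather than code):

```python
import re

# Hypothetical per-field policies: each field only admits the characters
# it actually needs, so injected markup or SQL never reaches the app.
FIELD_POLICIES = {
    "email":   re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"),
    "comment": re.compile(r"^[\w .,!?'-]{1,500}$"),
}

def input_allowed(field, value):
    policy = FIELD_POLICIES.get(field)
    return bool(policy and policy.fullmatch(value))

assert input_allowed("email", "admin@example.com")
assert not input_allowed("comment", "<script>document.cookie</script>")
```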

Use different configurations. Like customizing input sanitizing, if you need to secure – for example – an Exchange server and a Joomla site, do not use the same configuration for both. Just as a house with no windows will need different security than a glass house, different Web apps will have their own security needs that should be addressed independently.

Manage file uploads. If users can upload files, only allow the file types your site uses. This means preventing dynamic files from being uploaded if your site only hosts images, for example. A Web app firewall should be configured to block any attempt to upload files that your app or site does not use. Be as specific as possible, either by whitelisting only what you will use or by blacklisting all file types that you cannot use.

Be wary of session hijacking. Set user sessions to time out when idle for an extended period. This will help prevent a user’s session from being hijacked, leading to unauthorized access to sensitive information. Policies can also be enforced so a session can only be used from a single IP address, preventing an outside hacker (with a different IP) from gaining access to the legitimate user’s session.
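A bare-bones sketch of those two session policies, idle timeout plus single-IP binding; the in-memory dict stands in for a real session store:

```python
import time

SESSION_TIMEOUT = 15 * 60        # idle timeout, in seconds
sessions = {}                    # session_id -> (client_ip, last_seen)

def touch_session(session_id, client_ip):
    """Return True if the session is valid for this client; otherwise kill it."""
    record = sessions.get(session_id)
    if record is None:
        return False
    ip, last_seen = record
    now = time.time()
    if client_ip != ip or now - last_seen > SESSION_TIMEOUT:
        del sessions[session_id]         # different IP or stale: force re-login
        return False
    sessions[session_id] = (ip, now)     # refresh the idle timer
    return True
```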

Request management. You know the areas of your site or app where sensitive data can be accessed, the types of sensitive data, and the types of files your site is composed of. Make sure access to admin URLs is restricted and requests for sensitive files from untrusted users are blocked by your Web app firewall.

Without security measures in place, hackers will find vulnerable penetration vectors in your Web applications. Imagine your site as a bank; there are ordinary locks and alarms on the perimeter doors, but the valuable goods (money, etc.) are inside a vault. Since you know your Web applications best, it’s up to you to make sure you place locks on all the doors, and make sure you put bigger locks on the more important doors. If you’re unsure what needs to be secured, thoroughly scrutinize and pentest your site.

 

You’re Never Done Securing

Even after you’ve configured your Web application firewall perfectly and put in place the best security policies you can, there are dangers to be aware of.  Human error, trust betrayal, and organizational challenges pose security threats that are hard to defend against, but are important to evaluate as they pertain to your organization.

Denial of Service (DoS). Made popular by its presence in the media as the attack method of choice of hacktivist groups, denial-of-service (DoS) attacks have become a major concern for many organizations, and rightly so. DoS attacks rely on a large number of requests to bring down a target site or app. Casual DoS attacks can be diminished with a Web application firewall by limiting the number of requests per second or blocking IP addresses with high request frequencies. If you are dealing with very serious and determined DoS attacks, though, you may want to perform a risk assessment and investigate cloud-scale countermeasures, which will require some organizational backing to make the necessary changes.
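The request-frequency limiting described above is, at its core, a sliding-window counter per IP. A minimal sketch with invented thresholds:

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS = 20                # requests allowed per window, per IP
WINDOW = 1.0                     # window length in seconds
recent = defaultdict(deque)      # ip -> timestamps of recent requests

def allow(ip):
    """Sliding-window limit: flags IPs with high request frequencies."""
    now = time.time()
    q = recent[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()              # forget requests outside the window
    if len(q) >= MAX_REQUESTS:
        return False             # candidate for blocking
    q.append(now)
    return True
```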

Zero Day. Zero-day attacks are attacks that exploit previously unknown vulnerabilities. Even if you have tested against every known exploit and secured against them all, new exploits will arise that you are not protected against. Any attack, even a known attack type like SQL injection or cross-site scripting, can become a major threat if it’s a zero-day exploit. New exploits are created every day and can be very damaging, though a Web app firewall can be used to preempt them. When purchasing a Web app firewall, make sure it has a strategy for dealing with unknown exploits or a methodology for handling attacks it hasn’t seen before. Even with an app firewall in place, it’s important to remember that being secure today doesn’t mean you won’t be vulnerable tomorrow.

Exploiting Legitimate Trust Relationships. Web app firewalls and security policies can go a long way toward securing an organization, but exploitation of trust relationships, whether intentional or unintentional, is hard to defend against. Members of an organization must be intelligent and prudent about their Web use for the organization to remain secure. This means not opening attachments from unknown senders, not clicking on strange links from unknown and untrusted people, and so on. For example, if a user unknowingly downloads a keystroke logger from an untrusted email correspondent, a hacker can simply lift credentials from the logger and gain legitimate access to an organization’s system; no SQL injection, session hijacking, or complex hacking techniques required. This example brings to light the human element of Web security: an organization is only as safe as its least secure link.

Test, Update, and Evaluate

Don’t: Treat security as static, or use one round of penetration testing, an audit, or PCI compliance as an excuse not to perform further testing.
Do: Test and audit regularly. Evolve your security plan as threats and traffic trends on your site change.
Why: Security vulnerabilities can arise after a code change, an update or through a new hacking method. Nothing is ever static; the Web is constantly changing, which means that security needs to stay current with changes that occur. New vulnerabilities are discovered all the time, even in major software companies’ code.
How: Stay current with patches and updates; people spend hours developing these for a reason. Stay informed by reading about information security trends, following #infosec on Twitter, talking to peers, and so on. Regularly pentest and audit your apps and sites. Analyze logs and learn from them to make adjustments to your Web app firewall configuration.

Log. When first setting up a WAF, use a logging mode to evaluate your site’s traffic. This will give you a sense of where tighter policies may need to be set or where exceptions may need to be set.

Check for false positives. False positives are bad for business. You never want to block a legitimate and harmless user who accidentally mistyped a URL or input incorrect characters into a field because security policies are set too tightly. Logs should be checked regularly for errors that aren’t really errors, and exceptions should be added to policies that may be producing false positives.

Perform tests. Penetration testing for vulnerabilities in your code will show you what you need to protect against until new code can be deployed to fix it.  Set new policies in your app firewall based on your testing.

Update and patch. Install updates as they are released. Deploy patches to secure vulnerable code; it doesn’t matter that the patch is out if you don’t actually apply it. If there is an update or patch available to secure a vulnerability, chances are there are people who know how to exploit it.

Web app firewalls can be a useful ally toward greater Web security for those who know how to use them properly.  Whether you’re in the market for a new Web app firewall, or are already a proud owner, understanding that a Web app firewall is a tool designed to be driven is an important step toward increased Web security.


This Week in Security and Performance – Week 15

Posted: April 13th, 2012
Filed under: IIS & HTTP


Utah Medicaid Data Breach

On March 30, Utah’s Medicaid health service was breached by a hacker based in Eastern Europe who stole up to 780,000 medical files, up to 280,096 of which contained Social Security numbers. According to Utah Department of Technology Services spokesperson Stephanie Weiss, the breached server was “a test server and when it was put into production there was a misconfiguration. Processes were not followed and the password was very weak.”

This week Utah Governor Gary R. Herbert called for an audit of all state information security and data storage procedures – as well as the handling of the information in the recent breach.  It’s unfortunate that it took an incident like this to recognize the need for thorough security audits.

Low-latency Networks for Your Home

A brief overview of the problem of network latency, and how software solutions look to remedy it.

Microsoft Patches and Internet Explorer Woes

Microsoft issued six security bulletins with new patches addressing four critical vulnerabilities. All four of these critical bulletins were for remote code execution vulnerabilities.

Of note is a critical remote code execution vulnerability in Internet Explorer. By tricking a user into opening a document in their browser, a hacker could take control of the user’s PC.

The Anatomy of a Hack

Patrick Meenan dug into his access logs to trace the activities of a hacker who penetrated the forums at webpagetest.org.


Spotting Unseen and Potentially Harmful Traffic

Posted: April 2nd, 2012
Filed under: IIS & HTTP


Startling Finds

Don’t think your site is receiving harmful traffic? Incapsula recently put together a report about website traffic and its legitimacy, harmfulness, and humanness. The information compiled came “from a sample of one thousand websites of Incapsula customers, with 50,000 to 100,000 monthly visitors,” their site says.

What did the report say? 51% of all website traffic was non-human, and 31% of all traffic consisted of potentially harmful visitors, including hacking tools, scrapers, spies, and comment spammers.

That means that your site’s traffic may be less man than machine.

So wait, you’re telling me that of the 10,000 visitors Google Analytics told me I had last month, only 5,000 of them were actual people?

No, actually, the number Google Analytics gave you is more or less accurate as a count of human traffic; it simply doesn’t tell you about the hackers, spammers, scrapers, and spies that are visiting your site. These visitors essentially go unnoticed and remain invisible unless you look for them. But how can you prepare and secure your site if you don’t even know what’s actually happening on it?

A Problem of Analytical Proportions

Now you’re probably crying, “Why wouldn’t Google tell me these kinds of things?!” Well, the short answer is: they can’t. When a user goes to a page on your site, their browser requests a specific page from your web server. The web server returns the page to the browser with an embedded piece of JavaScript, which is the code that runs Google Analytics. The code gathers information from the page opened in the browser and sends it to the databases at Google. After the information is sent to Google, you can go to your Google Analytics account to access the gathered data.

The problem is that non-human traffic, like scrapers and hacking tools, visits your site but doesn’t run the JavaScript used to send information from the browser to Google. Since Google Analytics runs on a separate server from your site, this script needs to send the information to the Google Analytics databases in order for a visit to be recorded.

What Can You Do?

You can keep using Google Analytics, or your preferred analytics service, but there are measures you can take to make yourself more aware of what is happening on your site.

  • Check log files. By manually combing through your server log files you can see all the traffic that has come to your site. It may take a little time and work, but spotting bots can be done fairly easily (a toy log-scanning sketch follows this list). Things to look for:
    • Suspicious user agents. Visitors not fetching dependencies like JavaScript or cookies are good signs of a bot.
    • Rapid movement. If you notice a visitor has viewed a lot of pages in a very short period of time (e.g. 1,000 pages in 10 seconds), that’s a good sign of a bot.
    • Bad requests. If your site is on ASP and you see requests for PHP, a guessing bot is probably making the illegitimate requests.
  • Check for robots.txt requests. This is the file that bots will look for and request; by counting these requests you will know how many times bots were on your site.
  • It’s a trap! You can set traps to catch bots by creating invisible links on your site, which human visitors won’t be able to see or find but bots crawling your site for links will. Every time such a link is hit, you will know that a bot visited your site.
  • ServerDefender VP can track and show you each and every request hitting your server. Real-time logs and monitoring clearly display the names of malicious visitors.
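As promised, a toy sketch of what that log combing might look like. The tab-separated log format, the thresholds, and the ASP-only assumption are all made up; adapt them to your own server’s log format:

```python
from collections import Counter, defaultdict

page_times = defaultdict(list)   # ip -> timestamps of page requests
robots_hits = Counter()          # ip -> robots.txt fetches (bot visits)
suspects = set()

def scan(log_lines):
    """Each line: ip, epoch seconds, path, user agent (tab-separated)."""
    for line in log_lines:
        ip, ts, path, agent = line.rstrip("\n").split("\t")
        ts = float(ts)
        if path == "/robots.txt":
            robots_hits[ip] += 1             # bots look for this file
        if path.endswith(".php"):
            suspects.add(ip)                 # bad request on an ASP-only site
        page_times[ip].append(ts)
        burst = [t for t in page_times[ip] if ts - t <= 10]
        if len(burst) > 100:
            suspects.add(ip)                 # rapid movement: >100 pages in 10s
    return suspects, robots_hits
```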

IIS 8: New Security and Performance Features

Posted: April 1st, 2012
Filed under: IIS & HTTP


The New IIS and Windows Server 8

In March, Microsoft released beta versions of its latest and much-altered Windows line-up. While most of the focus was given to Windows 8 for PCs, there is much to be seen in the new Windows Server “8” (the official title is in quotation marks for the time being) and IIS 8.

We took a look at the new features in IIS 8 and Windows Server 8 beta (download the Windows Server 8 beta) and have put together a breakdown of some of what’s new and what’s got us excited.

What’s New

A lot has been added or changed for IIS and Windows Server 8. Microsoft’s continued efforts toward cloud integration are apparent, as are improvements in virtualization, security, and elsewhere. Here is a sampling of the changes:

Server virtualization.  Server 8 goes “beyond virtualization” to offer features for building a private cloud, scaling and protecting workloads, while being more cost-efficient and more secure.

Multiserver control.  Now, you will be able to control multiple servers simply and easily through a single server.

PowerShell 3.0.  The newest iteration of the command-line interface provides a comprehensive management platform for servers, storage, and networks.

Cloud service connectivity.  Server 8 provides flexibility in the cloud, allowing users to build and deploy applications in the cloud or on-site.

Also…

There is the new Metro user interface.  Optimized for touch screens, tablets, and mobile devices, the Metro UI is one of the most noticeable visual changes, albeit not one of the most useful for the server edition.

What’s Exciting

Since we develop security and performance tools for IIS, our opinion of “what’s exciting” is somewhat skewed.  Below we’ve highlighted some of the best new performance and security features, as well as some other exciting new aspects.

IIS

Centralized SSL Support

Previously, copying and importing SSL certificates was an inefficient process, with each certificate needing to be copied and imported individually to their respective machines.

Now, IIS 8 has Centralized SSL Certificate Support to store all SSL certificates centrally in a file server, allowing the files to be shared by other servers.  This makes scaling easy, since additional servers can all share a single folder, and only this single folder will need to be updated.  The ability to simply copy certificates also means you will no longer need to follow multi-step importing procedures like this.

In short: SSL certificate management is simplified into one shared location.

Dynamic IP Restrictions

Allows users to set up filters to deny access to IP addresses of potentially malicious users.  Dynamic IP Restriction will automatically block potentially harmful IP addresses, making it beneficial in protecting against DoS attacks.

Microsoft highlights the following ways that dynamic IP filtering can be used:

  • “Block access for IP addresses that exceed the specific number of requests
  • Block access based on the number of connection attempts from an IP address during a specified time period.
  • Specify the behavior when IIS blocks an IP address.  Requests from malicious clients can then be aborted by the server instead of returning HTTP 403.6 responses to the client.”

Administrators can set static rules like “block requests from IP address X” or dynamic rules like “block requests from IP addresses with more than X simultaneous connections” to restrict access.  There’s even a “logging only” feature that allows admins to set rules that log what would happen if the rule was in play, but do not actually block any requests.

Using IIS 8.0 Dynamic IP Address Restrictions

In short: Dynamic IP Restrictions can be set up to block potentially malicious IP addresses and can help thwart DoS attacks.

FTP Logon Attempt Restriction

This new server-level feature (meaning the restrictions apply to the entire server, not just a single site) restricts logon attempts to the FTP server. It will help alleviate brute-force attacks by limiting the number of logon attempts in a specified time period, preventing malicious users or tools from running the gamut of names and passwords to gain access to your server.

When a user reaches the number of failed login attempts, the FTP connection to the client is automatically closed by the server.  The client’s IP address is blocked from accessing the FTP service until the service has been restarted.
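Conceptually, the restriction is just a failure counter per client. A rough sketch of the behavior described above, with invented thresholds (in IIS these are configurable settings, not code):

```python
import time

MAX_FAILURES = 4                 # failed logons tolerated per window
WINDOW = 30.0                    # seconds
failures = {}                    # client_ip -> recent failure timestamps
blocked = set()                  # cleared only when the service restarts

def failed_logon(client_ip):
    """Record a failed logon; return True if the client is now blocked."""
    now = time.time()
    recent = [t for t in failures.get(client_ip, []) if now - t <= WINDOW]
    recent.append(now)
    failures[client_ip] = recent
    if len(recent) > MAX_FAILURES:
        blocked.add(client_ip)   # close the FTP connection, deny until restart
    return client_ip in blocked
```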

 

In short: FTP Logon Attempt Restrictions can help prevent brute-force attacks on your servers.

IIS CPU Throttling

IIS users have been exclaiming “Finally!” since this feature was announced.  CPU throttling allows admins to set the maximum amount of CPU consumption for specific application pools.  When tenants are in a cloud environment, CPU throttling will help prevent a single user’s application from monopolizing CPU resources and slowing down other users.  This resource management also controls consumption at the site level, preventing any one site from hogging the server’s CPU.

In shared-hosting scenarios, where several tenants are on the same server, CPU monitoring will be able to see which users require more CPU resources and potentially prevent low-usage tenants from being over-billed.

[Screenshot: CPU throttling settings]

In short: With CPU throttling, admins can now monitor usage and set specific levels of CPU usage for different applications.

Improved Live Migration

Hyper-V allows for increased speed of live migrations between clustered virtual servers.  This is, as Microsoft calls it, “live migration without limits.”  Now, rather than requiring a shared storage location on the back end, live migration simply copies the memory pages of the migrating virtual machines to the destination prior to migration, with no perceived user downtime.  Plus, admins can use up to 10 gigabytes of bandwidth to complete live migrations.

In Short: Live migration between virtual servers is faster and easier.

Windows Server 8

Easier Security and Compliance

Administrators will now be able to set up centralized access rules and policies using claims, which are statements about an associated object’s attributes.  These are used to manage access to information, like preventing modification or deletion of files.  With this feature, admins will be able to establish strong yet malleable rules like “to access files classed as high-business-impact (HBI) data, a user must be a full-time employee, access them from a managed device, and log on with a smart card.”

With this, admins will be able to execute an audit on file servers to ensure that employees are in compliance with company and legal policies.

In Short: Administrators can now set up centralized access rules and policies to more easily ensure security and compliance.

Storage Spaces

Storage Spaces is a sophisticated storage solution that provides a more cost-efficient and reliable way to store data.  Physical disks are grouped into storage pools, from which storage spaces are provisioned.  All available physical disks can be used to create one pool, or multiple pools can be created by dividing the physical disks.

Storage spaces are made continuously available by integrating with failover clustering, and resiliency modes (mirroring and parity) allow data to be mirrored for backup in the event of storage failures.

In short: Storage spaces are a cost-efficient, reliable, and continuously available storage solution.

Multitenant Security and Isolation

Hyper-V provides the platform capabilities to create private cloud environments and transition to infrastructure-as-a-service (IaaS) environments.  Windows Server 8 has new security and isolation features that use the Hyper-V Extensible Switch to make cloud and IaaS environments more secure.  The Extensible Switch is a Layer-2 virtual network switch that provides programmatically managed policy enforcement on virtual machines, isolating multiple tenants and maintaining security.

Interview with Microsoft’s Bob Combs on the Extensible Switch

In short: Hyper-V’s Extensible Switch provides security and isolation for multi-tenant usage.

 

Other New Features

PowerShell.  PowerShell is Microsoft’s command-line shell designed for system admins and has been around since 2006, but Microsoft has pushed to make PowerShell a major component of Windows Server 8 by adding more than 2,000 native commands.  We found this nifty PowerShell cheat sheet and a thorough breakdown of all the PowerShell commands in Windows Server 8.

Application Initialization.  Allows admins to configure Windows Server 8 to initialize web applications proactively, so the application is ready for the first request.

NUMA-Aware Scalability.  Non-Uniform Memory Access (NUMA) is an architecture in which memory access time depends on the memory’s location relative to the processor; NUMA prefers local memory access over remote memory access.

WebSocket Protocol.  The new standards-based protocol provides real-time bidirectional communication between a browser or application and a server.  The WebSocket Protocol is supported in IIS 8, ASP.NET 4.5, and WCF for writing server-side applications.

Try it Yourself

Want to test-drive Windows Server 8 for yourself?  Here are some useful documents for getting you started:

  • Installing Windows Server 8
  • Installing IIS 8 on Windows Server 8
  • IIS 8.0 Using ASP.NET 3.5 and ASP.NET 4.5

More New Features…

We’re still digging into the new features.  Follow us on Twitter for more updates as we move closer to the release date and continue to break down the new pieces of Windows Server 8 and IIS 8.



Around the Web – Week 11, 2012

Posted: March 16th, 2012
Filed under: IIS & HTTP


Earlier this week we attended the webinar “The Great Application Security Debate: Static vs. Dynamic vs. Manual Penetration Testing” by BankInfoSecurity.com.  The webinar will be presented again on March 28th.  Check it out if you’re interested in understanding why app security testing is critical, learning the differences between types of testing, and determining which testing approach best suits your organization.

DevThought rolled out their SPDY indicator extension for Chrome, which allows you to see when you’re on a site using the SPDY protocol.

Incapsula reported that as much as 51% of site traffic could be non-human, with 31% of that being potentially harmful, and you probably don’t even know about it.

BlueCross BlueShield of Tennessee was fined $1.5 million for a data breach in 2009 that saw one million BlueCross members’ information stolen.

 


Around the Web – Week 10, 2012

Posted: March 9th, 2012
Filed under: Around the Web, IIS & HTTP


This week on the performance front:

Indications that Twitter is using SPDY.  With one of the Internet’s most popular sites on board, who will be next to switch over to the faster protocol?

In the world of information security:

We saw the introduction of the Cybersecurity Bill “SECURE IT Act” (Strengthening and Enhancing Cybersecurity by Using Research, Education, Information, and Technology Act) in the Senate, which doesn’t suggest any additional regulations for info security and would distance the government from protecting the private sector.  Others suggest the Department of Homeland Security should be involved.  What do you think the government’s role in information security should be?

David Spark writes, “The bad guys are really good at sharing information to break into us. We’re really not that good at sharing information to prevent them from breaking into us.”  Should we be sharing incident data?

Finally, this week showed us some browser vulnerabilities in Google Chrome and IE9.  Google offered up cash rewards and credit for finding its bugs.  Microsoft?  Not so much.


2011 Web Security Statistics and How to Avoid Being a Victim in 2012

Posted: February 27th, 2012
Filed under: IIS & HTTP, Web and Application Security

The More We Know, The Better We Can Prepare

The landscape of web security is constantly changing, with hacking attacks growing more prevalent and diverse. Our job is to constantly evaluate that ever-changing landscape so we can stay one step ahead and be prepared in the event of an attack. Here are some resources from around the web to keep you informed, prepared, and, most importantly, secure.

2011 Data Breach Investigations Report

2011 saw the all-time lowest amount of compromised data, but also the highest number of incidents ever investigated. Among the most common hacking methods in 2011 were brute-force and dictionary attacks, SQL injection, and buffer overflow. Web applications were the most attacked pathway when hospitality and retail victims were removed from the data set, and suffered more numerous attacks than ever.

Read the Report>

Web application firewalls can help protect your business from attack. Keeping harmful traffic out and letting good traffic in is crucial to running a business online, just as it is to running a brick-and-mortar business. With attacks evolving and becoming more and more prevalent, a blacklist of signatures is no longer enough to secure web apps. They need protection against threats both known and unknown.

Intrusion Detection FAQ>

Writing an Information Security Policy

An Information Security Policy is the cornerstone of an Information Security Program. It should reflect the organization’s objectives for security and the agreed upon management strategy for securing information.

In order to be useful in providing authority to execute the remainder of the Information Security Program, it must also be formally agreed upon by executive management. This means that, in order to compose an information security policy document, an organization has to have well-defined objectives for security and an agreed-upon management strategy for securing information.

How to Write an Information Security Policy>

Selecting a Web Application Firewall

OWASP, the authority on web security, recommends the following criteria for selecting your Web Application Firewall.

  • Protection against the OWASP Top Ten
  • Very few false positives (i.e., should NEVER disallow an authorized request)
  • Strength of default (out-of-the-box) defenses
  • Power and ease of learn mode
  • Types of vulnerabilities it can prevent
  • Detection of disclosure and unauthorized content in outbound reply messages, such as credit card and Social Security numbers
  • Both positive and negative security model support
  • Simplified and intuitive user interface
  • Cluster mode support
  • High performance (milliseconds of latency)
  • Complete alerting, forensics, and reporting capabilities
  • Web services (XML) support
  • Brute force protection
  • Ability to actively block and log, passively log only, or bypass web traffic
  • Ability to keep individual users constrained to exactly what they have seen in the current session
  • Ability to be configured to prevent ANY specific problem (i.e., emergency patches)

List of Free Site Performance Tools

Posted: May 3rd, 2010
Filed under: IIS & HTTP, Web Speed and Performance


If you are a site owner, webmaster or a web author, here are some free tools that you can use to evaluate and improve the speed of your site:

Firefox / Firebug Add-ons:

  • YSlow, a free tool from Yahoo! that suggests ways to improve website speed.
  • Page Speed, a free tool from Google that evaluates the performance of web pages and gives suggestions for improvement.
  • Hammerhead – a free tool to measure the load time of web pages.

Online Testing:

Development Tools:

  • CSS Sprite Generator – Generates a CSS sprite out of a number of images.
  • Smush It – Online tool that allows you to upload images for lossless compression and optimization. Provides a report of bytes saved and a downloadable zip file containing the optimized versions of the files.

Additional Script:

  • In Google Webmaster Tools, Labs > Site Performance shows the speed of your website as experienced by users around the world.

Looking for more free tools? Check out Google’s list.
