Wednesday, May 29, 2013

False Positive Detection Support in IronWASP


NOTE: Before reading this post I would strongly recommend reading the introduction post, which covers the basics of how web security scanners function and what False Positives are.


When a scanner reports a vulnerability, the user is left with the responsibility of determining if the reported vulnerability actually exists. How the user performs this task depends on how deeply the user understands web security and the vulnerability in question. Most non-security users (Functionality Testers/Developers/QA etc.) are left scratching their heads at this point.

Even for a skilled penetration tester this task isn't exactly a walk in the park. Now would you believe me if I said that for a penetration tester, writing off a reported issue as a false positive can at times be trickier than discovering a similar vulnerability manually? Let me explain why that is the case.
When trying to manually discover a vulnerability, a tester performs a series of probes and observes how the application behaves. If the probes elicit a favourable behaviour then the tester does more tests to confirm the presence of a vulnerability. If the probes have no impact on the application's behaviour then that section of the site is deemed secure and the tester moves on to probing another section.
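
To make this loop concrete, here is a minimal sketch of it in Python using the requests library. The URL, the parameter and the crude error check are assumptions invented for this example, not part of any particular tool or site.

import requests

base_url = "http://testsite.example/search"   # hypothetical target

# Baseline: how does the application respond to a harmless value?
baseline = requests.get(base_url, params={"q": "apple"})

# Probe: does a stray quote change the behaviour (e.g. a database error)?
probe = requests.get(base_url, params={"q": "apple'"})

if probe.status_code != baseline.status_code or "error" in probe.text.lower():
    print("Favourable behaviour - confirm with more tests")
else:
    print("No change in behaviour - move on to the next section")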

But when it comes to testing for false positives the tester has to perform an additional step. If the manual probes don't indicate the presence of the reported vulnerability then the tester has to come up with a reason for what might have caused the scanner to report it incorrectly. Unless this can be done there hangs a cloud of suspicion around the issue. What if the tester did not know about a new or lesser-known technique that the scanner used to detect this vulnerability? What if the tester is wrong and the scanner is right?

To do this additional step the tester has to know exactly how the scanner detected the issue and why it reported it. The tester can only get this information from the vulnerability summary provided by the scanner and from the request and response pairs that are usually included along with it. How useful and clear this information turns out to be depends on the scanner and the reported issue. Most black-box security scanners aspire to be black boxes themselves, so they aren't generous with information about the detection techniques they use.

In my observation, the vulnerability summaries of Burp Scanner and Netsparker, two scanners whose authors have been penetration testers themselves, do a good job of explaining how a reported issue was detected.


IronWASP's False Positive Detection Support:

 IronWASP helps the user with this process by doing two things:
  1. Explaining exactly how IronWASP detected this issue and why it was reported.
  2. Giving instructions on how to manually test and determine if the reported issue is a False Positive.
The following screenshot shows the information included in the description of a Command Injection vulnerability detected by IronWASP on a test site.

[Screenshot: description of a Command Injection vulnerability detected by IronWASP]

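The screenshot isn't reproduced here, but to give a flavour of this kind of explanation: a widely used check for Command Injection is time-based, where a delay command is injected and the response time is measured. The sketch below is a generic Python illustration with a made-up URL and parameter; these are not the exact payloads IronWASP sends.

import time
import requests

url = "http://testsite.example/ping"   # hypothetical target

def timed_request(value):
    start = time.time()
    requests.get(url, params={"host": value}, timeout=30)
    return time.time() - start

normal = timed_request("127.0.0.1")
injected = timed_request("127.0.0.1; sleep 10")

# If the injected request takes roughly ten seconds longer, the
# appended 'sleep 10' was most likely executed by a shell.
if injected - normal > 8:
    print("Delay observed - the reported issue looks real")
else:
    print("No delay - the report may be a False Positive")
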
When checking for a vulnerability, IronWASP typically uses more than one technique; for detecting SQL Injection it uses five different techniques. This is done to ensure coverage: if one technique fails to identify an issue, another might pick it up.
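
The five techniques themselves aren't listed in this post, but the layering idea can be sketched with three generic SQL Injection checks that are common knowledge: error-based, boolean-based and time-based. Everything below, from the URL to the MySQL-specific SLEEP() syntax, is an illustrative assumption and not a description of IronWASP's internals.

import time
import requests

url = "http://testsite.example/item"   # hypothetical target

def fetch(value):
    return requests.get(url, params={"id": value}, timeout=30)

def error_based(param="1"):
    # A stray quote that may trigger a database error message.
    r = fetch(param + "'")
    return "sql syntax" in r.text.lower()

def boolean_based(param="1"):
    # A true condition and a false condition should produce
    # different pages if the input reaches a SQL query.
    true_page = fetch(param + " AND 1=1").text
    false_page = fetch(param + " AND 1=2").text
    return true_page != false_page

def time_based(param="1"):
    # A database-level sleep should delay the response (MySQL syntax).
    start = time.time()
    fetch(param + " AND SLEEP(5)")
    return time.time() - start > 4

# If any one technique fails to fire, another may still catch the flaw.
checks = [error_based, boolean_based, time_based]
print([check() for check in checks])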

When a vulnerability is reported from a scan, the reported issue carries a list of reasons based on which IronWASP determined there was a vulnerability; each detection technique that succeeded contributes its own reason. The reason section details what payload was sent, what the payload does, what analysis was performed on the response that came back and how IronWASP inferred the presence of a vulnerability. This description is given in simple and clear language so that it makes sense even to non-security users.
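
To picture this, a single reason entry can be thought of as a record like the one below. The field names and values are invented for illustration; they are not IronWASP's actual format.

# Invented illustration of the information a 'reason' carries;
# not IronWASP's actual data format.
reason = {
    "technique": "Time-based Command Injection check",
    "payload_sent": "127.0.0.1; sleep 10",
    "payload_meaning": "Appends a shell command that pauses for 10 seconds",
    "response_analysis": "Response took 10.4s against a 0.3s baseline",
    "inference": "The injected sleep appears to have executed, so the "
                 "parameter value likely reaches a system shell",
}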

There are no blanks to fill or dots to connect; a penetration tester reading this knows beyond doubt exactly why the issue was reported, which makes it easy to reason about why the scanner might have made a wrong detection in a particular case. This might not be a ground-breaking difference from what other scanners do, but the real-world benefit it provides is surprisingly significant: the incremental time saved by this approach would accumulate into hours per assessment.

Now, for the first time, non-security users have a realistic shot at picking out false positives reported by a scanner. This is because each reason section also contains simple and precise instructions on how to manually test and determine if the reported issue is a False Positive, tailored to the detection technique used.

If the False Positive Check instructions require the user to resend the same payload or to send a modified payload, this can be done from the Manual Testing section of IronWASP. The following screenshot shows how the user can use one of the requests sent by the scanner as a starting point for this process.


[Screenshot: reusing a request sent by the scanner as the starting point for manual testing]

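The same replay can also be scripted outside IronWASP's UI. Below is a minimal sketch in plain Python; the URL, parameter and payloads are placeholders carried over from the hypothetical Command Injection example above.

import requests

url = "http://testsite.example/ping"   # hypothetical target

# The payload copied from the scanner's reported request, and a
# modified variant of the kind a False Positive Check might suggest.
scanner_payload = "127.0.0.1; sleep 10"
modified_payload = "127.0.0.1; sleep 0"

original = requests.get(url, params={"host": scanner_payload}, timeout=30)
variant = requests.get(url, params={"host": modified_payload}, timeout=30)

# Compare the two responses as the instructions direct, e.g. their
# timing and their bodies.
print(original.elapsed, variant.elapsed)
print(original.text == variant.text)
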
In addition to this, users with a greater appetite for information might find the scan trace section to be an added bonus. This section contains brief information about all the individual tests performed, successful as well as unsuccessful. To get a better idea, please refer to the screenshot below of the scan trace of the same Command Injection vulnerability.

[Screenshot: scan trace of the same Command Injection vulnerability]

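As a rough, invented illustration (not IronWASP's actual layout), each trace entry records a probe that was sent and what came of it, whether or not it led to a finding:

# Invented illustration only; not IronWASP's actual trace format.
scan_trace = [
    {"id": 1, "payload": "127.0.0.1'",          "delay_s": 0.3,  "outcome": "no change"},
    {"id": 2, "payload": "127.0.0.1; sleep 10", "delay_s": 10.4, "outcome": "delayed response"},
    {"id": 3, "payload": "127.0.0.1| sleep 10", "delay_s": 10.2, "outcome": "delayed response"},
]
for entry in scan_trace:
    print(entry["id"], entry["payload"], entry["outcome"])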
