Security Testing of Web Browsers
Authors: Pekka Pietikäinen, Aki Helin, Rauli Puuperä, Jarmo Luomala, Atte Kettunen, Juha Röning
Category: research article
Keywords: Web browser, security testing, vulnerability testing
Abstract: Web browsers have an enormous install base and vulnerabilities in them can result in widespread infections. In this paper we describe efforts made in 2010-2011 to systematically test for vulnerabilities in web browsers. The work was done with Radamsa, a black-box fuzzer that automatically generates test cases based on samples. Approximately 60 bugs were found in widely used browsers, about half of which had potential security impact.
Permanent link to this page: http://urn.fi/URN:NBN:fi-fe201109275588
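The abstract's description of Radamsa as a black-box fuzzer that derives test cases from samples can be illustrated with a rough sketch of the general sample-based mutation technique. The Python below is a hypothetical, much-simplified mutation loop, not Radamsa's actual implementation; the mutation types, mutation counts and the HTML sample are illustrative assumptions only.

    import random

    def mutate(sample: bytes, rng: random.Random) -> bytes:
        # Apply one random byte-level mutation to a sample input.
        data = bytearray(sample)
        if not data:
            return bytes([rng.randrange(256)])
        choice = rng.choice(("flip", "insert", "delete", "repeat"))
        pos = rng.randrange(len(data))
        if choice == "flip":
            data[pos] ^= 1 << rng.randrange(8)        # flip a random bit
        elif choice == "insert":
            data.insert(pos, rng.randrange(256))      # insert a random byte
        elif choice == "delete":
            del data[pos]                             # drop a byte
        else:
            end = min(len(data), pos + rng.randrange(1, 9))
            data[pos:pos] = data[pos:end]             # duplicate a short run
        return bytes(data)

    def generate_cases(samples, count, seed=0):
        # Yield `count` fuzzed test cases derived from the given samples.
        rng = random.Random(seed)
        for _ in range(count):
            case = rng.choice(samples)
            for _ in range(rng.randrange(1, 8)):      # stack a few mutations
                case = mutate(case, rng)
            yield case

    if __name__ == "__main__":
        samples = [b"<html><body><p>hello</p></body></html>"]
        for i, case in enumerate(generate_cases(samples, 5)):
            print(i, case)

In an actual browser-testing setup the generated cases would be served to the browser and crashes collected and deduplicated; that harness is outside the scope of this sketch.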
- Initial submission
- Security Testing of Web Browsers (revised version of paper)
The paper gives an overview of white-box and black-box approaches to security testing. It also describes a black-box fuzzer developed by the authors and discusses the algorithms used in the tool at a general level. The results of an effort to test web browsers using this fuzzer are presented.
In general the paper is well structured and written in a clear and understandable way. However, it should be proofread more carefully to bring it into a more polished state; at present there are several confusing sentences, missing words and the like.
The topic of the paper is interesting, but there seems to be little that is new in the paper or surprising in the results. Perhaps some more insight into the testing of web browsers could be offered. How useful were the new generation-based fuzzer modules compared to the simpler techniques? How many mutations were needed to find the 60 bugs? How much time does it take to generate the mutations, run the tests and filter out duplicate defects? How many previously found/reported bugs did you find? Currently the results section does not allow the reader to draw conclusions about the effectiveness of Radamsa, although finding 60 bugs does sound impressive to me.
In Section 2 you say that the problem with static analysis tools is that they find thousands of defects. This is quite confusing. Do you mean that the tools report a lot of false alarms, which becomes a problem? (However, you say that a large number of the issues are real, which would indicate that the ratio of true to false positives cannot be that bad.) Or do you mean that most of the defects have no security implications, or do you mean something else?
All in all, the paper gives a nice look at security testing, and at the application of black-box fuzzers in particular.