Why hasn’t Cross-Site Scripting been solved?
31 December 2017, by Haina Li
Introduction
In 2017, Bugcrowd reported that cross-site scripting (XSS) remained the number one vulnerability found on the web, accounting for 25% of the bugs submitted to its bug bounty programs. XSS has also stayed in the top three of the web's most common vulnerabilities in recent years. In the 17 years since XSS was first recognized by Microsoft in 2000, it has been the focus of intense academic research and of a large ecosystem of penetration testing tools, yet vulnerabilities are still being found even in top websites such as Facebook and Google. In this blog post, we explore some of the reasons why XSS is still a major problem today.
XSS has evolved
XSS has evolved as modern applications have grown far more complex than the static pages they once were. Reflected and stored XSS have not disappeared, since both server- and client-side logic have become more elaborate, but the pattern of replacing server-side logic with client-side JavaScript also gave rise to DOM-based vulnerabilities. Server-side XSS prevention tools that examine deviations between the request and the response (such as XSSDS) do not work for DOM-based vulnerabilities, because the entire flow of malicious data from source to sink happens inside the browser and never reaches the server.
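To make that distinction concrete, here is a minimal sketch of a DOM-based flow. The page and element names are hypothetical; it illustrates the vulnerable pattern, not the code of any real site. The URL fragment is the source and innerHTML is the sink, and no part of the payload is ever sent to the server.

```javascript
// Hypothetical client-side code on a page containing <div id="greeting"></div>.
// Source: the URL fragment, fully controlled by whoever crafts the link.
const params = new URLSearchParams(window.location.hash.slice(1));
const name = params.get("name") || "guest";

// Sink: innerHTML parses the value as markup, so a link such as
//   https://example.com/welcome#name=<img src=x onerror=alert(document.cookie)>
// executes attacker-supplied script entirely in the browser.
document.getElementById("greeting").innerHTML = "Welcome, " + name;

// Assigning text instead of markup avoids this particular sink:
// document.getElementById("greeting").textContent = "Welcome, " + name;
```

Because browsers do not send the fragment to the server, a server-side filter that compares requests against responses has nothing to inspect.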
Newer methods that do address DOM-based XSS attacks include XSS filters and Content Security Policy (CSP). This myriad of sophisticated tools pursues the seemingly simple goal of escaping user-provided content, yet none of them catches all XSS vulnerabilities, and escaping everything all the time would break a web application altogether. For example, recent work by Lekies et al. [PDF] describes a new attack that was missed by every existing XSS prevention technique: the injected payload is benign-looking HTML that script gadgets already present in the application transform into malicious behavior.
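The gist of a script gadget, in a purely illustrative sketch (the helper below is not taken from any real library), is that legitimate application code re-interprets inert markup at runtime:

```javascript
// Illustrative "gadget": a helper the application legitimately ships, which
// expands declarative markup into HTML when the page loads.
document.querySelectorAll("[data-template]").forEach(function (el) {
  el.innerHTML = el.getAttribute("data-template");
});

// An injected payload such as
//   <div data-template="&lt;img src=x onerror=alert(1)&gt;"></div>
// contains no script tags or inline event handlers itself, so sanitizers and
// many CSP policies let it through, but the gadget above rewrites it into
// markup that executes.
```

The payload is harmless on its own; the attack only materializes when trusted code touches it, which is why filters that look for malicious patterns in the injected input miss it.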
The effectiveness of web penetration testing tools is limited
In a study of automated black-box web application vulnerability testing by Bau et al. [PDF], researchers tested commercial scanners from vendors such as McAfee and IBM and found average XSS detection rates of 62.5%, 15%, and 11.25% for reflected, stored, and advanced XSS (vectors using non-standard tags and keywords), respectively. The scanners were effective at finding straightforward, textbook XSS vulnerabilities, but lacked sufficient modeling of more complex XSS specific to the web application under test. Web application scanners are built reactively: new vulnerabilities are converted into test vectors only after they have become a problem. For stored XSS, scanners also struggle to link an injection to an observation that only appears later, possibly on a different page. These scanners are also often difficult to configure and can take far too long when set to fuzz every possible injection point in a large, complicated web application.
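As a rough illustration of that correlation problem, a scanner has to tag each probe so that a later sighting can be traced back to the request that planted it. The sketch below is an assumption-laden outline, not how any particular commercial scanner works; the URLs, form field, and page list are hypothetical, and it relies on the global fetch available in Node 18+.

```javascript
const crypto = require("crypto");

// Hypothetical stored-XSS probe: submit a uniquely marked payload, then
// revisit other pages to see whether it resurfaces unescaped.
async function probeStoredXSS(submitUrl, pagesToRecheck) {
  const marker = crypto.randomBytes(8).toString("hex");
  const payload = `<script>/*${marker}*/alert(1)</script>`;

  await fetch(submitUrl, {
    method: "POST",
    body: new URLSearchParams({ comment: payload }),
  });

  // The payload may only surface later and elsewhere (e.g. an admin view),
  // so the scanner must already know which pages to recheck.
  for (const url of pagesToRecheck) {
    const html = await (await fetch(url)).text();
    if (html.includes(`<script>/*${marker}*/`)) {
      return { vulnerable: true, sink: url };
    }
  }
  return { vulnerable: false };
}
```

Every step that is obvious to a human tester, such as which form feeds which page and how long the data takes to appear, has to be modeled explicitly, and that is where the scanners in the study fell short.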
Conclusion
As with most web vulnerabilities, XSS is not going away anytime soon, both because the web's technologies are constantly evolving and because building penetration testing tools with high true-positive rates remains hard. However, we may be able to eliminate most client-side security issues by replacing JavaScript with a new language that exhibits better control-flow integrity, such as WebAssembly.