We test a lot of ASP.NET web applications, and on roughly 40% of them we notice when testing for cross-site scripting that the only thing protecting against it is the framework's own Request Validation. In other words, when you enter a basic XSS vector, you get ASP.NET's Yellow Screen of Death telling you that your input has been blocked as potentially dangerous.
This is all very well in its way, but for years we have been putting in our reports that relying on it is bad practice, because it is a crude control. Firstly, it is ugly, and it suggests from the outset that the developer hasn't put much TLC into the product. Secondly, and more importantly, Microsoft themselves do not recommend it as a substitute for proper input validation written into the application itself: there is no way the designers of the framework can predict what kind of content an individual application will need to accept in a given field.
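It is worth remembering just how blunt this control is: in ASP.NET Web Forms it is essentially an on/off switch, set per page or site-wide, not something you can tune per field. A fragment like the following (illustrative only) is how it is typically toggled:

```xml
<!-- Per-page: in the .aspx page directive -->
<%@ Page Language="C#" ValidateRequest="false" %>

<!-- Or site-wide: in web.config -->
<configuration>
  <system.web>
    <pages validateRequest="false" />
  </system.web>
</configuration>
```

The point being that once a developer needs to accept even one field containing markup-like content, the temptation is to flip this switch off for the whole page, losing the protection everywhere at once.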
So now there are XSS vectors that get around it. For example:
http://www.vulnerablesite.com/login.aspx?param=<%tag style="xss:expression(alert(123))" >
Microsoft have said that they are not going to fix this, which seems justifiable, since they never recommended relying on it in the first place.
I can't help wondering how many of the sites we have seen relying on it have since gone back over their code and done their input validation properly. Sadly, I suspect the answer in a large number of cases is no.
This is a shame, because a lot of these problems are caused by developers accepting characters in fields where they simply don't have to. For example, we see plenty of telephone number fields that only need to accept digits, yet happily take '<>' and other dangerous characters. And for everything else, simply encoding all user-supplied content on output is not hard and protects against the vast majority of issues.
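Both fixes are a few lines of code. Here is a minimal sketch in Python of the two controls described above: a whitelist check for a phone number field, and HTML-encoding of user content before it is written into a page. The function names and the phone-number pattern are hypothetical choices for illustration; in ASP.NET the same encoding step is `HttpUtility.HtmlEncode`.

```python
import html
import re

# Whitelist: digits plus the punctuation a phone number can plausibly contain.
# The exact pattern is an assumption -- tailor it to what the field really needs.
PHONE_RE = re.compile(r"^\+?[0-9 ()-]{6,20}$")

def is_valid_phone(value: str) -> bool:
    """Input validation: reject anything outside the whitelist outright."""
    return bool(PHONE_RE.fullmatch(value))

def render_comment(user_input: str) -> str:
    """Output encoding: HTML-encode user content before writing it to the page."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```

Note the two controls are complementary: the whitelist stops dangerous characters ever entering fields that don't need them, and the encoding makes whatever does get through harmless in the page.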