A practical security guide for web developers (github.com/fallibleinc)
894 points by zianwar on July 21, 2016 | 68 comments



Had this SO link saved since probably soon after it was asked 7 years ago. Still relevant and still being updated.

http://stackoverflow.com/questions/549/the-definitive-guide-...

Includes:

- How to log in

- How to remain logged in

- Managing cookies (including recommended settings)

- SSL/HTTPS encryption

- How to store passwords

- Using secret questions

- Forgotten username/password functionality

- Use of nonces to prevent cross-site request forgeries

..........

And much much more.


Thank you for sharing! I'd just like to add my fav resource on webdev security: "OWASP Developer Guide Reboot".

https://github.com/OWASP/DevGuide

It's the updated version of their classic web security guide. All of the updates happen on Github in the open and they also accept patches. Chapters 3 and up are really great.


Shameless plug: this isn't a secure development practice as such, but it is good security practice to scan your application regularly. For that you should run tools like "brakeman" for Ruby on Rails, for example, but you should also run dynamic tests using a free service like https://gauntlet.io -- and you should scan regularly, because scans get updated and may find new bugs. That's a good practice.


The quote at the beginning of the SO question you linked to, one of the top-25 questions of all time, is so ironic given the current state of SO:

> We believe that Stack Overflow should not just be a resource for very specific technical questions, but also for general guidelines on how to solve variations on common problems.

SO is a great website, but I wonder how much more it could've been if this sort of page were allowed now.


I know what you're saying, but they're coming out with Stack Overflow Documentation: http://stackoverflow.com/tour/documentation.

Not sure whether this will be the solution you're looking for, but if you give feedback, maybe it will happen sometime.


HIPAA compliance is a good checklist even if you don't need the certification. It covers the administrative and physical safeguards in addition to the technical ones.

Check it out: http://www.hhs.gov/hipaa/for-professionals/security/laws-reg...


I'd really like to see something like this for mobile apps as opposed to form-based web apps.


Efforts like this are very good.

But one of the most serious problems with web development is how few frameworks ship with most of these sane answers out of the box (edit: or ship them at the right level of abstraction).

When we all need to copy-paste some best-practice way of how to Argon2 a password and how to constant-time equality check a hash, we've already lost, in that we're reimplementing these sane answers every time from the weeds.
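
For the record, that copy-pasted boilerplate usually ends up looking something like the sketch below (Python, assuming the third-party argon2-cffi package; hmac.compare_digest is the stdlib constant-time check):

  # Rough sketch only; assumes the third-party argon2-cffi package is installed.
  import hmac
  from argon2 import PasswordHasher
  from argon2.exceptions import VerifyMismatchError

  ph = PasswordHasher()  # library picks sensible Argon2 cost parameters

  def store_password(plaintext: str) -> str:
      # Salt and parameters are embedded in the returned hash string.
      return ph.hash(plaintext)

  def check_password(stored_hash: str, attempt: str) -> bool:
      try:
          return ph.verify(stored_hash, attempt)
      except VerifyMismatchError:
          return False

  def check_token(expected: str, provided: str) -> bool:
      # Constant-time equality check for opaque tokens.
      return hmac.compare_digest(expected, provided)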

I want to see more things like Django's automatic password hash upgrading [1].

Specifically, checklists like this effort's should be for people who develop frameworks, and not people who develop custom apps with them. With some things like CSRF protection, we're already there, but with so many other things, we're not.

[1] https://docs.djangoproject.com/en/1.9/topics/auth/passwords/...


While I appreciate the kind words for Django, there are plenty of libraries and frameworks out there which are doing the basics of web app security out of the box these days, and my recommendation is to find one you like and use it.

The big obstacle there is people who insist on writing their own versions of everything rather than using off-the-shelf components. I know it's a point of pride for some folks not to "rely" on third-party code, or to post long rants from someone's experience with a late-90's-era framework as evidence that they're all awful, but getting a bunch of best practices done for you for free is the advantage they offer, and given how hard it is to cover even the basic security bases, that's an advantage I think people increasingly can't afford to give up.


Security Engineering: A Guide to Building Dependable Distributed Systems by Ross Anderson is available online for reading -- http://www.cl.cam.ac.uk/~rja14/book.html

A couple of other resources:

- 7 Security Measures to Protect Your Servers [0]

- SSH best practices [1]

If you'd rather not be overwhelmed with documentation, you could start with: My First 5 Minutes On A Server; Or, Essential Security for Linux Servers [2].

[0] https://www.digitalocean.com/community/tutorials/7-security-...

[1] http://www.cl.cam.ac.uk/~rja14/book.html

[2] https://plusbryan.com/my-first-5-minutes-on-a-server-or-esse...


Rather than just "JWT is awesome..." wouldn't it be more sensible and responsible to caveat this with some of the drawbacks?

I read this article recently (http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...) that proposes not to use it for sessions but instead for the use cases listed at the end of the article. Follow-up article here http://cryto.net/~joepie91/blog/2016/06/19/stop-using-jwt-fo...

Also this https://auth0.com/blog/2015/03/31/critical-vulnerabilities-i...


Why not use one key per user? Then I would only have to invalidate that one user and not all of them.


You don't know who the user is until you've verified the integrity of the JWT. Verifying the integrity requires the secret. Your solution adds the dependency: the secret requires the user. It is cyclic, unsolvable without breaking a constraint.

You could assume the username is correct, then get the secret, validate. But that sounds like something breakable.
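
For what it's worth, a rough sketch of that approach with the PyJWT library (my assumption; the thread doesn't name a library) -- read the claim without trusting it, look up that user's secret, then verify for real:

  import jwt  # assumes the PyJWT package

  USER_SECRETS = {"alice": "per-user-secret-for-alice"}  # stand-in for a DB lookup

  def verify_token(token: str) -> dict:
      # Read the claims WITHOUT trusting them, only to find the user id.
      unverified = jwt.decode(token, options={"verify_signature": False})
      secret = USER_SECRETS[unverified["sub"]]  # untrusted hint -> key lookup
      # Now verify the signature for real using that user's secret.
      return jwt.decode(token, secret, algorithms=["HS256"])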


A great book is The Web Application Hacker's Handbook: Finding and Exploiting Security Flaws, 2nd Edition [0]. By learning how hackers search for and exploit various web issues, you'll naturally become aware of how to defend against them. i.e. start thinking like a hacker and you'll be amazed at the issues you discover in your applications.

[0] http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118026470...


Agreed, still the best text out there for web application security.


There's a whole section on input sanitization but nothing on escaping output.

If you're on the hook to sanitize all inputs, doesn't that mean you're not escaping output?

The biggest security mistake I've made so far in production was that one time I used an HTML templating library that didn't escape output by default.


I've written an HTML templating system where the common security issues are simply not allowed: https://github.com/haplo-org/haplo-safe-view-templates

I'm pretty sure it covers all the HTML generation issues described in https://www.nostarch.com/tangledweb.htm

We've been using it very happily for a good six months, and it's worked out well.


Can you clarify? What exactly do you mean by escaping output, and how can forgetting that cause a security issue?


Example HTML output in a user's profile:

Would you like to contact ${NAME}?

Where ${NAME} is a user supplied parameter (you ask them what their name is)

Let's say I entered my name as: <script>/* evil code */</script>

Now, if the output isn't escaped the page reads:

Would you like to contact <script>/* evil code */</script>?

You've just injected evil code into the website that will be executed every time my user profile page is visited by another user.
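
In Python, for example, the difference is just one call to the stdlib escaper:

  import html

  name = "<script>/* evil code */</script>"  # attacker-controlled input

  unsafe = f"Would you like to contact {name}?"             # script executes
  safe = f"Would you like to contact {html.escape(name)}?"  # rendered as text
  print(safe)  # Would you like to contact &lt;script&gt;/* evil code */&lt;/script&gt;?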


But that script tag would be taken care of in the input sanitation step. You normally remove all hints of HTML tags on input sanitation, which renders output sanitation a moot point.


Unless your application has a static mapping of input -> output that never changes, you can't properly sanitize input for all potential output contexts. The string ';alert(1) is perfectly safe to drop in between HTML tags, but can be very dangerous in JavaScript (though only if it ends up inside a single-quoted string).

You can try to filter for anything that may be potentially dangerous, but that's going to make a very long list of invalid inputs and once again you're playing whack-a-mole, hoping you correctly sanitize your input for all potential output contexts (unless you go through and re-sanitize all your user data whenever you add a new output context, which is a bit absurd).

From a programming perspective, it's akin to a function not checking that the input it has received is valid (because the caller is always going to do that...).


>>You normally remove all hints of HTML tags on input sanitation

Then what happens when you want to use that input in an Excel export? PDF export? CSV file? Text file? How about if you want to use it in an HTML attribute? In a URL? Export the database elsewhere? (Such as a credit card company reporting to the CSAs.) You can't assume that your data is always going to sit inside an HTML page between tags, because that mucks up your data. Data should be usable in many different ways, because it will be, and it should not be tied to HTML.


Ok, this is the comment that best explained it to me -- you want to escape (sanitize, whatever) output because, even if you sanitize all HTML/CSS/JS on input, the data might still contain malicious Excel scripts or PDF exploits, etc., that eventually do get executed in an output context.


User-provided data in any transport format (e.g. HTML) needs to be properly escaped so it can't use special characters to be processed as code or metadata rather than just plain data. Bugs in escaping cause pretty much every kind of injection attack, from SQL to XSS. The fundamental problem is not escaping data correctly for the context in which it is used.

If you want to see specific examples, search for how to avoid XSS attacks. Any decent guide will focus on escaping, not sanitisation.


Right, but you seem to be talking about input -- the question was about output.

I did get my answer, though -- output matters because it might be a different medium than input. So even if you sanitize HTML input, malicious VBA code could make it through that eventually ends up in an Excel report and gets executed (for example).


And the bigger issue with only sanitizing user inputs without also escaping output is that different escaping methods are required for different output contexts (HTML content, HTML attributes, URL params, JavaScript variables, etc).
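
A rough illustration with Python's stdlib (each context needs its own escaper; a real template engine adds more edge-case handling):

  import html, json, urllib.parse

  value = 'Bob"; alert(1); //'  # attacker-controlled input

  html_body = f"<p>Hello {html.escape(value)}</p>"                 # HTML content
  html_attr = f'<input value="{html.escape(value, quote=True)}">'  # HTML attribute
  url_param = f"/search?q={urllib.parse.quote(value)}"             # URL parameter
  js_string = f"<script>var name = {json.dumps(value)};</script>"  # JS string literal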


Use static source code analysis and dynamic web app scanners.

They are easy to integrate into your SDLC; they are not going to replace manual testing or secure development practices, but they'll help a lot. They'll pick up tons of stuff for free, and they'll remind you of best practices.

I have a startup (at least it still feels like a startup!) and we are developing a web application security scanner called Netsparker [0]. While testing it, it found over 100 zero-days in open source applications [1], including vulnerabilities in very popular applications such as WordPress and Joomla. I guess that by itself proves how good scanning can be.

If you want to try it on your websites and see it for yourself drop an email / message to contact@netsparker.com with a mention of HN and I'll get you a fully functional trial that you can use on your own websites.

[0] Netsparker Cloud https://www.netsparker.com/online-web-application-security-s... - Netsparker Desktop https://www.netsparker.com/web-vulnerability-scanner/

[1] https://www.netsparker.com/web-applications-advisories/


How does this compare with Trend Micro Deep Security for Web Apps? We're using it now but it does not seem effective - not catching much in its scans. So we're looking at alternatives...


I've never used it or seen it in a benchmark, so I can't comment much on its capability or quality. I think it's best if you compare it for yourself. Running a scan is very easy and won't take much of your time; drop me an email and I'll get you a license or an account on our cloud solution, so you can test it and see how it compares. Email: contact@netsparker.com


Hey, I wanted to let you know I thought the slogan on your website was a bit confusing at first: "False Positive Free Web Application Security Scanner". I grokked it as "False Positive, free web application security scanner".


Thanks for the feedback. We are actually renewing our website and changing that slogan.

This will be the new message, "Proof Based Scanning", and we tried to explain it here: https://www.netsparker.com/blog/docs-and-faqs/proof-based-we...


The problem with checklists, including this one, is that we tend to limit ourselves to what's in the list. Furthermore, the list doesn't explain 'why' you should do things. They help, but nothing is a replacement for education. And when it comes to education, there's a decent write-up I did that is still accessed on a daily basis [0]. I also recommend you check OWASP [1] and read their "Testing Guide" to learn about many attacks and defenses.

[0] Security for building modern web apps https://dadario.com.br/security-for-building-modern-web-apps...

[1] https://www.owasp.org


Well, at least it's a good starting point.


Where is the actual guide? Is this the TOC for a book? It looks good, but I don't see the actual content, just a checklist and a table of contents.


Exactly! Where are the docs for all the sections? This seems like terribly important stuff, but it looks like a 'teaser' for a book? If not, where do we get the information for each section?


It's mostly a security checklist (as of now): https://github.com/FallibleInc/security-guide-for-developers...


This is one of the best examples of how Github nails collaborative document development I've seen.

It is striking how much valuable information is retained in negotiating the material here, vs. email arguments with Word documents and embedded content, where the app-separation of submissions makes it too difficult to consume.


> Store password hashes using Bcrypt (no salt necessary - Bcrypt does it for you).

In PHP, I would rather recommend using password_hash() with its own defaults, since it's built in and designed specifically for this purpose - and quite future-proof. But this is PHP-specific.
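
For comparison, outside PHP, the third-party bcrypt package for Python (my assumption; the guide just says "Bcrypt") is similarly hands-off about salting:

  import bcrypt  # assumes the third-party bcrypt package

  password = b"correct horse battery staple"

  # gensalt() picks a random salt and work factor; both are embedded in the hash.
  hashed = bcrypt.hashpw(password, bcrypt.gensalt())

  # checkpw re-derives the hash from the stored salt/cost and compares safely.
  assert bcrypt.checkpw(password, hashed)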

> [] Destroy all active sessions on reset password (or offer to).

> ...

> [] Destroy the logged in user's session everywhere after successful reset of password.

I believe these are the same. The second one is clearer though.

Edit: clarified


I think the first one is saying "destroy active sessions when a user attempts to change their password" and the second one is saying "destroy active sessions when the user succeeds in changing their password."


My interpretation is: "when a user changes their password, offer to destroy all active sessions"


* Don't let HTTP GET requests modify state, ever. It's very difficult to prevent CSRF via HTTP GET.

* Session keys are password-equivalents. Hash them with bcrypt or something before you store them.

* httponly is not incredibly useful. If the attacker can run JavaScript on your page, you're in trouble.


> Don't let HTTP GET requests modify state, ever. It's very difficult to prevent CSRF via HTTP GET.

Isn’t it exactly as difficult as any other method?

> Session keys are password-equivalents. Hash them with bcrypt or something before you store them.

bcrypt is overkill, especially for something that has to be checked on every request; just use any SHA-2 (non-iterated). A session key should be more than long enough to resist brute force.
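
For illustration, a minimal stdlib sketch of that approach -- hand the client the random token, store only its digest:

  import hashlib, secrets

  token = secrets.token_urlsafe(32)                    # sent to the client
  stored = hashlib.sha256(token.encode()).hexdigest()  # kept server-side

  def lookup_key(presented: str) -> str:
      # On each request, hash the presented token and look this digest up.
      return hashlib.sha256(presented.encode()).hexdigest()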


> httponly is not incredibly useful. If the attacker can run JavaScript on your page, you're in trouble.

Gotta disagree with that. Defense in depth is always a good idea. Don't ignore a simple security win just because it isn't necessary if all of your other security measures are working. Security is needed the most when some of your security measures have already failed.


This reminds me of:

"The Basics of Web Application Security" (Cade Cairns, Daniel Somerfield)

http://martinfowler.com/articles/web-security-basics.html

It's an ongoing evolving publication at Fowler's website.


"Check for no/default passwords for databases especially MongoDB & Redis. BTW MongoDB sucks, avoid it."

Come on, you're better than this. What the fuck.


The following PDF focuses on just one specific aspect of security: cryptography, but deserves a mention nonetheless. Configuring various services such that insecure mechanisms are not used is not exactly a trivial task.

https://bettercrypto.org/static/applied-crypto-hardening.pdf

Edit: GitHub repo at https://github.com/BetterCrypto/Applied-Crypto-Hardening


Please excuse me if this comes across as anything other than constructive criticism, but I don't believe checklists should be used to guide web developers to build secure software.

My reason for this belief is that, in my experience, it engenders tunnel vision and what I appropriately refer to as a "checklist mentality". There are developers who believe, "We're immune to the items on the OWASP Top 10, so we're secure," when there are entire classes of vulnerabilities that applications can be exposed to (say: using a weak and predictable PRNG for their password reset tokens) that aren't adequately described by the OWASP Top 10.

An alternative approach that I feel is more helpful is to organize insecurity into a taxonomy.

  * Code/data confusion
    * SQL Injection
    * Local/Remote File Inclusion
    * Cross-Site Scripting (XSS)
    * LDAP Injection
    * XPath Injection
    * Several memory corruption vulnerabilities
  * Logic Errors
    * Confused deputies
    * CSRF
    * Failure to enforce access controls
  * Operating Environment
    * Using software with known vulnerabilities
    * Using HTTP instead of HTTPS
  * Cryptography flaws
    * Yes, this deserves a category of its own
    * Chosen-plaintext attacks
    * Chosen-ciphertext attacks
    * Side-channel cryptanalysis
    * Hash collision vulnerabilities (e.g. length-extension)
    * Weak/predictable random values
You can further break down into more specific instances.

There are three types of XSS (stored, reflected, DOM-based). There are blind SQL injection techniques worth studying too. But the underlying problem that makes these vulnerabilities possible is simple: user-provided data is being treated as code. Any technology that prevents this confusion will greatly improve security.

For example: SQL injection is neutered by using prepared statements. You might one day forget to manually escape a single input (and it only takes one to be game over), but if user data is always passed separately from the query string (i.e. you never concatenate), there's no opportunity to make this mistake. There were also corner-case escaping bypass attacks (usually involving Unicode) that you might not even realize you're vulnerable to. With prepared statements, these clever multibyte character tricks accomplish nothing. The query string is already in the database server before your user's parameters are sent.
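
A minimal sqlite3 illustration of the point:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT)")
  conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

  user_input = "alice' OR '1'='1"  # classic injection attempt

  # Parameterized query: the input is bound as data, never parsed as SQL.
  rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
  print(rows)  # [] -- the injection accomplishes nothing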

I believe teaching developers to think in terms of taxonomy (very general to very specific) will result in a greater understanding of software security and reduce the incidence of vulnerable code.

I've written about this before, in case anyone wants to link to something besides an HN comment: https://paragonie.com/blog/2015/08/gentle-introduction-appli...

---

EDIT: Opened an issue: https://github.com/FallibleInc/security-guide-for-developers...


Well said. Security is a continuous process, not a product or a checklist. If security isn't built in at all levels, the developers are creating more attack surface instead.

The categories in your list are important, but I suggest adding what I consider the most important idea in security: stop designing features in a way that requires an enumeration of badness[1], aka default permit is always a bad idea. Checking for known problems inherently skips anything new.

Instead, anything that arrives over the net or other hostile input needs to be validated with a formal recognizer. For a good explanation of this approach, see Meredith and Sergey's 28c3 talk, "The Science of Insecurity"[2].

[1] http://www.ranum.com/security/computer_security/editorials/d...

[2] https://media.ccc.de/v/28c3-4763-en-the_science_of_insecurit...


What you're referring to is the difference between "quality checking", which is the act of checking that a product meets acceptance criteria after you've finished building it, and "quality assurance", which is the act of putting in place processes, used throughout the build, to guarantee a product is of sufficient quality. A lot of people believe they're doing QA when really they're only doing QC. Both are important.


Interesting point. We make the same distinction in high-security INFOSEC, where one must differentiate between adding security "features" and "assurance". The features are things like trusted boot, kernel/user separation, crypto protocols, and so on. The assurance... often missing... comprises the activities that ensure those features are built securely and provide evidence of that to third parties.

Common Criteria's EALs provide an interesting set of assurance activities with descriptions of the increments:

https://web.archive.org/web/20130718103347/http://cygnacom.c...


The guide seems to have reasonable technical measures. I would like to see more discussion of risk, both in terms of what is being protected, and of who might be trying to attack. For example, you might wish to be more careful when developing a bitcoin wallet than when tracking baseball scores.

Shameless plug: I've been working on a somewhat less practical guide to software development security practices [1]. Even more shameless plug: I'm currently running a survey of security practice use in software development [2], and would welcome participants who work on open source projects.

[1] http://pjmorris.github.io/Security-Practices-Evaluation-Fram...

[2] https://ncsu.qualtrics.com//SE/?SID=SV_1HdQOa2lfX57vkF


The first thing that jumped out is:

Store password hashes using Bcrypt (no salt necessary - Bcrypt does it for you)

A better recommendation would be to store passwords with a password-based key derivation function (recommendation: scrypt or bcrypt).

I don't want to start the whole scrypt vs bcrypt, GPU vs FPGA debate here (I'm not qualified, and we keep repeating that conversation every time the "vs" is on the table).

-

When parsing Signup/Login input, sanitize for javascript://, data://, CRLF characters.

I'm not sure why this applies only to "signup / login" input.

-

Serially iterable resource id should be avoided. Use /me/orders instead of /user/37153/orders. This acts as a sanity check in case you forgot to check for authorization token.

I had to think about this one twice. A stronger argument in favor of /me/orders over /user/37153/orders is that it avoids enumeration attacks.
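
A complementary mitigation (my sketch, not the guide's) is to make identifiers non-guessable in the first place:

  import secrets, uuid

  # Instead of sequential ids like 37153, hand out identifiers that
  # can't be enumerated by incrementing a counter.
  order_id = secrets.token_urlsafe(16)  # e.g. 'Q4x9...'
  alt_id = str(uuid.uuid4())            # e.g. '0b9e...-...'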

-

Any upload feature should sanitize the filename provided by the user.

I like this tip very much, but if the requirement is to keep the filename (think Dropbox), you should sanitize the filename before storing it in the database.
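
Something along these lines (a stdlib-only sketch; real apps often reach for a library helper instead):

  import os
  import re

  def sanitize_filename(user_supplied: str) -> str:
      # Drop any directory components ("../../etc/passwd" -> "passwd") ...
      name = os.path.basename(user_supplied.replace("\\", "/"))
      # ... then keep only a conservative character set.
      name = re.sub(r"[^A-Za-z0-9._-]", "_", name)
      return name or "unnamed"

  print(sanitize_filename("../../etc/passwd"))       # passwd
  print(sanitize_filename("my report (final).pdf"))  # my_report__final_.pdf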

-

Add CSRF header to prevent cross site request forgery.

I don't believe there is a standard header for this. I had to look up "csrf header"; correct me if I am wrong, but I think this is framework-specific (if and only if the framework supports it). A better recommendation would be to enable CSRF protection and consult the documentation of the framework you use. Most modern frameworks have CSRF protection built in (but implementations of CSRF protection vary!).
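
The underlying synchronizer-token idea is simple, though; a stdlib-only sketch (frameworks add the plumbing around it):

  import hmac, secrets

  def issue_csrf_token(session: dict) -> str:
      # Store a random token in the user's session and emit it in forms
      # or a custom request header.
      session["csrf"] = secrets.token_urlsafe(32)
      return session["csrf"]

  def check_csrf(session: dict, submitted: str) -> bool:
      # On every state-changing request, compare in constant time.
      expected = session.get("csrf", "")
      return bool(expected) and hmac.compare_digest(expected, submitted)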

-

Add HSTS header to prevent SSL stripping attack.

Simply put, after the user has visited a site with both HTTPS and the HSTS header present, a user agent like Firefox will honor the header and always attempt to load resources (HTML, JS, CSS) over HTTPS, up to the max-age declared in the header. The caveat is that the user must have visited the HTTPS site first. To actually implement HSTS 100%, you should always redirect users (301) from HTTP to HTTPS.
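
For example, in a Flask app (my sketch, assuming Flask; in production the redirect often lives at the load balancer instead) the two pieces look like this:

  from flask import Flask, redirect, request

  app = Flask(__name__)

  @app.before_request
  def force_https():
      # Redirect plain HTTP so the browser gets a chance to see the HSTS header over HTTPS.
      if not request.is_secure:
          return redirect(request.url.replace("http://", "https://", 1), code=301)

  @app.after_request
  def add_hsts(response):
      # Once seen over HTTPS, the browser refuses plain HTTP until max-age expires.
      response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
      return response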

-

Use random CSRF tokens and expose business logic APIs as HTTP POST requests.

This needs clarification on what "business logic" means, and why POST only? What about PUT and PATCH, which also allow a body to be used? GET I kind of get.

-

If you are small and inexperienced, evaluate using AWS elasticbeanstalk or a PaaS to run your code.

Again, the caveat is that you still have to do everything right. PaaS and IaaS shield you from some common mistakes, but not all. You can have remote code execution on an EC2 instance whose instance profile has full access to the entire VPC, where the payload removes every instance except the one it's running on. Perfect.

-

Use a decent provisioning script to create VMs in the cloud.

I have to be a little picky... don't say "decent", it's too ambiguous. Did you mean don't reinvent the wheel, or did you mean have a solid engineering process (code review, testing), treating infrastructure automation as software engineering as opposed to an ad-hoc scripting shop?

-

Check for no/default passwords for databases especially MongoDB & Redis. BTW MongoDB sucks, avoid it.

I get it. You own the article; you can write whatever you want. Professionally, though, if you want someone to take this seriously, please don't say that. I have seen people run Oracle just as well as PostgreSQL. I have heard of companies running Apache as successfully as Nginx. I have heard horror stories about Cassandra and success stories about Cassandra. MongoDB has a few old mistakes, like defaulting to listen on 0.0.0.0 (I heard that's fixed by now?). BTW, I have used and managed MongoDB; I know some of the pains, but half of them came from not knowing what the hell I was doing.

-

Modify server config to use TLS 1.2 for HTTPS and disable all other schemes. (The tradeoff is good)

The right tradeoff is to use data and figure out whether or not you need to support legacy systems like those stuck on XP. It may not be critical, but there are companies that do. Use data before making a decision like this.

-

Four other thoughts.

1. Sanitization of inputs - context matters. The same sanitization technique for HTML doesn't work for XML. That, to me, is one of the most complicated parts of securing an application. I am not surprised XSS is still #1 (or at least top 3).

2. Run code in a sandbox. Not necessarily Docker or a container, but chroot and restrict the application's access to available system resources. That's very important.

3. Always use a reputable framework. As a young adult I love inventing shit, but whatever you invent for your work you are now responsible for, and the next person picking up your work after you leave is also responsible. So think twice. I am not picking on Node developers, because I have seen Python developers do the same thing: before you import a random package that does a few things, look at the standard library. Sometimes maintaining 100 lines of code yourself is safer than doing it in two lines after importing code written by a random open source enthusiast that most likely won't be maintained a few years from now.

4. Always upgrade your framework, the toolsets you use, the database server you use, etc.

I also think every framework should publish security best practices, like https://docs.djangoproject.com/en/1.9/topics/security/, and in even more detail. Security is one of those things I wish I had more time to experiment with and address. I am no longer active in that space of automation, sadly, but from time to time I wonder whether the fault is really with development practices and developers. Can we make everyone's life easier with strong framework standards? Are we not making tools available? With so many formats being invented every year, we need to ask whether our security flaws are a result of our own creativity. Unfortunately, we can only hope that we continue to improve the security of our frameworks and continue to add strong defaults.

Also think about security testing. The low-hanging fruit, like detecting the presence of certain security headers, is trivial, but fuzzing and getting really good results about vulnerabilities within an app is extremely custom AND extremely hard to do (so many states, so little knowledge). You've got very expensive consultants on one end, and very inexpensive but very general-purpose security testing tools on the other that may not do much beyond exposing common mistakes. One thought would be sampling and either repeating or mimicking user traffic and running simulations. Perhaps some machine learning stuff could help - not sure.


Add this to the github repo. It's more useful feedback this way. https://github.com/FallibleInc/security-guide-for-developers...


If you, as a developer, jump ship to the new shiny without understanding its security story, then it is your fault, regardless of what promises were levied by your new tool.

That doesn't mean that bad defaults are fine; as part of the community we should all be fixing these. But knowing about these things is part of the job, and you should not absolve the framework jumpers of their responsibilities.

More importantly, companies should pick their technical leaders better. That they don't is often why they end up in a mess.


This looks like a great candidate for stack overflow documentation.


At first, I didn't know what you meant. Now I know: http://blog.stackoverflow.com/2016/07/introducing-stack-over...


I thought this line in the checklist was rather interesting:

> Check for no/default passwords for databases especially MongoDB & Redis. BTW MongoDB sucks, avoid it.


I saw this cool site a while ago: https://www.hacksplaining.com/lessons

It explains basic vulnerabilities in a very simple way and offers specific ways of avoiding them in different languages.


I didn't see any mention of how to securely store the session id, only references to session data. It should be noted that this information needs to be securely stored both on the client and on the server.


One thing I totally disagree with: "Set secure, httpOnly cookies."

That is just security theater. It's worse than useless because it makes you think you're more secure, when you haven't prevented attacks at all.


This is not security theatre.

Secure => Attacker can't simply inject an img to a non-https version of the site and then intercept the cookie sent by the browser, therefore stealing the session.

HTTP-only => An XSS attack can't steal the session cookie. You're still in big shit, but it's much harder to persist the attack beyond the user closing the browser window.
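
Both flags are one-liners to set, e.g. in Flask (an assumption on my part, using a recent Flask/Werkzeug; a sketch, not from the guide):

  from flask import Flask, make_response

  app = Flask(__name__)

  @app.route("/login")
  def login():
      resp = make_response("logged in")
      # secure: only ever sent over HTTPS; httponly: not readable from JavaScript.
      resp.set_cookie("session", "opaque-session-token",
                      secure=True, httponly=True, samesite="Lax")
      return resp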


It is security theater. Real attackers don't sit there waiting for a cookie to arrive so they can start to craft their malicious authenticated requests. If they can write a script to steal your cookie, they can also write a script to execute the actual requests right there in your session. And httpOnly cookies do not prevent that at all. Security theater is worse than nothing because you think you're secure from XSS, when you should know better.


The "Secure" flag has nothing to do with protecting against XSS. It protects against anyone who can proxy your requests (i.e. when you connect to dodgy wifi) being able to steal your session simply by injecting <img src="http://yoursite.com"> into any non-secure web page your request. That is a massive security hole and not theatre in the slightest.

The HTTP-only flag is less important but still useful. As I said, you're still in deep shit because of course they can make actual requests and if you think it's all you need to protect you then that's a problem. But there's still a difference between an attack that can be persisted, and one that gets interrupted as soon as the user navigates away from the page.


I'm talking only about the httpOnly flag. It's worse than useless because it makes you think that you dodged some kind of bullet, when in fact that same class of attack can still happen: the attacker needs to craft a script to send the cookie to their server, so they might as well have that script execute the actual requests in the context of your authorized session, the same way they'd do it if they had your cookie. They would write the script ahead of time. No real attacker would sit there at 3 AM making requests because their victim finally activated a script that sent them the cookie. Far more likely, that script is already pre-programmed to do what it needs to do. And if httpOnly cookies didn't exist, people would care more about sanitizing their Javascript output to prevent XSS, which is the only correct way to prevent this attack.


For what it's worth, I agree with the criticisms of HttpOnly, but I still recommend people set it as one of several redundancy measures.

The approach I take these days (for example: when I gave a three-hour tutorial on web app security at DjangoCon this week) is basically to provide combinations of basic mitigations and then more complex/advanced ones, and push people do at least some combination of them, even if they can't do all of them. So to take XSS/CSRF, for example: Django's template system autoescapes variable output by default, and there's a CSRF protection mechanism on by default. Those are the basic easy things, because the advice is just "they're on already, don't turn them off", and just those simple measures will deal with a lot of nastiness for you.

From there I can talk about cookie options, or CSP, or that kinda-sorta-works header that turns on reflection detection in a few browsers, and the pros and cons of each, and recommend people use combinations of them. CSP is my gold-standard recommendation nowadays, and I downplay HttpOnly as not super great, but I'm not going to complain if somebody just watches my talk and does all the things mentioned.


My problem with this is another howler for security:

Creating something that already exists.

Although OWASP is not legally mandated, they are the most respected go-to people for this kind of stuff and have much more exposure than your "guide" ever will; their material also gets a much greater level of review and scrutiny. So instead of trying to help by adding to the web's noise level, and possibly making your own mistakes/omissions (some of which are mentioned below), why not get engaged with the existing community and improve its quality where needed?


ISTM I've seen some rather trenchant criticism of OWASP's lists in the past. Maybe they've improved, but are they really a "respected go-to"?


As a basic starting point OWASP's top-ten list is fine. I use it when doing intro web-security sessions as a structured way to start people thinking about the things that can go wrong, and I like it for that purpose because some of its items are vague enough to allow good open-ended discussions that take people out of the "just check these boxes" mindset and into full-blown paranoia.

I typically follow it up with a rundown of less-obvious things drawn from my experiences with Django, to point out that even when you cover the OWASP checklist-y stuff you can still very easily have major issues.



