The alarming state of secure coding neglect (oreilly.com)
169 points by jgrahamc on May 3, 2017 | 103 comments



The survey polled 430 "mostly everyday programmers". Unfortunately, everyday programmers mostly know very little about security.

Developers tend to think of security as being about avoiding coding mistakes, and that's reflected in their idea that security is about pen testing, code review, tools, etc. Any security professional will tell you that these are valuable but only a small part of the big picture. Take a look at Microsoft's SDL (Security Development Lifecycle) for a wider view of what it takes to weave security into every aspect of software development. [1]

Probably the single most valuable thing most development organizations could do to improve security of applications is to do threat modeling[2][3]. It's especially valuable in the early stages of application design, but it can be applied at any time. Threat modeling can increase awareness of how an application's security assumptions interact with its overall architecture. Thinking through your application's threat model systematically is the first step to prioritizing mitigations.
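
To make that concrete, here is a minimal sketch (in Python, with made-up assets and scores) of what a first pass at a threat model can look like: enumerate threats, score them, and sort so mitigation effort goes where the risk is.

  # Minimal sketch of a first-pass threat model; the assets, threats and
  # scores below are invented for illustration.
  threats = [
      # (asset, threat, likelihood 1-5, impact 1-5, candidate mitigation)
      ("session tokens", "theft via XSS", 4, 4, "HttpOnly cookies, CSP"),
      ("user database", "SQL injection", 3, 5, "parameterized queries"),
      ("admin panel", "credential stuffing", 4, 3, "2FA, rate limiting"),
      ("backups", "unencrypted dump exposed", 2, 5, "encrypt at rest"),
  ]
  # Crude risk score = likelihood * impact; real methodologies (STRIDE etc.)
  # are richer, but the point is making assumptions explicit and ordered.
  for asset, threat, likelihood, impact, mitigation in sorted(
          threats, key=lambda t: t[2] * t[3], reverse=True):
      print(f"{likelihood * impact:>2}  {asset}: {threat} -> {mitigation}")

Even a table this small forces the "what are we actually worried about?" conversation.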

Unfortunately, this is voodoo to most developers even though it really should be an intrinsic part of designing application architecture. I've heard people say there's a mental block because the kind of thinking required for security is almost the opposite of that required to design and construct systems. I don't believe that though. I think it's mostly a matter of training and historical accident that security is even a separate discipline. It shouldn't be.

[1] https://www.microsoft.com/en-us/sdl/

[2] https://msdn.microsoft.com/en-us/library/ff648644.aspx

[3] https://www.owasp.org/index.php/Application_Threat_Modeling


There is a rich history of computer hacking that this community and others seem to have forsaken -- an entire generation of people who grew up with exactly that creative/destructive mindset. Unfortunately that ethos died when computer hacking became tantamount to terrorism in the government's eyes. The industry has done this to itself, because we scream bloody murder every time there's a security breach.


There are deep historic and cultural reasons for this approach. Homes and businesses are generally not secure because the doors are locked; they are secure because the people around them don't try to break in, or even check whether the door is locked. In cities where that changes you see what looks like more security, but it has surprisingly little impact, as it mostly convinces people to break in somewhere else.

What changed in computing is that the internet is the world's largest 'city' by a huge margin, and people can mostly automate checking to see not just if the door is locked but if the lock is of poor quality. Clearly in that situation laws are going to have limited value, but because they have been so successful in the past it's really hard to get out of that mindset.

PS: Sure, there is crime, but compared to say 20,000 years ago the odds someone kills you and takes your stuff next year are tiny.


Good comparison with locks, because lockpicking discussions look much the same: people argue that pin tumbler locks are bad and the whole lock industry is bad because it should offer better options and throw pin tumblers away. But most people with only basic locks never get robbed. And in practice thieves aren't picking locks anyway; they're smashing doors or prying them open with a crowbar.

I think it is a good idea for hacking to be viewed as a serious offense instead of fun and games. Of course you can do it on your own servers for fun, but do not touch what is not yours.


That is effectively the law right now (at least in the US). But between a media circus of demonization (which frequently arises in cases like these), the government's demonstrated aggressiveness and zeal in pursuit of hacking charges (see Aaron Swartz), and public ignorance and fear, it is difficult to receive a truly fair trial.


> And in practice thieves aren't picking locks anyway; they're smashing doors or prying them open with a crowbar.

I think people have an implicit threat-model that their home/business would be robbed by a cat burglar, rather than a robber. Because, if they, as a regular person, wanted to steal stuff, cat burglary is what they'd do, because it makes it a lot easier to get away with.

People don't realize, of course, that thieves are usually people rather desperately in need of money—people who need a short-term solution to an urgent problem—and so not only don't care as much about the long-term risk of their actions, but really don't have time to "case the joint" or come up with a stealthy solution.


> people can mostly automate checking to see not just if the door is locked but if the lock is of poor quality

From across the world. Lack of proximity overturns the effectiveness of millennia of social norms.


I completely agree. It's strange to me that people are so scared of 'all of the hackers'. It's not like everyone with a black belt in karate runs around beating up everyone they see. Personally, everyone I know who has a deeper understanding of computer security is so caught up in their curiosity and in getting 'that next trick' (more like skateboarders) that they have neither the inclination, the time, nor the tolerance for the risk of prison time (it would interfere with their research) to plot and execute the type of stuff people are so worried about.


You might like this talk on historical computer viruses and the shift from hobby to business in the 2000s. [0]

[0] https://www.youtube.com/watch?v=yswPIwDFYDY


It's true that there are lots of white-hat hackers, but there are also lots of black-hat ones, many of them, I gather, associated with criminal organizations -- and as Retric points out, the Internet allows attacks to come from anywhere on the planet. I don't think it's responsible to suggest that people are unnecessarily worried about the problem.

(Full disclosure: I work in the computer security industry.)


The reason for this is really quite simple: everyday programmers should not be concerned with security; that's systems-level programming, not application-level programming.

But because the interfaces and protocols used for the creation of web services involve remote access and hostile input, suddenly you get an army of application programmers doing systems-level work.

So, to further your point: historically it was a separate level, the web has squashed systems programmers and applications programmers into the same layer of the sandwich.


It's also squashed all the applications together in giant sandbox with a few strategically placed separators.


Come on, you should know the basic vulnerabilities and how to avoid them. SQL injection, for example.
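
For instance, the fix is usually one line; a minimal sketch with Python's sqlite3 (any driver with parameterized queries works the same way):

  import sqlite3
  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
  conn.execute("INSERT INTO users VALUES ('alice', 1)")
  name = "' OR '1'='1"  # hostile input
  # Vulnerable: string interpolation lets the input rewrite the query.
  # conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
  # Safe: the driver treats the value purely as data, never as SQL.
  rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
  print(rows)  # [] -- the hostile string matches no user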


You are entirely missing the point.

Of course you should know the basics, in fact, you should know everything otherwise what you build will be insecure. But traditionally the 'systems programmers' took care of those details for you and you could write your application in a wonderful trustworthy world. Until ~1992 hacking into a remote system was remarkably hard because there was far less software and that software had been vetted extensively before it was deployed by people who knew what they were doing.

Now it's a free-for-all where everybody with $5 to spare can spin up a VPS and slap some insecure bunch of webstuff on it or cook it up themselves. That's a completely different situation.


The thing is that in the '90s nobody predicted the Internet. Most systems were built with the assumption that the network was trusted, or that there was no network at all.

Secondly, the consequences of Moore's Law only became visible after a few years. Even MS did not predict the PC boom; now everyone has a few computers. Both in terms of performance and availability of hardware we have seen a massive shift.

It is extremely difficult to add security to a system afterwards. The last Kerberos vulnerability was fixed only about a year ago (and Kerberos has been in use since Windows 2000). Wordpress is still not secure... OpenSSL has something every year.

  > it was deployed by people who knew what they were doing.
It is the opposite. These people had no clue that the systems they were building would be exposed to the internet. And even if they did, it is all written in C on hardware that has very little protection (rowhammer).


I think my point is that SQL injection is at the level application programmers should be worried about.


> I think it's mostly a matter of training and historical accident that security is even a separate discipline.

Puts me in mind of the sort of systems you create working with the military: every line of code not only has to do its job, but has to be hardened against both electronic warfare (e.g. memory corruption from radiation from a maser) and cyberwarfare. It really does feel all of a piece when you get into that mindset.


Yet one more thing developers have to worry about, as if the list wasn't long enough already. The developer world is still full of people creating SQL injection vulnerabilities; I think you may be raising the bar to well beyond what is practical.


This just means that you need to add that script kiddie scenario to your threat model and prioritize it accordingly.


At one place I worked, our commercial (wordpress) site got hacked and defaced by some Turkish outfit. The devs at our company reverted the site, and were joking about the hackers just being script kiddies. They didn't seem to understand my point of "... but we were hacked by those script kiddies, why are we laughing?"


Unfortunately, everyday programmers mostly know very little about security.

If you really want security, it's something that every programmer should be thinking about, at least in the back of their mind, on every line of code they write.


Maybe it actually should be the other way around? Isn't it possible to build frameworks (using relatively popular/easy languages) for the most popular application classes (CRUD web apps, IoT MCUs) that in many cases will isolate the developer from needing to think about security?

And if it is possible, and we already have a few such tools (like, say, Scala Lift or ARM mbed) that somehow haven't yet become popular, why is that?


Many of them already are, but they aren't "sexy". I personally do a lot of .Net, and MVC 5 has relatively good defaults if you just install and go. ASP.NET Core is even better in some regards (CSRF tokens are completely transparent now). I think a lot of the problem is that people want to use a lot of new tech which hasn't had time to develop security as a convenience feature, or they just flat out don't want to use a framework.
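
For anyone curious what those frameworks are automating, the underlying mechanism is roughly this; a hand-rolled sketch in Python (not the ASP.NET implementation, and "session" here is just a per-user server-side dict):

  import hmac
  import secrets
  def issue_csrf_token(session):
      # Make a per-session token, remember it server-side, and embed it
      # as a hidden field in every rendered form.
      token = secrets.token_urlsafe(32)
      session["csrf_token"] = token
      return token
  def verify_csrf_token(session, submitted):
      # On each state-changing request, compare in constant time.
      expected = session.get("csrf_token", "")
      return bool(expected) and hmac.compare_digest(expected, submitted or "")
  session = {}
  token = issue_csrf_token(session)
  print(verify_csrf_token(session, token))     # True
  print(verify_csrf_token(session, "forged"))  # False

The value of a framework doing this transparently is that nobody has to remember to write or call that code on every form.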


If you're writing queries, either through an ORM or by hand, you need to be thinking about what data will be returned to the user. If you're not thinking about it, you'll create a data leak in the best case.
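
One cheap habit (a sketch, not tied to any particular ORM, with made-up field names): serialize responses through an explicit allow-list, so a sensitive column added later doesn't silently reach the client.

  # Only these fields ever leave the server, regardless of what the query
  # or ORM happens to return.
  PUBLIC_FIELDS = ("id", "display_name", "created_at")
  def to_public_dict(row):
      # 'row' can be any mapping-like result (dict, sqlite3.Row, etc.).
      return {field: row[field] for field in PUBLIC_FIELDS}
  row = {"id": 7, "display_name": "alice", "created_at": "2017-05-03",
         "password_hash": "...", "ssn": "..."}
  print(to_public_dict(row))  # password_hash and ssn never reach the client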


Nope, you've got it backwards :) If you really want security, it's something that no programmer should have to think about. Your language/framework/platform/API has to provide it for free. Trying to make every developer a security expert is a laughable proposition. That's my conclusion after 15 years in the security industry.


It's an AND not an XOR. You need both.

Because an API that provides security for free is a laughable proposition.


>If you really want security, it's something that every programmer should be thinking about, at least in the back of their mind, on every line of code they write.

Unless you're doing this with some level of competence, it's probably wrong.

Making line developers competent at security is a nice idea, but you have about 27 other things people have said that developers should be good at alongside of security.


I find most security 'training' to be really contrived. The examples are so trivial as to be useless and easily avoided in real life.

I have had to do several 'security' training sessions. I understand it intellectually, but they don't instill a deep understanding.


Yeah. There's a lot of security information that is easy to understand, but not very accessible. I've been thinking about writing a book on that topic, for that very reason.

Start out with things like, "don't leave your telnet port open" which seems obvious, but apparently is hard for a lot of people. Then from there lead to a reasonable understanding of metasploit.
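
Even the "is telnet still open?" check fits in a few lines; a sketch in Python (only point it at hosts you're allowed to scan):

  import socket
  def port_open(host, port=23, timeout=2.0):
      # True if something answers on the port, e.g. a forgotten telnet daemon.
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False
  print(port_open("192.0.2.10"))  # 192.0.2.x is a documentation-only range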


"security proponents will probably have to demonstrate improvements to the bottom line: less maintenance, improved customer satisfaction, or other measurable incentives to bring everyone on board"

In most fields there is little incentive to change things when the company itself isn't too affected in case of hack. Stocks go down for a week, then back up. General population doesn't seem to care enough to stop using the services, and does not understand enough about privacy/value of their data/need for encryption. So why would a company care ?

If being "ethical" and more devoted to privacy becomes a trend, perhaps there will be a stronger drive to follow security experts' advice. Did Whatsapp get a surge in users after enabling end-to-end encryption?

https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-pr...


>In most fields there is little incentive to change things when the company itself isn't too affected in case of hack

This is correct. As long as the risk isn't too high then companies will just take the risk and accept a hack as the "cost of doing business". Much like Goldman Sachs expects they will get fined by Governments, but they don't care because the money they make far outweighs the fines imposed.


New legislation and fines against businesses form a feedback cycle. They keep going up against repeat offenders until the behavior changes. Inverse exponential backoff under collision with the regulatory body.


Haha awesome, determine fines with a PID-controller


> In most fields there is little incentive to change things when the company itself isn't too affected in case of hack.

This is steadily changing. I've had a few recent engagements where I've been asked to evaluate the security practices of potential partners and suppliers, and in one case the result affected a sourcing decision.

I can see this becoming more common, which could eventually grow into a credit-reporting-like bureau set up to audit security and privacy practices.


The EU will soon have fines for data protection violations. Up to €20m

https://www.out-law.com/en/articles/2016/may/gdpr-potential-...


That fine seems low, preserving the moral hazard, unless it applies to each affected person or some other compounding factor.


€20m buys you a lot of infosec.


If your personal info gets stolen from a company (or government) you entrusted it with, it ends up being your problem, despite any negligence on their part.

Personally, I believe that if it's negligence, there should be compensation - they lost something that belongs to you and is of value. But few people seem to really care. The info of 1 billion yahoo accounts was hacked, but who cares? Until that changes, the problem will continue to exist.


And this is why we see companies scooping up people's data and saving it with no regard to the individuals. We need to move to companies seeing that data as a liability, not an asset.


It should be both an asset and potential liability. There's no doubt that user info is valuable and can be an asset. But if a company is negligent with that info, perhaps they should be liable in some way.


> We need to move to companies seeing that data as a liability, not an asset

This could lead to serious limitations on legitimate future progress, including on anything built using statistical methods that require large training data sets.


We choose to limit medical progress by restricting the experimentation that can be done on humans because we've deemed such protections worthy of the cost to progress.

If protecting personal information limits or slows some progress, so be it. Also it could still be done, just more carefully.


That's certainly a concern given the way things are currently, but I'm hopeful concepts like Google's federated learning [1] will become increasingly popular, and a change to data-as-a-liability would help drive development and adoption of such approaches.

[1] https://research.googleblog.com/2017/04/federated-learning-c...
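
To illustrate the shape of the idea (a toy sketch, not Google's actual API): each client fits a parameter on its own data, and only the fitted parameters cross the network, where they are averaged.

  def local_fit(local_data):
      # Least-squares slope for y ~ w * x, computed entirely on-device.
      num = sum(x * y for x, y in local_data)
      den = sum(x * x for x, y in local_data)
      return num / den
  def federated_average(per_client_data):
      # Only the per-client parameters are sent to the server, never raw data.
      params = [local_fit(data) for data in per_client_data]
      return sum(params) / len(params)
  clients = [[(1, 2), (2, 4)], [(1, 2.1)], [(3, 6.3)]]  # three private "devices"
  print(federated_average(clients))  # ~2.07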


I really, really like this approach to data, and there are so many advantages beyond the ones outlined there: if you could turn that collection approach into a universal/OS-level API, you could have a setup where user data by default stays on the device; to use it, the app or app server has to request it explicitly (giving the user easy, powerful, yet granular control over who gets what data), and the data is further protected by the other measures talked about in their paper.

Some level of public education is also required I think, to make people aware of the value/threat of just giving everything up every time someone asks for it.


too bad, so sad.

If companies were trustworthy with data, then that wouldn't be an issue. Why should they get a free lunch on my dime?


And nothing of value was lost, except to marketers.


Yeah, cause they're doing so much with it now /s


I see very little evidence that personal data is useful for that. Can you name real life examples?

Most machine learning is done against anonymized datasets and works fine.


Machine learning can also de-anonymize those anonymized datasets. How does one protect against that? https://en.wikipedia.org/wiki/De-anonymization


Superficially, you're right. But a company that wants to survive has to keep its users happy, and data loss is really bad for PR. How many people dumped Yahoo after the breach? I imagine a lot.

And really the same is true for any company that has any sizeable PR problem, regardless of the cause. Examples:

- Yahoo

- Uber

- United Airlines

- Target


I think we need to do everything we can to make it so that the tools that regular programmers use aren't dangerous and/or insecure by default; otherwise, we're just playing security vulnerability whack-a-mole. Better to solve problems at the source than try to educate people how not to use a tool incorrectly. This is really hard because there are so many widely used tools and abstractions that were designed to be as powerful as possible rather than to be easy to formally verify for correctness. I feel like the whole software industry is built on a wobbly foundation, but it's hard to part with tools we know because they're useful and they work, even if they do break rather often.

A good start would be to stop using C and C++ for new projects, and generally try to eradicate undefined behavior at all levels of software. There's a lot of really great software written in C and C++, and it would be a huge undertaking to replace, say, the Linux kernel with something written in Rust or Swift or some language that hasn't been invented yet. I think the eventual benefits may greatly outweigh the costs, but it's a lot easier to sit in a local optimum where everything is comfortable and familiar than to set out on a quest to, say, formally verify that no use-after-free errors or race conditions are possible in any of the software running on a general-purpose computer with ordinary applications.


Yep, this is the only thing that can possibly work. The path of least resistance has to be secure, and in order to do something insecure, you need to know enough about what you're doing to jump through hoops to get there. Right now, we are in the opposite situation. People have to jump through hoops to do the secure thing.

You also have to meet them where they are, not get them to change their ways to suit yours. Otherwise, you're adding resistance, and you'll fail.


Well said.


Among the examples cited was the Sony entertainment breach, whose causes included a negligent security officer keeping passwords in a plain text file on his desktop. It's not clear what better coding practices could have done to improve that (or anything else at Sony entertainment, where the breached systems were mostly running commodity software).

Ref: http://www.telegraph.co.uk/technology/sony/11274727/Sony-sav...


I know at least part of this problem is driven by the IT security field itself. Try, for example, to find a pragmatic PCI auditor who can focus on real issues. They exist, I'm sure, but the de facto process is to create a huge report filled with minutiae... versus something rolled up and actionable.

It's not good, but I can see why, after a few of these experiences, proactive security gets dropped off the priority list.


Unfortunately, security and compliance (like PCI) cover similar ground but are very different in implementation.

PCI auditors work to a fixed standard and can be negatively affected if it's found that they deviated from it, so there's a strong incentive for them to be picky. Combine that with the fact that it's hard to have a standard that reflects the reality of good security practice, and you end up with, well, the current PCI process.

The problem you're describing isn't really (I'd say) one that came from the IT security industry, though; it was the card-issuing companies who set the PCI DSS standard and mandated the compliance process. Auditors are just carrying out those requirements.


It was just one example. I've seen similar issues from IT security in other situations. Like recommending that every tier of an application in AWS have its own VPC, with firewall appliances between them, manual approval chains to open up ports in a dynamically scaled app, etc.

Basically, finding pragmatic security people that can balance "perfect" with "real life" is hard.


Indeed, part of that will be the culture of the company.

I've seen quite a few companies where any breach of security is held to be the "security team's fault", so they have an incentive not to accept risks (limited upside if they accept a risk, alongside a large potential downside if a breach/incident happens as a result)

Getting past that really requires a culture where security is the responsibility of all people in the organization and there's no finger pointing in the event of a breach/incident.


Huh, but it is the security auditor's job to assess the risk level and produce a report with the level of threat, and then it is the product owner's job to take responsibility for implementing fixes based on that report. I do not understand a way of working where you have a security team that dumps a report full of BS on developers' heads and says "fix it all now or we die".


Interesting, although not particularly surprising, results there.

I'm afraid that the InfoSec community has been unsuccessfully pursuing the idea of "RoI" for security activities for a long time (I remember debating the idea 10+ years ago..)

Also the idea that increasing breaches would drive good practices seems not to have taken root that much, probably due to breach fatigue and the fact that most companies who are breached don't take any serious financial hit.

Realistically the most likely way to improve this situation is for it to feature more heavily in contracts and perhaps regulations.

Having a contractual requirement to carry out specific activities relating to code quality/security can drive them as there's a clear monetary cost of not doing so.


"I'm afraid that the InfoSec community has been unsucessfully persuing the idea of "RoI" for security activities for a long time (I remember debating the idea 10+ years ago..)"

I've always suggested they focus on confidence in control of assets and IT. Upper management are usually control freaks who like knowing that what's happening is what they want to happen. A good security program puts them in control of the business's assets. A bad security program puts a 14-year-old troll, or a competitor wanting their marketing/I.P., in control. A list of their assets, especially those that are easy to move, next to the cost of a reasonable security program, is the next move.

I'd love to see more data on attempts at doing this along with the responses. I know the scheme has already worked for people selling life insurance. I learned it from one of them. Might help on security since ROI is a dead end for most companies.


If there isn't any financial hit, then there isn't any value in added security.


I said no serious financial hit, not no hit. Serious in terms of "the stock price went down significantly", there's still costs of breach clean up etc.

Also there's a big negative externality in that a lot of the costs are borne by users of the app/system and not necessarily the developers, but due to a lack of liability for software development and security breaches that isn't taken into account by many companies.


I promise this will never get better until fines get handed out left and right for breaches of ANY personal information. Right now, no one really gets penalized for breaches unless it involves regulated data (financial, healthcare, etc...).

Why? Money, obviously.

1. employing security engineers who know what they are doing is EXPENSIVE.

2. third party pentests are expensive.

3. if there isn't an open source tool available, all of the software in the security area is SUPER expensive.

No company, especially small or medium size, is going to spend that kind of cash without a real motivator.

Even if you do EVERYTHING you should be doing, you will still have vulnerabilities. It's a losing game.


In my dreams, some combination of closed hardware and/or software (perhaps the latest Intel AMT vulnerability?) leads to the personal information of all congressmen to be leaked--financial, medical, residential, etc. They respond with a "Secure Computing Act" that requires that the all US agencies, as well as any companies they do business with, to use 100% open-source hardware and software.


The more likely outcome would be to nonsensically ban open-source implementations and instead give a monopoly to a small list of "governmentally approved security companies." These companies in turn would be required to produce massive volumes of paper report to "manage the risk and prove that their software is secure."


In healthcare, you have HIPAA, and if you mess up patient records you can lose your license and be subject to legal action, e.g. for leaking one patient record.

If a pharmaceutical company releases a drug that causes negative side effects, lawyers are happy to sue the company on your behalf for free.

But software engineering is a discipline where no license is required, and now, thanks to informal educational institutions like coding camps, not even a degree is required. You can ruin the lives of millions of people and still hop around and get another job.

Companies maximize their margin saving money on security (and other non-functional requirements), and expose customer sensitive information to significant risks with no accountability. A statement like "Sorry! we got hacked, your SSN and credit card information is now being sold by the bulk in an .onion site!" would do. We as consumers should punish those incidents more aggressively and demand a reasonable cause.

The product-driven minimum-viable-product lean-agile full-stack get-it-done culture of spaghetti code bases without security needs to die now. It's highly profitable and the preferred business model for many, yes. Is it ethical? Hell no. Stop doing it. In those cultures, security is treated as "tin-foil hat paranoia" and laughed at, and put into some "nice to have"/"maybe some day" list with the lowest priority.

A security bug can make it into any software. But if you assembled a team of coding camp guys or fresh graduates to work on a banking platform or an IoT pacemaker, you deserve to be sued for negligence.

Unfortunately because software is a relatively new activity compared to others, there is no established legal framework around it and that needs fixing.


Security is one of those arts, especially when it comes to the programming side of things, where one tiny chink in your armour is enough. There are people and tools out there that scan continually for bugs and holes, either wearing a white hat and submitting them to bug bounty programs or similar, or wearing a darker shade of hat and doing much worse.

Of course, there are things out there which can help a business minimise these risks and try to catch these potential coding horrors before they're put in front of the general public:

- Static Code Analysis (sometimes referred to as source code analysis) is sometimes a quick win here, but of course not a silver bullet. Sometimes a bug cannot be easily identified just by looking at the code for common mistakes; it takes a skilled eye or even dynamic analysis to spot it. However, static analysis can be added to your production pipeline and workflow, checking on each push for any newly added vulnerabilities (see the sketch after this list)!

- Automated vulnerability scanning/testing is also something else which can be done in-house usually, with the right tools. There is no reason why you shouldn't be running various security scanning tools against your application during testing/pre-production, such as web application scanners or even fuzzers.

- Go external, and get a 3rd party to penetration test your application if it requires that level of scrutiny. There are plenty of smart folks out there who do it day after day who can do this for you.
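
As a rough illustration of the "check on each push" point from the first bullet (assuming a Python codebase and the open-source bandit analyzer; swap in whatever tool fits your stack), the CI step boils down to "run the analyzer, fail the build if it reports anything":

  import subprocess
  import sys
  # bandit exits non-zero when it reports findings (or cannot run at all),
  # so propagating that exit code fails the pipeline on newly added issues.
  result = subprocess.run(["bandit", "-r", "src/", "-q"],
                          capture_output=True, text=True)
  if result.returncode != 0:
      print(result.stdout or result.stderr)
      sys.exit(1)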

You can also deploy things post-deployment, of course (depending on what you are coding!), so for web applications a WAF (web application firewall) is sometimes useful to stop the vast majority of automated attacks. The alerts from this will also give you a very good idea of what is out there and at what scale you are being targeted. I'm currently working on a side project [1] which tries to identify breaches once they have happened, as unfortunately they are almost inevitable. It isn't always your code which lets you down! It may be a dependency or library, or even a simple phishing email. Put simply, my project produces a canary to add to your user base, which we'll monitor continually for a number of tell-tale signs that someone else may have a copy of your data.

At that point, it's time to invoke your incident response process! Or... get someone in to run that process for you.

[1] https://breachcanary.com - If you get this far, I would absolutely love any feedback.


>There is no reason why you shouldn't be running various security scanning tools against your application during testing/pre-production, such as web application scanners

Likely one reason is cost. When each individual tool costs 10k+ with the vendors trying to throw in consulting, it adds up.


And we're surprised? Didn't we learn anything from the 90's? No amount of diligence sitting at a desk, carefully evaluating the implications of the placement/thoroughness of your user input sanitization, adjusting the settings in server configuration files, preventing your employees from using removable media and accessing outside sites... No threat model, seriously none.. ever.. will ever.. stop a young Angelina Jolie on rollerblades from gaining access to your evil corporation's supercomputer and thwarting your carefully laid, super-villain plan.


There's so much sarcasm in your post that I'm not sure I understand your point. Can you clarify?


Let me rephrase that, skipping re-indexing punch cards: if your adversary can write some ASM to get the EIP to point to a malicious instruction, they can instruct your system to do something you don't necessarily want it to do. Then our homies at Bell Labs built C as a layer of abstraction over ASM. With C, your adversary has several ways to get the EIP to his malicious instruction. Then, over the years, many brilliant, incredible minds (no sarcasm about that. None.) have built abstractions to simplify C, and then built abstractions on top of those abstractions, and then abstractions to simplify those abstractions. (I'm not even going to touch networking protocols.)

There are decades of building systems with flaws on top of systems with security flaws (which, admittedly, wasn't as much of a concern to anyone as providing the functionality to accomplish objectives, business and otherwise)... literally over half a century of this. So then these middle-management suits, operating with "LEAN 6-Sigma" misconceptions about the nature of the world, expect a kid with a degree in anthropology (not knocking the study) to run through a 12-week intensive program and be able to write code for a production system, with perhaps 2 people on their dev team of 8-16, and 3 folks in devops/IT who understand security, to be able to vet all of that code and make sure that your Gibson is bulletproof? It's unrealistic. If she wants to hack your Gibson, she's going to hack your Gibson. We're all going to attempt to stop that, and after we've failed, we will spend days filling out reports, talking to feds, and mitigating the damage.

But we're continuously building onto a flawed mechanism with another flawed mechanism. I mean, do you know any civil engineers who would say, "Oh hey this foundation is cracked, let's build something that tries to patch those cracks, and when that's broken, we'll build another level on top of that, and let's just obfuscate what's really going on underneath everything so that nobody who uses the building realizes it's unstable, and just hope it doesn't get too windy, or that there is an earthquake."?

Ipso facto: when you launch some ransomware that threatens to have the software reading a gyroscope tip over an oil tanker if you aren't paid $1,000,000, and try to blame it on some kids whose only crime was curiosity, they will find a way to subvert the carefully measured security mechanisms you have put in place, to not only clear their names and prove beyond the shadow of a doubt that it was in fact YOU who hatched this terrible plot, but also save the environment.

Sorry, I should have said that to begin with.


> I mean, do you know any civil engineers who would say, "Oh hey this foundation is cracked, let's build something that tries to patch those cracks, and when that's broken, we'll build another level on top of that, and let's just obfuscate what's really going on underneath everything so that nobody who uses the building realizes it's unstable, and just hope it doesn't get too windy, or that there is an earthquake."

No, but civil engineers say stuff like "What's the likelihood that a 9.5 earthquake will hit this area? What about a 5?" and model their designs on that. That's the point behind threat modelling: if a nation-state actor decides they want to 'hack your Gibson' that's one thing, but if you're a bank then it may be that your most likely threat is employees or contractors stealing customer data. So you put your effort into protecting against those threats as well.


Programmers probably don't go out of their way to do secure coding not only because it's not mandated to them, but because they know that unverified code is junk. If security is a feature, it has to be verified. It's not the kind of verification where you can show that a feature works on correct data, and rejects incorrect data in some anticipated ways. It requires exhaustive code review, and a lot of cunning in the test strategy.


Programming languages from Agda to Monte are coming now with the ability to encode meaningful real-world proofs, and the compilers are checking those proofs.

We could be writing a lot of verified code, if we wanted.


You radically overestimate the skills of the average programmer.


You are correct when you say that writing verifiable/verified code is a separate skill that not many programmers possess.

But crucially, it's a skills gap -- not an intelligence gap -- that stands in their way. For most properties, writing verified code doesn't require that much more intelligence than writing normal code. Different skills, but in most cases not much more intelligence.

But I don't think a skills gap is the most pressing barrier. There is an intrinsic difference in difficulty between stating conjectures and proving theorems. There's no silver bullet here, including either education or raw intelligence. The latter is, intrinsically, more difficult and more time consuming.


I'm reasonably sure that for all things where a proof would be helpful, that is, properties where the programmer has a nontrivial chance of getting the implementation wrong and not catching the mistake with a test, a correctness proof is a lot harder than writing a correct implementation. I've dabbled a bit with Coq and Isabelle and certainly found this to be the case.

Stronger type systems help a lot with common security problems. Memory safety and a system to express taint of inputs eliminate a huge class of potential problems. They are still very, very far from correctness proofs that could catch actual logic bugs.


In Monte, this object is proven by the compiler/interpreter to be transitively immutable:

  object popsicle as DeepFrozen:
    to getFlavor() :Str:
      # Obviously immutable.
      return "lime"

    to getObservedColor(eye):
      # The eye might not be immutable, but that's still okay.
      # All that's required is that this object not secretly stash the eye.
      return eye.observeCMYK(38, 0, 78, 1)

You radically underestimate the intelligence of your fellow humans.


In C++ the same can be achieved by using "const". I don't think that such properties are very useful in preventing security problems.


> One of the central security protocols protecting the web—OpenSSL

OpenSSL is not a protocol. It is an implementation of TLS. Other implementations were immune to Heartbleed.


Cultural changes among engineering teams would be helpful, but the bottom line as far as business apps are concerned is that, to prioritize security, the financial liability of letting hackers access customer data needs to meet the financial incentive of shipping working code. Until those curves intersect the status quo remains.


Government could impose fines for not meeting certain security standards and then do random audits; or, in a more free-market way, this sort of alignment ought to eventually come about via:

1. Class action lawsuits to sue companies who have data leaks.

2. Companies take out insurance for being sued for data leaks.

3. Insurance companies impose security requirements in order to provide coverage.


		The costs of dealing with breaches, no matter how demoralizing, never seem to justify 
		the extra time and money that good security requires. Additionally, although a security 
		flaw is sometimes traceable to a single line of code—as in Apple’s famous “curly braces” 
		bug—breaches are often a simultaneous failure on several levels of the software stack 
		and its implementation. So each company may be able to shift blame onto other actors, 
		and even the user.
The paragraph above provides another perspective on why software security vulnerabilities happen more frequently than hardware ones. [1]

[1]: https://news.ycombinator.com/item?id=14238391


Self plug =).

Check us out at https://oneupsecurity.com if you're interested in secure software development.

Always great to chat with businesses who are passionate about security.


Replace "security" by "safety", and I suspect you have the same problems.

I wonder how that will work out with e.g. self-driving cars.


I would be surprised if safety was not very high on the list of priorities. Getting hacked is always a nebulous proposition, but I think it is quite easy to describe the consequences of somebody being hurt or killed by your product.


"We need to add this feature to the car immediately!"

"We can't do that because it would mean we would have to re-test the codebase, meaning we'd have to test-drive the car for hundreds of thousands of miles."

"Can't we take a shortcut? I believe the change is quite innocent. And the competitor already has this feature."

"Ok, sounds reasonable. Make the change."


You are assuming that car manufacturers operate like web companies.

Tip: They do not.


"Sir our car won't pass smog tests!"

"Oh no! Now we will need to do x and y and z and it will cost x dollars"

"Well we could adjust the software.."

"Sounds great!"

Yep, car industry...


Easily caught and heavily fined. (Not to mention loss of PR.)


Car companies are, somewhat terrifyingly, becoming web companies (in part).


Safety investment is a by-product of how much money you're likely to get sued for. (or in the case of airplanes, more a side-effect of regulations and preventing customers' fears from affecting sales)


What's the incentive for companies to care? They get hacked and leak everyone's data all over the place and we all just kinda shrug and say "that sucks". Sometimes bigger companies get in trouble for some millions, an insignificant amount to them.

Look at what happened last week with Netflix. It didn't involve user data, but they got hacked and their stuff leaked and then what? Everyone just shrugged. No big deal. I mean probably some people are getting yelled at internally, but otherwise the situation is clear: we pretend to care about this, but we don't care about it.

Imagine being a security advocate in an organization in this environment. You get to convince business people to spend money so that something doesn't happen, which even if it does happen, will result in embarrassing headlines for a day. Not exactly a convincing case!


There are some obvious contributing factors:

0) The majority of programmers at this point in history are novices. Src: Joe Armstrong.

1) There is very little formalized training except in some enterprises. Many outsourcers invest very little in training and expect staff to learn on-the-job.

2) Large enterprises with immense codebases with many committers easily become Tragedies of the Commons without active code reviews and high standards.

3) There is very little standardization (convention over configuration): many languages, many non-orthogonal coding styles/language features.

4) Security seems like a non-value add activity... until there's a major problem. (Non-proactive development/CMM.)

5) Offensive and defensive sec require a different, acquired skill-set and engineering mindset from simply implementing features and fixing bugs.


You can only have one of two things in software development: either your software is "user friendly" or it is "secure". Example: PGP


PGP/GPG is the FFMPEG of encryption software. It could be done way better.


ffmpeg has had a lot of hours of fuzzing and improvements thrown at it. I believe it's come out of the avconv fork looking better - after all, it survived.


It's powerful software, but goddamn that command-line interface makes ImageMagick look downright simplistic.

There are GUI tools for setting these options, but they're absolutely atrocious. You may as well be trying to program a guided missile.


FFmpeg is mostly a set of libraries rather than just the command-line application. There are really a huge number of applications using it internally.


ITT: talking about poor incentives, doing basic threat modeling, etc.

The answer has always been professional licensing.

And no, just because there's always a new flavor of the month language or framework, doesn't mean the fundamentals change all that often.

* Sanitize inputs
* Secure data at rest and in transit
* Never store secrets in plaintext (see the sketch after this list)
* Set up IAM properly for your organization
* Use 2FA whenever possible
* Principle of least access
* Update frequently
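
On the "no plaintext secrets" item, the habit is mostly this (a sketch; the variable name is made up):

  import os
  # Read secrets from the environment (or a secret manager) at runtime
  # instead of committing them to the repo or a config file.
  DB_PASSWORD = os.environ.get("APP_DB_PASSWORD")
  if DB_PASSWORD is None:
      raise RuntimeError("APP_DB_PASSWORD is not set; refusing to start")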

Why, again, can't we conditionally license professionals on knowing the basics, and threaten disbarment for knowingly neglecting security?


> The answer has always been professional licensing.

That's only the answer if you're asking people who have no real experience in security. Those who do are more likely to tell you that will only make the problem worse.

Professional licensing for writing or operating software will never work, and if it does, it will be extremely harmful.



