Hacker News
Our Dumb Security Questionnaire (hangar.tech)
149 points by zdw on Jan 15, 2021 | 88 comments



In an ideal world, these are mostly reasonable questions, even if a bit specific to the OP's requirements. But unless you are purchasing SaaS software on a large, enterprise-style license (i.e., six figures or more), it probably won't be cost-effective for most vendors to gather and collate this documentation and answer these questions, just as it's not worth it to ink a custom contract on a smaller deal, where legal fees alone might eat up the entire profit margin or even push it negative. Engineering time is expensive, and some of these questions are technical or "special" in nature.

There's a reason standardized third-party audits like SOC-2 and ISO 27001 exist: to reduce the time required to document your veracity and security as a vendor to a potential customer. Since even large customers rely on the statements of a security attestation (independently audited, unlike these questions) to make purchases, why should a customer ask that extra time be taken away from other responsibilities, like making better products or providing customer support?

I freely admit that I'm a bit biased against ad hoc security questions, even though I used to do it myself when working on large security teams. ;) My security-focused SSH key management company, https://userify.com, went through the time and expense to achieve AICPA SOC-2 certification from an independent third-party auditor to reduce the time and costs involved in responding to smaller RFP's, and to provide fully documented, standardized, and legally binding proof of our security bonafides. We still try our best to intelligently respond to any and all questions, especially about security, from any customers at all, even free-tier customers and hobbyists, but it's harder to do that when presented with a big list of questions that are mostly answered in our SOC-2 audit already.


> it probably won't be cost-effective for most vendors to gather and collate this documentation, as well as answering these questions, just like it's not worth it to ink a custom contract on a smaller deal, where legal fees alone might eat up the entire profit margin or even push it negative.

Indeed. I get these security "wall of questions" for my $10/lifetime SaaS product (it's a dumb little puzzle maker). I can't fathom why anyone thinks a vendor would spend the time answering their questions for that kind of money.

In their defense, it's probably a standard part of the procurement process and they don't even look at the app before sending the questionnaire.


At the risk of drawing fire from everyone, I don't really think ISO 27001 is a security certification. We get audited annually against the 27k Annex A (basically ~95% of the full standard) plus UK Gambling Commission extras.

The audit focuses far more on technical aspects of business continuity than actual security. There's certainly plenty of overlap, but other than the parts about access controls and "who watches the watchmen" aspect, ISO27k is almost entirely about your ability to recover from even the most devastating disaster. The pragmatic security parts have a bolted-on feeling to ensure the recovery path remains largely uncorrupted.


Annex A is a subset of 27001, but 27001 is focused on security.

https://www.iso.org/isoiec-27001-information-security.html

(Even so, your point is well taken. The overall 27000 series is more business continuity (DR etc) focused rather than purely security focused.)


and to be fair, Information Security is generally accepted to cover Confidentiality, Integrity, and Availability (see: CIA Triad)... so DR/BC are definitely within scope.


SOC2 and ISO27001 are just the conversation starters. Companies that take security seriously will send you a questionnaire like the OP's. It's commonly referred to as a Vendor Security Assessment (VSA).

Source: I worked on compliance management software for a while and VSAs were a major pain point for our customers (small to mid market companies trying to sell to enterprises).


I did the analysis and looked at the costs of a security questionnaire. They run anywhere from $250 to $4,500 each. The main problem is that there's been a race to longer and longer questionnaires: "Oh, your questionnaire has 1000 questions? I'm going to make mine 1100!" I like the author's intent here. Maybe 10 questions is a little short, but let's end the gaming of this process and keep it straightforward. As one commenter noted, some of this information is confidential and should be obscured, not sent around via email attachments to people who may or may not enter into a contract at some point.


Agreed!

IMO, ask for the company's SOC-2 once you've demonstrated that you are a legitimate prospective customer with a budget and not a competitor or social engineer.

That should address most concerns, or at least make the questions more salient, and it has the added benefit of being vetted by a third party.

If they don't have one, then proceed with the dumb questions, but as you quite correctly point out, no one should be surprised if some questions are rebuffed.


We are doing some work in this domain to make security questionnaires less of a pain, at https://securitypalhq.com/. Would love to get your input on ways we could help you out.


If every customer asked the same questions, such as these, it would be worthwhile compiling the answers once for everybody and keeping them up to date.


Exactly, that set of standard questions and responses (ie controls) is exactly what a VSA is.

However, SOC2 takes it a step further and requires a formalized audit from an AICPA-certified security auditing firm[0].

Therefore, a security questionnaire should be for follow-up items that the customer feels were not adequately addressed in one or more of the vendor's compliance attestations.

Otherwise, the customer is asking the vendor to step through redundant check-the-box busywork that actually requires a level of skill a junior engineer can't provide. To wax hyperbolic, it's like an engineering DoS attack, which serves neither the customer nor the vendor well (unless the goal is to slow the vendor down from making new and better products).

0. https://www.aicpa.org/interestareas/frc/assuranceadvisoryser...


There is a value to just human responses, answering someone's questions can help you look inwards to your service/product as well.

For example, as far as I can tell, "HIPAA" is spelled "HIPPA" on your website – this indicates a lack of familiarity with the Act (and potentially professionalism) to me as a first-time user, regardless of the multiple kinds of certifications you might have.

Is it possible you could have caught that when replying to a simple questionnaire like the one in the link, but focused on HIPAA?


> There is a value to just human responses, answering someone's questions can help you look inwards to your service/product as well.

We do respond to every single email, and hopefully fairly intelligently most of the time ;), even if the customer is a hobbyist or free tier user, especially with regards to security questions. (Although internal stats are slightly meaningless, and survey responses are relatively uncommon for the volume of emails we receive, we do have >99% customer support satisfaction based on our internal tracking.)

> Is it possible you could have caught that when replying to a simple set questionnaire like the one in the link but focused on HIPAA?

Ha, looks like we misspelled it in one place and spelled it correctly in another! Are you asking if we'd have caught a misspelling or other oversight on our website on the basis of an emailed questionnaire? Good question but, to be honest, doubtful (thank you for bringing it to my attention, though -- fixed!)

We've had wall-of-questions requests like that before, and we either respond with our other certs/attestations, or negotiate a BAA, once we get down to more specific questions. For HIPAA, we're providing on-premise security software, and not working with PHI in any way (we're deployed in some hospital systems in the US), so generally customers are just looking to understand our general security stance and how we can help them meet the Security Rule with login accounts.


"But unfortunately, they’re also kinda the best we can do. Huge companies might be able to afford real human-powered security audits and convince vendors to allow them, but small startups like ours don’t have a better option than relying on DSQs."

Not what I expected based on this title. I liked this.

I'm not sure how many people know about the Higher Education Community Vendor Assessment Toolkit, but if you're into Dumb Security Questionnaires, this is worth checking out. "The HECVAT is a questionnaire framework specifically designed for higher education to measure vendor risk." I know it says Higher Ed, but it works just about anywhere.

https://library.educause.edu/resources/2020/4/higher-educati...


I worked for a HIPAA covered healthcare company that routinely passed audits. They also stored all the production database and server passwords in a plain text file accessible to half the company and no audit trail of any kind on logins.

Clearly no auditor ever asked or looked.


where would one hire an auditor like this? asking for a friend


There are two kinds of auditors:

- One who will drive you crazy demanding to see things that don’t matter, argue with you over non-issues, think they are way smarter than they are, and just produce a lot of irrelevant paperwork

- One who is barely technically literate but got their CISSP certificate, definitely can’t code, and has never written an exploit in their life. They just want to see a familiar tool name and some checklists.

There are zero kinds of compliance auditors who will ever find an actual vulnerability, and I’d be surprised if they can even explain common basic attacks like SSRF.

The reason is simple: that work is so boring that if you had any skills at all, you'd be doing more interesting security work.


I know you're being facetious, but any SOC2 audit would overlook this because they're just going down a checklist making sure specific controls are in place and not actually probing for the various possibilities to bypass these controls.


> any SOC2 audit would overlook this because they're just going down a checklist

From the GP, "They also stored (1) all the production database and server passwords in a plain text file accessible to half the company and (2) no audit trail of any kind on logins."

Any competent SOC-2 auditor would not overlook this. Large swaths of criteria are specifically geared to uncover both of these (unencrypted credentials and audit trails).

SOC-2 audit firms have a strongly vested interest in hiring competent auditors, because if a SOC-2 auditor did not ask questions covering these areas, then failure to complete the audit would be actionable malpractice and the auditor would be liable, unless the company lied (committed demonstrable fraud) in its written responses.


> Any competent SOC-2 auditor would not overlook this

I think this statement is tautological, in a way


I see your point: if they were competent, they wouldn't overlook it. My point was more that because SOC-2 is more pedantic (than, say, HIPAA), it's harder for a SOC-2 auditor to mess this up.


No, I'm saying their audit process focuses more on box-checking than on actually seeing "you left the key under the mat".


The "boxes" to be checked are actually asking open-ended questions about that, and asking you to provide copious written documentation backing up what you are claiming. This process takes weeks or months and is quite expensive.


The relevant boxes here are "Are all accesses to production systems logged in an indelible manner?" and "Is the principle of least privilege followed when accessing production systems?"

These questions aren't perfect, since they don't actually prevent security issues and merely document them extensively, but answering no to them will fail the audit.


Are you talking about HIPAA or SOC2?


As a counter example, a competent fire inspector might overlook it ;)


Given the current political climate, such a SOC-2 auditor should be outed by name.


> Given the current political climate,

Not sure what politics have to do with it.

> such a SOC-2 auditor should be outed by name.

That would be defamatory, and potentially extremely unfair, especially if someone lied or was just incorrect, in an anonymous internet forum.

If someone experienced a loss because they relied on untrue statements from an auditor, they would have grounds to bring a case, and the auditor would have a chance at due process to respond to that case.

It probably seems slow and unwieldy when we all want justice now, but this is how we can be assured that justice is, in fact, done, and not injustice.


A HIPAA audit would certainly miss this, but I would be very surprised if a SOC2 auditor missed this (or that they would remain in business long with the damage that would do to their reputation).


> If you don’t use a cloud provider: why not?

This makes it sound like a negative, but a solid argument can be made that controlling your application all the way down to the metal is a more, not less, secure way to go.

We can all make opinionated assumptions about whether a cloud provider is "more" or "less" secure than doing it yourself (whatever "it" is, exactly), but the very fact that it's so debatable proves it's not really a defining point that belongs on this list of questions. (It might be to an individual team, especially if you're trying to bring things in-house in the first place, but of course this set of questions is really aimed at SaaS.)


I would suggest that for the VAST majority of organizations, a cloud provider[1] will be an unequivocally more secure infrastructure than running it themselves 'down to the metal'. The big clouds have security research and security ops teams larger than many companies. I don't think anyone who has seriously spent time looking at this would debate it at all. Nb. this has no bearing on whether it's the right business decision, mind you. That absolutely could end with you being better off on your own metal! But you won't be more secure.

Source: spent a decade doing security evaluations of cloud providers for the DoD/IC, who also had this misconception going into things.

[1] to be fair, I'm only talking about the large cloud providers (AWS, Azure, GCP, etc.). There are tens of thousands of SaaS/smaller cloud providers that are absolutely debatable around security posture.


> The big clouds have security research and security ops teams larger than many companies.

That’s like claiming that a company is on a safe legal basis because of the army of lawyers it employs. It might be true, but the inverse correlation also holds.


OK, I should have added in qualifiers 'highly competent and respected'. More bodies doesn't necessarily equal better product, for sure.

They can devote teams of researchers and engineers to topics like CPU side channel attacks, entropy draining in virtualized environments, and other stuff that your average firewall/AV admin isn't even aware are attack vectors.


> OK, I should have added in qualifiers 'highly competent and respected'. More bodies doesn't necessarily equal better product, for sure.

Agreed.

> They can devote teams of researchers and engineers to topics like CPU side channel attacks, entropy draining in virtualized environments, and other stuff that your average firewall/AV admin isn't even aware are attack vectors.

True. Of course, side channel attacks aren't even possible threat vectors on bare-metal/non-virtualized/single-tenancy environments. From that perspective, virtualization can be seen as having two primary benefits: potentially higher availability, and the cost savings of combining multiple functions onto a single piece of hardware. But if cost were no concern, all else being equal, separate bare-metal servers would have better isolation.

One thing that concerns me is that KPIs at some large cloud providers seem to be focused on creating new features, not fixing bugs, especially security bugs, so a developer has no incentive to do the drudgery of curing security flaws. It's hard and detail-oriented work, which makes me even more concerned when I see a trillion-dollar company that won't pay to find out about bugs in its own products.

Lots of huge companies with tons of money have big security teams. This does not always give the best results, as we've seen.

Of course, having great arguments (like this one!) helps in raising awareness and comprehension among the less technical stakeholders. Iron sharpens iron!


If you don't think side channel attacks are a threat vector on your single-tenancy system... (you can leverage these vectors to gain access to privileged data/etc. through unprivileged processes). They're differently threatening on your own metal, but as long as your systems take user input somewhere, they're a threat vector.

As to KPIs: there are significant cultural differences between providers. That was extremely evident while evaluating them. The differences in approach, thought, consideration and priorities between even the big 3 was substantial.

I'm curious as to why AWS doesn't run a bug bounty, although I could probably guess (lots of their sec teams have background in the intelligence community, etc.).

I'd also like to reiterate that this is not 'cloud', but specific cloud providers. There were quite a few I looked at that were...unaware that security might be a thing (full push-to-prod creds on every developer's laptop, working from cafes around the world, etc.).


> side channel attacks.. gain access to privileged data/etc. through unprivileged processes

True, but tbh that's not the first thing I'd reach for. If someone has already pwned your single function or a control plane, then it's usually game over anyway, and escalation sploits are usually a lot easier than a side channel.

Whether AWS has IC or FedRAMP background seems kind of irrelevant to a simple bug bounty program, especially for a trillion dollar company, when I was able to find an escalation vuln in about three minutes in an unfamiliar codebase in a language that wasn't my primary. They should have at least acknowledged and said thanks for the heads-up.


Big providers' control planes are not going to be ruined by a privilege escalation. There are many, many layers to their defense systems.

The intelligence community and its members tend not to do anything that calls attention to themselves, their actions, their capabilities, etc. Overall security can be enhanced by keeping things quiet; at least, that is a common perspective from that part of the world.


> But you won't be more secure.

I respectfully disagree. A cloud has many more vectors for attack, and even the largest clouds have blind spots. For example, check out AWS's (lack of) response to a vuln report that I filed with their security team: https://github.com/aws/aws-codedeploy-agent/issues/30

Did you know that AWS does NOT pay for security vulns and does not operate a bug bounty program? It seems a common myth that AWS is more secure than any particular company. Your security inside or outside of a cloud has to do more with your internal security posture than what AWS brings to the table (and leaves behind).

The public cloud is not magic security pixie dust. It's a virtualization platform with extra services attached. You are still 100% responsible for securing your servers (meaning everything down to the network routing layer, and some things beyond that, too). Obligatory reference to the AWS Shared Responsibility Model[0].

This is not a bad thing, but it's still a thing. Anyone who migrates to the cloud and magically expects their security posture to improve one iota is simply wrong unless they take active steps to integrate as tightly as they can into that cloud's security tools. As someone who has done a ton of AWS security audits, I can tell you that people take multiple approaches on that: either they don't integrate into it fully, or they submit to vendor lock-in.

This is perhaps too deep of an argument, since clearly intelligent and skilled people can reasonably disagree. FWIW, lapsed AWS SA, recovering Fortune 50 cloud security architect, and please don't take my argument about the cloud to indicate that I'm anti-cloud: my security-focused SSH key management company, https://userify.com, is strongly cloud focused: AWS Advanced Tier, GCP Partner, SOC-2, PCI, HIPAA. I'm very much pro-cloud, while recognizing that it enlarges your threat model but brings other security advantages.

Where you're hosted doesn't matter as much as whether you're paranoid and clueful, and perhaps have former blackhats on your team ;) So my point is simply that the cloud question isn't that relevant, and the thrust of the other questions should be used to determine security competency instead of where they're running their operations.

0. https://aws.amazon.com/compliance/shared-responsibility-mode...


Former blackhats? Why not current blackhats. 0:)


I've worked at some of the big companies, including multiple cloud providers. I wouldn't trust their shit ever. Perhaps it seems more secure than self built stuff but it really isn't. The same idiots you think won't build their own infrastructure correctly are also building the pieces of the cloud you so cherish.

I'll agree that it is more likely the cloud providers have more logging and monitoring in place though, and that can make a big difference. On the other hand, cloud providers get far more attacks.

I disagree that hosting and doing everything yourself is not more secure. It absolutely is, especially if you use your own non-standard setup and work hard to prevent easy analysis of your setup.

Example: If you host your own server and only serve static web files with no interactivity, and only expose port 443, it is possible to pretty much guarantee the server cannot be hacked and at worst could be DDoSed. Obviously hardening measures would be necessary too, and you'd have to take care not to expose anything besides pure static file serving, but that can be done with relative ease.

If you use a cloud provider, the attack surface is HUGE. All of them have extensive APIs, with so many possible places where there could be a vuln discovered.

I think it is absolutely incorrect that using a public cloud service is more secure; there are simply too many bells, whistles, gears, bolts, shims, and everything else.
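To make the static-only idea concrete, here's a minimal sketch of the kind of hardened setup being described, using nginx purely for illustration (the domain, certificate paths, and web root are all placeholders, and real deployments would add further hardening such as HSTS and a locked-down OS):

```nginx
# Minimal static-only HTTPS server: no dynamic handlers, no proxying,
# nothing listening besides TLS on port 443.
server {
    listen 443 ssl;
    server_name example.com;                       # placeholder domain

    ssl_certificate     /etc/ssl/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    root /srv/www/static;                          # placeholder web root

    # Only allow read-only methods; everything else gets 405.
    if ($request_method !~ ^(GET|HEAD)$) {
        return 405;
    }

    # Serve files verbatim; never execute or proxy anything.
    location / {
        try_files $uri $uri/ =404;
        autoindex off;
    }
}
```

With something like this, the exposed attack surface is essentially the TLS stack plus nginx's static file handler, which is the point being made: small, auditable, and with no sprawling API behind it.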


Example: if you restrict things to not being useful for most organizations, then you can do it safer? Except: which is less likely to be owned and impact your business, S3 or some janky home-baked web server?

Are you staying up on every single dependency and all the security vulnerabilities present in that entire stack of gear? (You aren't hooking a web server directly into a provider backbone... so you've got routers, switches, an OS, firmware, the web server software, a building to hold it all in, etc., etc.)

Even something so absolutely simple as 'serve static web pages' has a huge cross cutting set of things keeping it functioning.

Do you have guards preventing someone from just walking out with the drive on that single machine? Is the data encrypted? Is it backed up? Is the backup offsite? Is the backup offsite encrypted? Who's got the keys? Are the keys backed up? Who's got access to the key backups?

Etc. etc. etc.


Having a "home baked server" in no way means it is "janky". There are some pretty clear and correct ways to set up a secure server these days.

Using an OS such as RHEL gives some guarantees that auditing for vulns is already being done for you. You don't have to take on the responsibility of analyzing everything yourself. CentOS historically gives some of the same benefits.

What do you think Amazon uses internally for Amazon.com? They use a variant of RHEL (albeit a really old, "jankified" version).

I'm not suggesting you run the server over home networking (which is against TOS, by the way). You can own your own server and place it in a good colo facility that is already handling the power, routing, etc.

I previously wrote the management software for a major company that handles configuration and setup of all their data centers. I know a thing or two about what is necessary.

Redundant power and networking with monitoring of both is essential, and also common and easily available in a decent colo. You can of course use a bottom dollar colo that just throws some shit together, but you will, as you say, risk there being vulns in the stack outside your server.

That said, it isn't hard either to setup your own data center from nothing. You need multi-phase power, redundancy, and a number of components that are somewhat costly for an individual, but there are reasonably affordable options these days. You can get 220 in the home if you pay for it. I'd recommend having a commercial location pre-setup with it though of course. Preferably you'd want your data center near a major internet hub and/or with connections to multiple major internet providers...

And yes. I have guards from someone walking away with my equipment. I have a gun and I'm happy to shoot any intruder in the face with it. All my drives are encrypted as well and if there was a breakin they will auto-shutdown...

I own enterprise level tape backup equipment and hundreds of tapes. So yes: offsite backup that is encrypted. The keys are backed up and encrypted themselves.

You can go down the rabbit hole as far as you like. My setup is still more secure than Amazon and the like.


These are often knocked back with "We charge $4,000 to answer Security Questionnaires, or you can download our pre-made pack [that answers none of these questions] here."


Author here: weird, I've never experienced that (10+ years doing this silly exercise, on both sides).

Various vendors have offered their own compliance frameworks - PCI reports, SOC2, whatever -- and I'm happy to read those instead; they tend to have (most of) what I'm looking for. I've never been charged for the pleasure, though. Guess I have something to look forward to!


What is a "pre-made pack"?

An overview of your security stance which is fundamentally fluff?

Thanks.


> What is a "pre-made pack"

"I took the last DSQ I got and answered it pretty fully so here's a copy, and I'm not going to waste time answering the weird extra questions your CISO decided to add into the mix"


Correct. Some do a good enough job, but I find most lacking.


Which vendors charge for their DSQ?


HIPAA isn’t a certification just FYI. If someone is selling you a HIPAA certification they are scamming you.

HIPAA “compliant” simply means you’ve hired a 3rd party to audit your business. Essentially they ensure you do what you say you do in your policies and procedures.

There are certifications that can effectively eliminate the need for these types of questionnaires: HITRUST, as an example, in healthcare, and SOC 2 for broader SaaS providers.


> HIPAA isn’t a certification just FYI.

There are HIPAA certifications, but they aren't legally required or beneficial. They may have some marketing value.

> HIPAA “compliant” simply means you’ve hired a 3rd party to audit your business

No, it means you are following HIPAA requirements; 3rd party audit is not a HIPAA requirement any more than certification is, though it may be useful in identifying compliance issues.


Semantics.

I’m saying a HIPAA certification from HHS or any official governing body doesn’t exist. A 3rd party auditor might give you a “certification” but it’s meaningless to call yourself “HIPAA certified”. You wouldn’t say that. You’d say “we’ve conducted a 3rd party independent audit”.

You can go ahead and claim HIPAA compliance all you want, but most health systems are going to ask you if you’ve conducted an audit or if you have any certifications - SOC 2, HITRUST, etc.

So in reality you’re not HIPAA compliant until you can prove it. That’s my point.


“HIPAA compliant” means you aren't breaking the law, which is quite important independent of any audit or certification because there are penalties for breaking the law, and they apply to all covered entities.

“HIPAA certified” and the identity of the certifying authority are also potentially important; certification may have some marketing value but, more importantly (as you note, though you bizarrely use this to argue that “audit” matters more than “certification”), various people you might want to do business with may require certification by an acceptable certification authority. (Possibly different ones for different aspects of HIPAA.) OTOH, there are covered entities that don't have to worry about this at all, because of who they are and who they plan to do business with.

A 3rd party audit for HIPAA compliance probably isn't important to anyone external to you directly (and is not an element of compliance), but may be useful for identifying compliance issues, or may be a component of certification.


Because there's no certifying or accrediting body for HIPAA compliance (complying with the law is not a certification process), what you get is a letter from an unaccredited firm, not a certification. As such, sophisticated customers don't put that much stock in it.


This also looks like a good way to evaluate a potential employer.


A part of my work tasks consists of reviewing answers to security questionnaires.

These are reasonable questions, and I see quite a lot of value if they are filled out extensively and in good faith. Most of the answers on the usual security questionnaires can be deduced from the responses to this DSQ.

I really have a problem with Q6:

> Have you had any security breaches in the last two years?

> If yes: please explain the breach, and provide copies of any postmortem/root cause analysis/after-action reports.

Almost nobody will answer this truthfully. I see a couple of options:

1. There was a breach and it was public; then why are you asking? Do your research!

2. There was a breach and it was not made public. The company will likely not admit it to you.

3. There was a breach, but it was a) not relevant to your case, b) internal, c) the data lost was not customer data, d) we forgot there was one, etc.

While lying in case 2 might make the vendor liable (IANAL), they might be able to argue that case 3 actually applied.


This is great. I like the open-ended phrasing; no thoughtless "on a scale of 1 to 10" responses are possible.


Are there any legal issues if one were to be untruthful/lie on security questionnaires?


What I'm familiar with is writing into any business contract signed with that vendor that all of the [insert scoping modifier here] representations the vendor has made are truthful, or else the customer may cancel the contract and seek [consequences of contract violation]. Just make sure the scope includes that security questionnaire.


In my experience people don't outright lie about these things, they sell a piggy bank as fort knox.

E.g. a run of an automated vulnerability scanner that may not even be entirely appropriate for the application being scanned could be passed off as a pen test, or perhaps as OWASP mitigation.

TOTP software authenticator on the same machine as the password safe? Totally 2FA.

Security training for employees? Some mind-numbing videos of a consultant reading the OWASP list from 2011 over some powerpoint slides and mentioning some buzzwords, employees self-certify having watched these videos.


If you're hacked and leak your customer's data, and it turns out that you materially lied on your customer's security questionnaire/due diligence, you could be sued by your customer for damages, and your insurance company could refuse to defend you.


If nothing else, I imagine if the vendor were to get hacked, the client would have obvious grounds for a lawsuit. Obviously you'd rather it not get to that point, but, still.


Check out Delta Airlines vs 24/7.ai - lots of claims relating to 24/7.ai security representations.


IANAL but wouldn't that be straightforward fraud?


I asked because I've heard more than once that a company either stretched the truth or outright lied on these questionnaires.

So stretching the truth could be:

Do you adhere to NIST?

The truth could be: "well, not exactly, but that's on our roadmap; we do some things that are close enough." That would get a 'YES' check.

Or something like end to end encryption. The answer could be a 'YES' because a company uses front-end TLS and pretends to not completely understand the ask.

In this case it is mostly the business either forcing security to BS or another group (Sales?) filling out the response untruthfully, because they're losing revenue if they're honest.


This is indeed much shorter than what I’ve seen around.

As a developer/team lead that might need to answer these questions to a satisfactory degree, what are resources that would actually help implementing this kind of security infrastructure?


(Author here.) Well, you could always use this questionnaire as a starting point itself: ask yourself these questions, and if you're not happy with the answers, do something about it.

Another reasonable security practices starting point would be another article by Latacora: https://latacora.micro.blog/2020/03/12/the-soc-starting.html

It's semi-oriented towards SOC2, but every item on that list is practical, doable even for small teams, and has real solid security impact.


Ooh, I’ve done that, and I’m doing it with many such questionnaires I receive :) sometimes it makes sense and we do something about it, but many times you just don’t know what you don’t know, or you don’t know where to start, and it’s not a topic that comes up often on the various public fora.

I was looking for books, talks, guides - anything. I just read the latacora soc2 guide and it’s at least a starting point.


Some industries have already matured and standardized this concept. The HECVAT is one example.

https://library.educause.edu/resources/2020/4/higher-educati...

The benefit of this is that if you (as a vendor) wish to sell in this industry, you probably only need to complete this once, rather than one for each potential customer.


One reason to use SaaS: if there's a hack you can blame them rather than taking the heat yourself. It's outsourcing of culpability.


To a point. I’ve been asked about the criteria used to evaluate third party suppliers, for example. Or the physical security measures at outsourced data centres.


I really loved this list and think it's a great place to start. Add to this a network diagram; arch diagram; tenancy; WAF + FIM + SIM; bring your own keys; and you can start bringing the enterprises into your deals. Not that you need all of those, but the add-ons usually build on the struts mentioned on that list.


Basic information about these things is fine to pass on, but this kind of information should stay a bit private. Even though a lot of people here (and anywhere on the Internet) think obscurity is bad, it is an important tool in any security stack. "Security by obscurity is bad" means there is no security other than obscurity. Even NIST recommends obscurity as long as the rest of your security stack is good.


These are pretty good, but one problem for 'general' adoption of something like this:

Most customer orgs aren't qualified to judge the vendors answers because they aren't good at security either.

Are you confident you know what a good answer to 'how do you push code to production' looks like?


I tried to get at that in the "who this is for" section:

> [This assumes that] you have a security person (or team) who can evaluate the answers and is part of the decision-making process. If nobody’s going to read the answers, don’t waste your vendor’s time.

So yeah, totally agree. If you can't adequately evaluate the answers, it's not worth anyone's time to ask 'em.


I missed that line, sorry!


Can do it in 4 questions:

1. What do you need to protect?

2. What is the complete list of tech stack technologies it depends on?

3. What are you doing to protect that important thing today?

4. Who do you need to protect it from?

If you can't answer these, the answers to the rest of the questions really don't matter.


What would you consider a disqualifier for question 2?


- "I don't know, nobody understands that legacy system from 10 years ago"

- Very outdated and/or vulnerable software

- Backups of customer data are stored unencrypted on dropbox

- C/C++ (ok this one is a joke)


The way it works is if you don't have the answer, you collect what you can know and move with that. Also C/C++ means you need a linter, a code review process, VAs, and potential exposure to FOSS libraries.

This method is mainly for building things, but if they are legacy, the information should be available as well. The top level answer of "nobody understands this," is pretty much the most important thing you need to know from a security perspective.

It is funny, but it's funny because it's true and important.


We have a document we ask our vendors to fill out, then our security, risk, and project teams meet with the vendor and their respective teams to go through it in detail, and then we create a risk document for approval, or workarounds where the vendor doesn't meet a requirement. Usually it's just like, oh... you don't do this basic security... ok bye.


pretty straightforward, but does it scale? and can the answers be captured objectively in order to inform consistent decision making?

i.e. this is fine for a one shot review, but would be tough to operationalize...

Plus, keep in mind diagrams can be critical. If you're going to be sharing sensitive data with this vendor you're going to need to know and have documented how that data will flow, where it will persist, etc. Can be a lot easier to capture in a diagram than narrative format.

There is a variety of other FOSS-type stuff out there that is useful for anyone who needs more:

Vendor Security Alliance -- https://www.vendorsecurityalliance.org/downloadQuestionaire (disclaimer: I'm an advisor @VSA)

CSA CAIQ -- https://cloudsecurityalliance.org/artifacts/consensus-assess...

SIG -- NM, looks like this is closed/member-only now, but if you can track it down (SIG Lite/Full, Standard Information Gathering) it's ridiculously comprehensive.

Google VSAQ -- https://github.com/google/vsaq

As someone who deals with both sides (asking the questions to vendors, and answering them for prospect customers) I can say they mostly all suck pretty hard... and that's probably why there's a whole ecosystem of vendors in this space nowadays.


nice, thanks for sharing


As someone who works in security and deals with these questionnaires a lot, I don’t have any idea how they caught on. It is my opinion that they mean nothing and are worth nothing.

Asking how a company’s development practices protect against the OWASP Top 10 doesn’t make a lot of sense. The top 10 isn’t a static list, nor is it backed by any real data, and many of the entries are fairly generic concepts that aren’t something that can be addressed holistically with any particular development practice.

Everybody lies on these. They’ll just say a best practice, or they’ll quickly run a scanner in response to the question, or they give you the policy that no one actually follows, or they’ll give a partial truth, like saying that they use 2FA, when they only use 2FA in some places, etc.

Every company you hear about getting hacked had great answers on their security questionnaire. I guarantee that Equifax had a policy of patching vulnerabilities for example. These are just paperwork that wastes everyone’s time.

Certifications like SOC 2 are even worse, but they sure do make some people a lot of money.


> I don’t have any idea how they caught on.

Probably the lawyers asked for it.

As in, if $VENDOR gets hacked and we did no due diligence, we are definitely liable for $VENDOR's incompetence. If we can prove we attempted some form of due diligence, with a paper trail, we might have a fighting chance.


The thing is, lawyers aren’t asking for these. It’s mostly security bureaucrats who approve software purchases or SaaS agreements.

Someone will say, how are we making sure that we can trust our vendors?

Someone else will say, let's hire someone and put them in charge of making sure we buy the right things. Someone who claims to be a security expert.

A “security expert” says, we will ask them for their audit reports. As an expert, I will read them and tell you if they are secure.

It’s just paper pushing.


Totally agree with you. However, in my head it was, "hey lawyer, what should we do to protect ourselves against breaches by our vendors?" Lawyer thinks, well, we do due diligence for M&A, financings etc. etc. so why not for onboarding vendors. Course, now this process is codified into law.

I have an idea for a better way... :)


Q: If you use a cloud provider (GCP/AWS/Azure/etc)... A: I don't use a cloud provider.

Q: Describe how credentials are provisioned, managed, and stored. A: Credentials to what exactly? Since I don't use a cloud provider all the data including credentials are safe and secure on the server hosting everything.

Q: If an attacker gained access to an individual developer’s cloud credentials A: The only developer is me. Anything besides what I trigger myself I would notice immediately as out of the ordinary.

Q: What actions could that attacker perform? A: Not much. Externally facing applications are containerized and have access only to the level of data that they should. They would not gain any control of the system.

Q: How would you detect and respond to the breach? A: I'd notice activity in the logs which are monitored for unusual activity. I also check them manually myself besides the automatic monitoring rules.

Q: If you don’t use a cloud provider: why not? A: They are overpriced and I don't trust them.

Q: Describe how staff authenticate to company services (e.g. servers, email, SaaS products), particularly highlighting your use of password managers, 2FA, and SSO. A: Passwords? What b.s. is that. Certs all the way.

Q: What development practices do you use to protect against the OWASP Top 10?

1. Injection. A: Only parameterized queries are used. Non-issue.

2. Broken auth. A: Only use reliable session/auth stuff and/or look carefully at the details of how it works. Don't use crap.

3. Sensitive data exposure. A: APIs only expose the data needed and all access is logged and monitored.

4. XML External Entities. A: I don't use crappy XML parsers with such vulns.

5. Broken access control. A: All applications I run were written by myself and access levels only have access to just the needed data. That access level flows to all places data is accessed. Nothing is given access without an associated current valid session and auth level.

6. Security misconfiguration. A: System is hardened. Containers are used. Everything is kept up to date to the bleeding edge at all times. No legacy crap is used.

7. XSS. A: All access is in house. No external entities are loaded period. I trust no one. All entered data is carefully output in a way to prevent user entered data from having influence on code. Eg: No user data is trusted and everything is validated as being, essentially, plaintext.

8. Insecure deserialization: A: No outside entities are deserialized since I don't accept any. For anything that could have been altered by a user, the parsers are all secure state machines which cannot possibly have any influence on the system.

9. Using Components with Known Vulnerabilities. A: I trust nothing, including libraries. I review the code of many many many things. Obviously I cannot review everything, but I at least choose libraries that are kept up to date and regularly worked on / used by others. I am sure there are 0days present that aren't public yet, but even if they were taken advantage of they wouldn't get access to much.

10. Insufficient Logging & Monitoring. A: All access and operations that affect data are logged. Read access is also logged and monitored to some extent.

Q: Describe the steps a developer or operations person takes to push new code to production.

Q: Have you had any security breaches in the last two years? If yes: please explain the breach, and provide copies of any postmortem/root cause analysis/after-action reports.
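For items 1 and 7 in the list above, the mechanics are easy to demonstrate. A minimal sketch (stdlib Python, hypothetical table and data) of why bound parameters and output escaping neutralize the respective payloads:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

# Item 1 (injection): interpolated into the SQL string, the payload rewrites
# the query; as a bound parameter it is matched only as a literal value.
payload = "alice' OR '1'='1"
unsafe = conn.execute(f"SELECT * FROM users WHERE name = '{payload}'").fetchall()
safe = conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(unsafe), len(safe))  # 1 0  (the unsafe query matched every row)

# Item 7 (XSS): user data is rendered as text, never interpreted as markup.
comment = '<script>alert(1)</script>'
print(f"<p>{html.escape(comment)}</p>")  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The same principle covers both: keep data in the data channel (bound parameters, escaped text) and never let it reach the code channel (SQL grammar, HTML markup).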


It really is dumb. As if a penetration test done in the last n months is proof anything is actually secured. And pointing at the OWASP Top 10 and saying "do this" just screams IMPOSTER.



