Minimum Viable Secure Product (mvsp.dev)
183 points by arraypad on Nov 3, 2021 | 46 comments



If you broke the practice of securing a company into software security, network/platform security, and corpsec, I think the proper prioritization from an engineering perspective would be corpsec > software security > network/platform security.

Thankfully, this checklist doesn't lead startups into a quagmire of stupid network security tools, scanners, and assessments. But it also leaves out corpsec almost completely ("single signon" is an application security control in the checklist, which wildly misses the point), so we'll call that a wash.

What I'll say is that if you're concerned about closing deals and filling out checklists, the appsec controls here aren't going to move the dials much for you, and the corpsec stuff that it's missing is going to trip you up. I'm not in love with it.

Also: for most companies, you're going to want to be well past product-market fit before you start engaging consultants to assess your code. Most startups are well past 30 engineers before they have their first serious assessment. Crappy assessments can hurt as much as they help, and they're the kind you get if you're shopping for $5k-10k pentests while delivering with 5 engineers.


With your excellent corp/software/net breakdown, this feels like it isn't actually aimed at a company-wide level. This feels like it's aimed at a product engineering leader who has no real power over the real corpsec concerns - like if the company uses SSO - but real power over the product itself.

I suppose it makes sense in a context where someone else is handling corpsec adequately, but if that's the idea then it's not very well explained.


Could you explain these terms a little more, perhaps with examples? In particular, what is corpsec? Google was unhelpful here.


Software security is vulnerability research conducted on the software you ship. The OWASP Top 10 is a software security artifact. Some people call this "appsec", though that tends to imply web software security, which is just a subset of software security.

Network/platform security is network access control rules, host configuration, to some extent cloud IAM†, and patching.

CorpSec is the stuff you do to address attacks targeted at your team and the computers and services you use to keep the company running --- laptop and endpoint security, Google Apps 2FA, single sign-on, onboarding/offboarding, and that kind of stuff.

IAM and cloud access/monitoring can kind of bleed into both of the other buckets depending on the aspects you're thinking about, but like 80% of it belongs in the net/platform bucket for most companies.


I dislike any compliance document that requires paid & external vendors, so would love to see that factored out

SOC I vs SOC II helps get at these kinds of distinctions in practice. I've seen a lot of conversations enabled by that. "We did the SOC I software checklist. At some point, we'll pay vendors $50K-250K for SOC II, feel free to fast track that now as part of our contract."

I get why it's there, but this kind of thing is also why, despite being designed to address a real need, initiatives like FedRAMP have been slow & expensive disasters in practice. We should be pushing toward self-serve & automated accreditation, all the way down to 1-person projects. Anything that puts third parties, people, and $$$ in the critical path needs to be split out.


If you have a way to automatically handle all the auditing that goes into evaluating all the not-strictly-technical controls that are part of SOC and PCI-DSS and similar, a lot of people will be very interested.

Based on this list, how would you automatically validate that vulnerability reports are handled in a reasonable timeframe? How would you do self-serve validation for incident handling timelines? How do you quickly and easily automate assessments of subprocessor data handling?

Quick, easy, strong, self-service, automated accreditation is a wonderful goal! It's critically important to make this stuff as easy as possible because there are features to ship and customer needs to meet. Security must be a baseline for everyone, and achievable by everyone, or else it's just a way for big companies to squeeze out small ones. It just might be worth considering carefully that there may be systems at hand that blend humans and computers. It may perhaps be possible that information security could be more than just an engineering problem.

If I may propose a different framing? Information security is primarily a human endeavor. It is mostly about how humans and systems made of humans behave. Information security is about process. Some parts of it can be partially handled by computers, but most of it is deeply not susceptible to automation.


The SOC audit isn't really doing any meaningful technical evaluation. You're not going to get any engineering benefit from it.


In my experience, the auditors themselves aren't really going to provide much value in terms of an evaluation.

It was a little like having a Physics teacher ask each student to write their own final exam... and then take it. The teacher opined on the number of questions being asked but that was about it. All they are doing is recording your questions and your answers and certifying that you were indeed the person that took that test.

That being said.. I do think you can learn a lot going through the experience of a SOC II. You force yourself to drown out the noisy world a bit and think really critically and thoroughly about security. You need to learn how to articulate security to the entire company, to clients, etc. And you need to back this up with data... not hand waves.

SOC II was a pain... but a good learning experience too.


It verifies you have answers to the questions asked. "Has your GDPR data deletion process met its 30-day requirement?" means you (1) have this process, (2) are evaluating it continually, and (3) are evaluating it for correctness and timeliness. What could be more important than verifiability of correct processes?


I agree that living up to standards, and specifically the engineering/operations improvements needed to meet them, is valuable.

However, it's not hard to imagine automated flows for verifying this. In this case, specifying endpoints and providing automation scripts for doing GDPR flows takes care of most of it.

A lot of these are converging on the same check boxes, so get rid of the people and $ aspect. A team should be able to put together COTS OSS, run on a cheapo cloud, and test as part of CI/CD. We need to reach the point where a properly configured RoR/Django app on Docker + some sidecars (ELK, autotls, ...) can do that.
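
As a sketch of what that could look like in practice, here's a hypothetical CI step that exercises a GDPR erasure flow end to end; the endpoint paths, token, and compressed deadline are all invented for illustration, not part of any real tooling:

    # Hypothetical sketch of a CI check for a GDPR deletion flow.
    import time
    import requests

    BASE = "https://staging.example.com/api"
    HEADERS = {"Authorization": "Bearer TEST_TOKEN"}

    def check_deletion_flow(max_wait_seconds: int = 60) -> bool:
        # Create a throwaway subject, request erasure, then poll until the
        # record is actually gone (a compressed stand-in for the 30-day SLA).
        subject = requests.post(f"{BASE}/test-subjects", headers=HEADERS,
                                json={"email": "ci-check@example.com"}).json()
        requests.post(f"{BASE}/privacy/erasure-requests", headers=HEADERS,
                      json={"subject_id": subject["id"]})

        deadline = time.time() + max_wait_seconds
        while time.time() < deadline:
            resp = requests.get(f"{BASE}/test-subjects/{subject['id']}",
                                headers=HEADERS)
            if resp.status_code == 404:
                return True  # data is gone within the window
            time.sleep(5)
        return False

    if __name__ == "__main__":
        assert check_deletion_flow(), "GDPR erasure flow did not complete in time"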


I agree about humans and process; it's just that, from a shift-left perspective, the specific checks that have become typical no longer need people.

Take a look at how AWS/Azure Marketplace programs and supporting vendors are using certified components and automation in multiple layers to get rid of most of the craft. It's possible.

People do make sense for parts, but we need to cut that part down by a ton in effort and $. I might feel better about the third-party thing if the NSA started, as part of their cyber defense responsibility, providing free annual audits upon request (assuming heavy automation as per above). We should be pushing to enable one-man shops to do this stuff, even if that means tighter happy paths for how they build and run. Vendors can compete to make their stuff easy to add to that happy path and to add value beyond the regulatory lock-in.


Think you might mean SOC II Type 1 and SOC II Type 2 (vs SOC I)?

SOC II Type 1/2 cover the "scopes" that more generally pertain to "secure" development.

A SOC II Type 1 is basically coming up with that checklist. The third-party auditor will then measure your performance against that checklist (you provide the evidence) for a single point in time. The Type 1 is considered the baby step into the Type 2.

A SOC II Type 2 is generally taking that checklist from the Type 1... but the auditor is randomly sampling for evidence over a time range (usually a year). Generally once you've done your first Type 2... you're doing it continually each and every year.


idk why you are being downvoted. I think the OP had this misunderstanding.


Yes, typed too quickly. I like the continuous monitoring aspect of Type 2, just not the vendor burden for it. Want to test continuous monitoring? Let companies submit API endpoints that a program can periodically exercise and check the responses from. The rest can be a questionnaire.


>I dislike any compliance document that requires paid & external vendors, so would love to see that factored out

Yes, I agree, although my data might just be bad/skewed; my experience is as a `freelance-security-auditor` (just a side hobby). Every time I reported a serious website vulnerability to a company, the initial response was, most of the time, "But we paid for pen-testing/sec-audits!" or something to that effect.


I get why it's there, but then I also get why the guy who runs my corner store gives Jimmy a $200 interest-free loan every Thursday


I feel the lack of a "motivation" blurb for every recommendation may lead to blindly checking boxes.

Going through a grueling security compliance process at a previous job taught me that no single list of recommendations covers all the edge cases and complexity of modern software. At least with a "motivation" section, I was able to piece together what was actually expected of me. It also helps with prioritization if the "motivation" includes the consequences of non-compliance.

For example: the "data flow diagram" is probably meant to be part of threat modeling, where you're forced to think like an attacker. That may sound like a tedious paper-pusher task that could be put off, but actually IMO it's really nice to do early on since a threat model can help tell you where to focus your "minimal" security efforts: https://owasp.org/www-community/Threat_Modeling


I've found rationales and similar to be double-edged in practice. They're immensely valuable to people like yourself who just want to understand the point of a question to better implement what it's getting at. They're also fodder for people whose primary interest lies in finding ways to undermine the goals.

Writing things like this, with clear requirements and no justifications, is what I do when I expect people will try and use explanations to weasel out of requirements.

Example: I once had a coworker try and justify sending full credit card data through Kafka as not really storing it if you set the retention time to under a second. So it was fine under PCI-DSS, right?


IP address logging can be problematic; my company considers it PII to store the full IP (but not if you remove the last 3 digits). I wouldn't include that as a minimum. I also think it's weird to include SSO without including some form of opt-in 2FA. TOTP isn't all that difficult to implement if you are already storing passwords. Note also that I said opt-in, so if 2FA is inconvenient, that is fine.
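
To back up the "not all that difficult" claim, here's a minimal sketch of opt-in TOTP using the pyotp library; the user-model fields and the app name are assumptions:

    # Minimal TOTP enrollment/verification sketch using pyotp.
    # How you persist the secret and rate-limit attempts is up to you.
    import pyotp

    def enroll(user) -> str:
        # Generate and store a per-user secret; return an otpauth:// URI
        # the user can load into their authenticator app via QR code.
        secret = pyotp.random_base32()
        user.totp_secret = secret  # assumed field on your user model
        return pyotp.TOTP(secret).provisioning_uri(
            name=user.email, issuer_name="ExampleApp")

    def verify(user, code: str) -> bool:
        if not user.totp_secret:
            return True  # 2FA is opt-in; nothing to check
        # valid_window=1 tolerates one 30-second step of clock drift
        return pyotp.TOTP(user.totp_secret).verify(code, valid_window=1)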

I also don't feel like all of the requirements should be the minimum. A good example of a minimum requirement is the redirect-to-TLS requirement. External pen testing is one of the requirements that feels like a nice-to-have rather than a minimum bar. Minimum standards should be the stuff that, if you don't do it, technically savvy people would be very concerned about someone using your product: lack of TLS, plain-text passwords, lack of input sanitization, using an ancient web server, etc.


Why should anyone care what your company thinks?

IP addresses aren't uniquely identifying. I think GDPR only considers them PII with additional information, no?


IP addresses can be uniquely identifying. I've had the same IP for years, it's static, and if you did a Google search for it a few years ago, you got straight to my full name and location (because that used to be on whois database for my domain names, which point at my IP; thankfully they no longer include that info in whois).


They don't meet the legal standard for identifying in the US so I think it's suspect to assume they are identifying in other contexts. I know some people will try.


IPs are personal data.

see https://gdpr.eu/eu-gdpr-personal-data/ section "Identifiable individuals and identifiers"


IPv6 without privacy extensions is pretty close to uniquely identifying.


Very conflicted on SSO. I love it in an enterprise setting where it is assumed that your information is shared for insurance, payroll, or just work-related needs. But I hate it in a consumer setting where I just want good ol’ username and password. Of course the password/phrase should be long, uncompromised, unique, salted, and hashed, and all of that, but there is just so much friction for me in SSO. I’ve even seen Next.js’s default authentication module basically drop support for username/password under the dubious claim that it is inherently insecure.


"Ensure non-production environments do not contain production data"

Yeah right, so how can you reproduce bugs that only appear with production data in an environment that is not production?


From my experience, I hardly ever need the "real data" to investigate a bug. Most of the time I can understand the problem and reproduce the bug in the test environment based on production logs (like exceptions and stack traces).

If you really need more context to reproduce a bug, you can always try to anonymize the data before moving it out of the production environment, and delete the anonymized data from the test environment once the investigation finishes, just to be on the safe side.
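
As a rough sketch of that anonymization step (the column names and salt handling are assumptions; adapt to your schema):

    # Pseudonymize rows before copying them out of production.
    import hashlib

    SALT = b"rotate-me-per-export"          # keep this out of the test environment
    PII_COLUMNS = {"email", "full_name", "ip_address"}

    def pseudonymize(row: dict) -> dict:
        out = {}
        for key, value in row.items():
            if key in PII_COLUMNS and value is not None:
                digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:12]
                out[key] = f"anon_{digest}"   # stable per value, so joins still work
            else:
                out[key] = value
        return out

    # usage: rows = [pseudonymize(r) for r in production_rows]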


Some items were alright, but I agree with what tptacek wrote elsewhere in this thread (https://news.ycombinator.com/item?id=29102385).

Another list that I think is better for most startups: https://www.goldfiglabs.com/guide/saas-cto-security-checklis...


Great idea.

Currently it's still missing a lot of stuff I'd consider to be minimal on the corpsec side; the earlier it's implemented, the easier it is to keep in the company rather than trying to add it later after inertia sets in. Namely:

* Enforce security keys on the SSO side
* Set up email security - SPF, DMARC, explicitly enumerating every source of email sent from your domain, with full enforcement to catch new sources before they become critical and untracked (a quick check is sketched below)
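
On the email security point, here's a rough sketch of an automated posture check using the dnspython library; the domain and the pass/fail criteria are illustrative assumptions, not a substitute for enumerating your real senders:

    # Check SPF/DMARC posture for a domain with dnspython.
    import dns.resolver

    def txt_records(name: str) -> list[str]:
        try:
            return [b"".join(r.strings).decode()
                    for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []

    def check_email_security(domain: str) -> dict:
        spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
        dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
        return {
            "spf_present": bool(spf),
            "spf_hard_fail": any(r.rstrip().endswith("-all") for r in spf),
            "dmarc_present": bool(dmarc),
            "dmarc_enforcing": any("p=reject" in r or "p=quarantine" in r for r in dmarc),
        }

    print(check_email_security("example.com"))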


It's a good start but I feel like this should go hand-in-hand with privacy by default. Security is about protecting data (as well as preventing fraud and some other things) and data minimization reduces the attack surface.

In other words, this reads a bit like advice on how to protect your safe full of wads of cash, when the real question should be whether you need to store that money in cash in the first place.


“Minimum” secure product is interesting. I’ve definitely gone down the slippery slope of going too secure, and it doesn’t feel like the wrong thing at the time, but looking back it slowed us down. Very hard to craft a minimum.


Maybe a way to help that would be to see if a library/framework is compliant. For example, Dream in OCaml automatically adds CSRF tokens to your forms https://aantron.github.io/dream/#forms which would comply with a subpoint of point 3.3. I took that example since OWASP compliance is one of the big points and Dream isn't well known; I don't want to start a framework war here. Since there are constantly new developers, I think it would help to talk more and show more security best practices.
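
For newer developers, here's a framework-agnostic sketch (plain Python, nothing specific to Dream) of the token issue/verify step that such frameworks automate for you:

    # What "automatically adds CSRF tokens" roughly does under the hood.
    import hashlib
    import hmac
    import secrets

    SECRET_KEY = secrets.token_bytes(32)  # per-application secret

    def issue_csrf_token(session_id: str) -> str:
        # Bind the token to the user's session so it can't be replayed elsewhere.
        return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def verify_csrf_token(session_id: str, submitted: str) -> bool:
        expected = issue_csrf_token(session_id)
        return hmac.compare_digest(expected, submitted)

The framework embeds the issued token in every rendered form and rejects POSTs whose token doesn't verify, which is exactly the part you don't want every new developer reimplementing by hand.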


I think one of the key lessons here is that a framework fundamentally cannot be compliant. Too much of what is required is simply beyond what any framework can deliver and a matter of human-based process.

I have no doubt of the technical sophistication of Dream, but I do have some doubt that it provides a way to automatically assess the data handling of subprocessors for me (4.3).


I edited my comment to specify that I only talked about one specific point, I should have included that from the beginning. I agree with you, no web framework can ensure 4.3, but web frameworks could ensure/be compatible with 2.1, 2.2, 2.3, 2.4, 2.5, 2.7 (partially), 3.3 (partially). That's far from everything, but if all of that is automated, that's more energy that can be used for something else.


That's fair! I think you can get a lot of that from modern frameworks, provided you are willing to stick to their conventions and never deviate.

It's been my experience that sooner or later, every app has a reason to deviate. Maybe you wind up constructing a large analytical query as a string because nobody loves joining a dozen tables in an ORM. Maybe you find yourself stuck at a vulnerable version of a package because that branch has been EoL'd and the patched versions require major logic reworking.

So I think you're right - you can get a lot of the tools to be compatible with many of these items from a framework. I'm just not so sure about how much of that is automatically inheritable by an actual working app.


My challenge with this guidance is the lack of focus on what is truly minimal, as this list is quite extensive. As a security practitioner, I would simply threat model the product and provide minimum requirements to release.


This seems kinda cargo culty. An MVP necessarily meets the security requirements of its customers; otherwise it would not be purchased, and hence is not an MVP, as it is unviable in the market.

If your product is solving a pain point, things like SOC compliance will be granted exemptions, or an attestation will be provided until an audit is completed; this allows the customer to reap the benefits of the product while the i's are dotted and t's crossed.


Agree, but I'd also point out that this argument can be expanded to essentially make all such efforts meaningless - actual security is never captured by these documents, because it is a function of the system itself, and these documents can, by definition, only point to some aspects of it...

p =~ np, what can I say.


Putting excess effort into things that your customers don't need isn't MVP.

These efforts aren't meaningless, they are very important when you get into enterprise sales once your product has found market fit and is maturing.


I'd love it if https://mvsp.dev/mvsp.en/index.html were a checklist rather than bullet points and tabular data - that would make it actionable and easy to include as part of design/code review.


That's a great idea!

I hope that the idea takes hold.


I wanted to say something snarky (oh, they added 'secure' to MVP, better call a patent lawyer). OK, I would have thought about it and come up with something, but this looks great. I went through the checklist and it makes a ton of sense, covers so many important things that people often miss, and will certainly help improve security if used.


What are 'insecure JavaScript functions' mentioned in 3.3?


The most obvious one that comes to mind is eval. However, a custom function could also be dangerous, e.g. a function that posts some sensitive information to a REST API where the attacker can control the variable that defines the API endpoint address, and thus can send the information to themselves.


how would the list differ for B2C users?


Your page ( https://mvsp.dev/mvsp.en/index.html ) is broken on small screens, e.g. 13-inch laptops. This is because of the use of padding and width: 100% at the same time. You need to remove .w-full from the styles in your content.

Edit: I have opened a quick PR.



