
How do you find quantitative risk in general?

I find this approach problematic in computer security systems because assigning a meaningful numerical risk turns out to be very, very difficult.

I'm often dealing with rare events that I'd prefer to happen zero times, and many of the failures I'm concerned about are black swans.

It's even worse than other kinds of engineering risk because the absolute risk of a specific attack changes as you add controls in other areas (the weakest-link effect).
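
A toy sketch of what I mean (every number here is invented): if the attacker concentrates on whichever path is currently cheapest, hardening one path moves the absolute expected loss onto the other.

    # Toy model of the weakest-link effect: attempts go down whichever path
    # is currently cheapest to exploit, so hardening one path changes the
    # absolute risk carried by the other. All numbers are invented.
    def expected_losses(paths, attempts=1000, loss_per_success=10_000):
        """paths maps name -> (attack_cost, success_probability)."""
        weakest = min(paths, key=lambda name: paths[name][0])
        return {
            name: (attempts if name == weakest else 0) * p * loss_per_success
            for name, (cost, p) in paths.items()
        }

    before = expected_losses({"phishing": (5, 0.02), "vpn_exploit": (2, 0.01)})
    # Harden the VPN: its attack cost rises, phishing becomes the weakest link,
    # and phishing's absolute expected loss jumps even though nothing about
    # the phishing controls themselves changed.
    after = expected_losses({"phishing": (5, 0.02), "vpn_exploit": (50, 0.01)})
    print(before)  # {'phishing': 0.0, 'vpn_exploit': 100000.0}
    print(after)   # {'phishing': 200000.0, 'vpn_exploit': 0.0}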

Often the importance of certain things (say, the value of certain UX security controls) depends on user behaviour, but usually the user research is lacking and too expensive to do (i.e. it would cost more than the feature itself).

Vulnerability management is even worse! Any policy based on "risk scoring" (rather than evaluating specifics in context) ends up being basically scientism.

I find most numerical security exercises exasperating, and what I've observed is that most people end up shuffling the numbers until the scores that pop out match their intuition and judgement. I prefer not to even play that game if I can avoid it.

Perhaps I'm doing it all wrong.

I will say that in certain fields (fraud in particular), where you have enough data points to make meaningful decisions, numerical approaches do work really well.
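
A sketch of why the volume of data matters (every number invented, plain normal-approximation interval):

    import math

    # With millions of observed transactions, a fraud-rate estimate is tight
    # enough to make decisions against. Illustrative numbers only.
    n_transactions = 5_000_000
    n_fraud = 2_500                  # observed fraud events
    avg_loss = 120.0                 # average loss per event, in dollars

    p_hat = n_fraud / n_transactions
    stderr = math.sqrt(p_hat * (1 - p_hat) / n_transactions)
    ci_low, ci_high = p_hat - 1.96 * stderr, p_hat + 1.96 * stderr

    print(f"fraud rate: {p_hat:.4%} (95% CI {ci_low:.4%} to {ci_high:.4%})")
    print(f"expected loss per 1M transactions: ${1_000_000 * p_hat * avg_loss:,.0f}")

Run the same arithmetic with a handful of events, or none, and the interval is too wide to act on, which is exactly the rare-event problem above.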




People tinker with model numbers to match intuition in pretty much every field. Sterman showed[0] that expectation formation (i.e. estimates/forecasts tracked over time) strongly follows a moving exponential trendline, even for what should be very sophisticated forecasters.

The explanation? Forecasters fudge the numbers because (1) it takes time to cognitively or emotionally assimilate changes in the outside world and (2) nobody wants to stand out in a crowd of forecasters. Being the same kind of wrong as everyone else is acceptable in polite society.
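
For what it's worth, the pattern he documents is essentially adaptive expectations, i.e. exponential smoothing; a minimal sketch of the mechanism (my own toy code, not anything from the paper):

    # Adaptive expectations / exponential smoothing: each new forecast closes
    # only a fraction (alpha) of the gap between the last forecast and the
    # last observed value, which produces the moving-exponential-trendline fit.
    def adaptive_forecasts(actuals, alpha=0.3):
        forecast = actuals[0]
        out = []
        for actual in actuals:
            out.append(round(forecast, 1))
            forecast += alpha * (actual - forecast)
        return out

    # A step change in the underlying series: forecasts lag it and converge
    # only gradually, the qualitative behaviour described for real forecasters.
    actuals = [100] * 5 + [150] * 10
    print(adaptive_forecasts(actuals))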

I don't see the problem of fudging as necessarily a refutation of the attempt to formalise estimation techniques, though. I realise that this puts me in No True Scotsman territory ("you didn't do it right!"), but that's more or less how it is.

I do broadly agree with what I take to be your dislike of CVSS. I spent time on a very close reading of it a few years ago and came away quite dissatisfied. I'm not sure we yet have an ontology that can be backed by a testable theory, or even whether there exists a large enough body of trustworthy data that could be subjected to factor-analytic techniques and/or clustering and/or some light torture.
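
If such a dataset ever did turn up, the mechanics wouldn't be the hard part; something like this (scikit-learn on stand-in random features; nothing here reflects real vulnerability data) is the easy bit:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import FactorAnalysis

    # Stand-in matrix: rows are vulnerabilities, columns are observed features
    # (exploit availability, attack surface, patch latency, ...). The features
    # here are random noise; real, trustworthy data is exactly what's in doubt.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))

    # Factor analysis: do a few latent factors explain the observed features?
    factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

    # Clustering in factor space: do vulnerabilities fall into natural groups?
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)

    print(factors.shape, np.bincount(labels))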

[0] "Expectation Formation in Behavioural Simulation Models", https://dspace.mit.edu/bitstream/id/1773/SWP-1826-15672771.p...


I highly recommend reading ETS and then 'Measuring and Managing Information Risk'; both talk extensively about this. Magoo has written quite a bit as well.

I really appreciate the mission-support-oriented perspective in ETS; it's something a lot of security practitioners could learn from.



