Security Engineering – A Guide to Building Dependable Distributed Systems (cam.ac.uk)
205 points by go-red-team on April 5, 2020 | 15 comments



This is a great book!

After reading it I found it much harder to enjoy movies showing bad security, though (such as heists, nuclear anything, ...).

E.g. from the book I learned about the IAEA recommendations for safekeeping nuclear material [1], and it's pretty clear that smart people spent some time thinking about the various threats.

Anyway, rambling. It's a great and very entertaining book, go read it!

[1] https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1481_web.p...


> Whitfield Diffie and Martin Hellman argued ... that such a machine could be built for $20 million in 1977. IBM, whose scientists invented DES, retorted that they would charge the US government $200 million to build such a machine. (In hindsight, both were right).


Great book (I own a past edition). Has the content been significantly refocused on distributed systems? At first it looked like a new book, but I think he updated the title (and the content).


Recommend this book; the earlier editions are a great introduction to what goes wrong with security.


I got recommended this book the other day. It reads like a novel, which makes it an easy read, but it's also pretty large and intimidating. The kind of book that would have benefited from being split into several volumes.


Still one of the scariest books I've ever read.


Ross Anderson is near the top of a short list of security researchers: when they write something, you should stop and read it.


Who else would you put on that list? I'd love some suggestions


Awesome. As a CS student, I'm trying to take in as much as I can. Just finished the Phoenix Project. Will start on this now.


Do yourself a favor and read 'Engineering Trustworthy Systems' by O. Sami Saydjari


Are you saying this because you have read both books and prefer ETS?


Yes, absolutely. What I like about ETS, and quantitative risk in general, is that it makes all of the advice in these books actionable.


How do you find quantitative risk in general?

I find this approach problematic in computer security systems because assigning a meaningful numerical risk turns out to be very, very difficult.
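
To be concrete about what's being asked for: the standard quantitative exercise is something like an expected annual loss computed from an assumed event-frequency distribution and an assumed loss-magnitude distribution (FAIR-style). A toy Monte Carlo sketch, where every distribution and parameter is invented purely for illustration:

    import random

    def simulate_annual_loss(trials=100_000):
        # Toy FAIR-style estimate: annual loss = event frequency x event magnitude.
        losses = []
        for _ in range(trials):
            # Number of loss events this year: Poisson process, assumed mean 0.3/year.
            events, t = 0, random.expovariate(0.3)
            while t < 1.0:
                events += 1
                t += random.expovariate(0.3)
            # Size of each event: lognormal around ~$50k, also assumed.
            losses.append(sum(random.lognormvariate(10.8, 1.2) for _ in range(events)))
        losses.sort()
        return sum(losses) / trials, losses[int(0.95 * trials)]

    mean, p95 = simulate_annual_loss()
    print(f"expected annual loss ~ ${mean:,.0f}, 95th percentile ~ ${p95:,.0f}")

The hard part is obviously not the arithmetic; it's whether those two distributions mean anything.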

I'm often dealing with rare events that I'd prefer to happen 0 times. Very many of the kinds of failures I'm concerned about are black swans.

It's even worse than other kinds of engineering risk, because the absolute risk of a specific attack changes as you add controls in other areas (the weakest-link effect).

Often the importance of certain things (say, the value of certain UX security controls) depends on user behaviour, but usually the user research is lacking and too expensive to do (i.e. it would cost more than the feature).

Vulnerability management is even worse! Any policy based on "risk scoring" (rather than evaluating specifics in context) ends up being basically scientism.

I find most numerical security exercises exasperating and what I've observed is that most people end up shuffling numbers until the scores that pop out match their intuition and judgement. I prefer not to even play that game if I can avoid it.

Perhaps I'm doing it all wrong.

I will say that in certain fields (fraud in particular), where you have enough data points to make meaningful decisions, numerical approaches do work really well.


People tinker with model numbers to match intuition in pretty much every field. Sterman showed[0] that expectation formation (i.e. estimates/forecasts tracked over time) strongly follows a moving exponential trendline, even for what should be very sophisticated forecasters.

The explanation? Forecasters fudge the numbers because (1) it takes time to cognitively or emotionally assimilate changes in the outside world and (2) nobody wants to stand out in a crowd of forecasters. Being the same kind of wrong as everyone else is acceptable in polite society.
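
Concretely, "following a moving exponential trendline" is the classic adaptive-expectations pattern: each new forecast is last period's forecast partially adjusted toward the latest observation, i.e. an exponentially weighted moving average. A toy sketch (the smoothing constant is made up):

    def adaptive_forecasts(observations, alpha=0.3):
        # alpha is the adjustment speed: 0 = never update, 1 = chase the last data point.
        forecast = observations[0]
        out = []
        for x in observations:
            forecast += alpha * (x - forecast)   # partial adjustment toward reality
            out.append(forecast)
        return out

    # A step change in the underlying signal: the forecasts lag it, then slowly converge.
    print([round(f, 1) for f in adaptive_forecasts([10] * 5 + [20] * 10)])

Which is exactly the lag-behind-then-catch-up shape you'd expect from (1) and (2).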

I don't see the problem of fudging as necessarily a refutation of the attempt to formalise estimation techniques, though. I realise that this puts me in No True Scotsman territory ("you didn't do it right!"), but that's more or less how it is.

I do broadly agree with what I take to be your dislike of CVSS. I spent time on a very close reading of it a few years ago and came away quite dissatisfied. I'm not sure we yet have an ontology that can be backed by a testable theory, or even whether there exists a large enough body of trustworthy data that could be subjected to factor-analytic techniques and/or clustering and/or some light torture.
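
For anyone who hasn't done that close reading: the v3.1 base score is essentially a handful of fixed weights pushed through a short formula. A sketch of just the scope-unchanged case, with the weights as published in the spec (from memory, so check the spec before relying on it):

    AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
    AC  = {"L": 0.77, "H": 0.44}                         # attack complexity
    PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # privileges required (scope unchanged)
    UI  = {"N": 0.85, "R": 0.62}                         # user interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

    def roundup(x):
        # Spec-style round-up to one decimal place, avoiding float surprises.
        n = round(x * 100000)
        return n / 100000.0 if n % 10000 == 0 else (n // 10000 + 1) / 10.0

    def base_score(av, ac, pr, ui, c, i, a):
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss
        exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
        return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

    print(base_score("N", "L", "N", "N", "H", "H", "H"))   # AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8

Where the 6.42, the 8.22 and the per-metric weights come from is precisely the kind of thing a testable theory would have to account for.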

[0] "Expectation Formation in Behavioural Simulation Models", https://dspace.mit.edu/bitstream/id/1773/SWP-1826-15672771.p...


I highly recommend reading ETS, then 'Measuring and Managing Information Risk'; both talk extensively about this. Magoo has written quite a bit as well.

I really appreciate the mission-support-oriented perspective in ETS; it's something a lot of security practitioners could learn from.



