You can pick this apart, but the thing I always want to call out is the subtext here about vulnerability research, which Ranum opposed. At the time (the late 90s and early aughts) Marcus Ranum and Bruce Schneier were the intellectual champions of the idea that disclosure of vulnerabilities did more harm than good, and that vendors, not outside researchers, should do all of that work.
Needless to say, that perspective didn't prove out.
It's interesting that you could bundle up external full-disclosure vulnerability research under the aegis of "hacking" in 2002, but you couldn't do that at all today: all of the Big Four academic conferences on security (and, obviously, all the cryptography literature, though that was true at the time too) host offensive research today.
Maybe they were right for their time? I'm not arguing that; I just posit that post hoc rationalisation of decisions made in the past has to be considered against the evidence available at the time.
The network exploded in size, and the number of participants in the field exploded too.
It was 100% a reasonable-sounding theory before we knew any better.
In the real world, if you saw someone going around a car park trying the door of every car, you'd call the cops - not praise them as a security researcher investigating insecure car doors.
And in the imagination of idealists, the idea of a company covering up a security vulnerability or just not bothering to fix it was inconceivable. The problems were instead things like how to distribute security patches when your customers bought boxed floppy disks from retail stores.
It just turns out that in practice vendors are less diligent and professional than was hoped; the car door handles get jiggled a hundred times a day, the people doing it are untraceable, and the cops can't do anything.
Completely agree that offensive research has (for better or for worse) become a mainstay at the major venues.
As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately, vendors are often too unskilled or obstinate to respond properly to disclosures from academics.
For their part, academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the major venues will eventually adopt more involved expectations for reducing harm from disclosures, such as expanding the scope of the “vendor” to other parties in a position to mitigate, like OS or firewall vendors.
> As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation.
That assumes that without these disclosures we wouldn't see active exploits. I'm not sure I agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.
I was going to respond in detail to this, but realized I'd be recapitulating an age-old debate about full- vs. "responsible-" disclosure, and it occurred to me that I haven't been in one of those debates in many years, because I think the issue is dead and buried.
https://hn.algolia.com/?q=six+dumbest+ideas+in+computer+secu...