> Compromising people is the core competency of intelligence, happens all the time, and most cases probably never come to public knowledge.

Yea. It would almost be strange if a security service didn't consider the route of getting "kompromat" on a developer to make them "help."




> consider the route of getting "kompromat" on a developer to make them "help" them

I suppose that’s an option, but it also introduces an additional risk of exposure for your operation: it doesn’t always work, and it makes things much more complicated to manage even when it does.


Does it matter though? They don’t have to say “I am so and so of the Egyptian intelligence service and would like to blackmail you”


They might not even use blackmail, they might just "help out" in a difficult financial situation. Some people are in severe debt, have a gambling problem, are addicted to expensive drugs, or might need a lot of money for a sick relative. There are many possibilities.

The trick is finding the people that can be compromised.


I think you're going overboard on what's required. Take anybody who is simultaneously offered a substantial monetary incentive (let's say 4 years of total current/vesting comp) and threatened with the release of something that we'll say is little more than moderately embarrassing. And this dev is being asked to do something that carries basically zero risk of consequences or exposure for himself, thanks to plausible deniability.

For instance, this is the Heartbleed bug: "memcpy(bp, pl, payload);". You're copying payload bytes (horrible naming conventions) from pl to bp without ensuring that the data behind pl is actually payload bytes long, so an attacker can trivially read whatever happens to sit in adjacent memory. Somehow nobody caught one of the most blatant buffer over-read vulnerabilities imaginable, even though memcpy calls are likely one of the very first places you'd check for this exact issue. Many people think it was intentional because of this, but obviously there's zero evidence, because it's basically impossible for evidence of this to exist. And so accordingly there were also zero direct consequences for the developer who wrote it, besides being in the spotlight for a few minutes and having a bunch of people ask him how it felt to be responsible for such a huge exploit. "It was a simple programming mistake," ad infinitum.
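
A minimal sketch of the pattern, for anyone who hasn't looked at it; simplified and renamed (heartbeat_reply and record_len are invented for illustration, this isn't the actual OpenSSL source):

  #include <stdio.h>
  #include <string.h>

  /* payload is the attacker-supplied length field; record_len is how
     many bytes were actually received in the heartbeat record. */
  static void heartbeat_reply(unsigned char *out, const unsigned char *pl,
                              size_t payload, size_t record_len)
  {
      /* The vulnerable version trusts the length field:
       *     memcpy(out, pl, payload);
       * If payload > record_len, the copy runs past the end of the
       * received data and echoes back whatever sits there in memory. */

      /* Fixed version: check the claimed length against reality first. */
      if (payload > record_len) {
          fprintf(stderr, "bogus heartbeat length, dropping\n");
          return;
      }
      memcpy(out, pl, payload);
  }

  int main(void)
  {
      unsigned char record[16] = "hello";
      unsigned char reply[sizeof record];

      heartbeat_reply(reply, record, 5, sizeof record);      /* fine   */
      heartbeat_reply(reply, record, 60000, sizeof record);  /* attack */
      return 0;
  }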

So, in this context: who's going to say no? If any group, criminal or national, wanted to corrupt people, I really don't think it'd be hard at all. Mixing the carrot and the stick really changes the dynamics vs. basic blackmail, where it's exclusively a personal loss (and with no guarantee that the criminal won't come back in 3 months to do it again). To me, the fact that we've basically never had anybody come forward claiming to be the victim of such an effort means either that no agency (or criminal organization) anywhere has ever tried this, or that it works essentially 100% of the time.


This doesn't look intentional at all, because this is basically what 90% of memory disclosure bugs look like.


Absolutely. And that's the point I'm making here. It is essentially impossible to discern between an exploit injected due to malice and one injected due to incompetence. It reminds one of the CIA's 'Simple Sabotage Field Manual' in this regard. [1] Many of its suggestions read like synopses of Dilbert sketches written about 50 years before Dilbert, because they all happen, completely naturally, at essentially any organization. The manual itself even refers to its suggestions as "purposeful stupidity." You're basically exploiting Hanlon's Razor.

[1] - https://www.openculture.com/2015/12/simple-sabotage-field-ma...


Nobody can tell if they are intentional or accidental.


I suppose the point is that even though any given instance of an error like this is overwhelmingly likely to be an innocent mistake, there is some significant probability that one or two such instances were introduced deliberately with plausible deniability. Although this amounts to little more than the claim that "sneaky people might be doing shady things, for all we know", which is true in most walks of life.


> They might not even use blackmail, they might just "help out"

If the target knows or suspects that what you’re asking them to do is nefarious, then you still run the same risk that they talk before your operation is complete. It’s still far less risky to avoid tipping anyone else off and just slip a trusted asset into a project.


> “I am so and so of the Egyptian intelligence service and would like to blackmail you”

No, but practically by definition the target has to know they’re being forced to “help,” and therefore knows someone is targeting the project. Some percentage of the time the target comes clean about whatever compromising information was gathered on them, which then potentially alerts the project to the fact that it’s being targeted. When it does work, you have to keep their mouth shut long enough for your operation to succeed, which might mean they have an unfortunate accident (introducing more risks), or you have to monitor them for the duration, which ties up resources. It’s way simpler to just insert a trusted asset into a project.


I would guess there are many projects they could target at any given time.


The more projects they target, the greater the risk of being flagged and of preventive measures being engaged by counterintelligence, etc.


I’m reading all this with sadness realizing that one of the Internet’s last remaining high trust spaces is being destroyed.


> one of the Internet’s last remaining high trust spaces is being destroyed

One of the Internet's last remaining high trust spaces is being attacked.

What happens next is still unwritten.


From what I know of today's developer culture, the solution will be for one company, probably Microsoft given their ownership of GitHub, to step in and become undisputed king and single point of failure for all open source development. Developers will say this is great and will happily invite it, with security people repeating mantras about how securing things is "hard" and how "Microsoft has more security personnel than we do." Then MS will own the whole ecosystem. Anyone objecting will be called old or a paranoid nut. "This is how we do things now."


As a positive counterexample: the US recently reduced federal funding for the program that manages CVEs [1]. There was/is a risk of CVE data becoming pay-for-play, but OSS developers have also pushed for decentralization [2]. A recent announcement is moving in the right direction: https://medium.com/@cve_program/new-cve-record-format-enable...

  The CVE Board is proud to announce that the CVE Program has evolved its record format to enhance automation capabilities and data enrichment. This format, utilized by CVE Services, facilitates the reservation of CVE IDs and the inclusion of data elements like CVSS, CWE, CPE, and other data into the CVE Record at the time of issuing a security advisory. This means the authoritative source (within their CNA scope) of vulnerability information — those closest to the products themselves — can accurately report enriched data to CVE directly and contribute more substantially to the vulnerability management process.

> solution will be for one company, probably Microsoft given their ownership of GitHub, to step in and become undisputed king and single point of failure for all open source development.

A single-vendor solution would be unacceptable to peer competitors who also depend on open-source software. A single-foundation (like LF) solution would also be sub-optimal, but at least it would be multi-vendor. Long term, we'll need a decentralized protocol for collaborative development, perhaps derived from social media protocols which support competing sources of moderation and annotation.

In the meantime, one way to decentralize GitHub's social features is to use the GH CLI to continually export community content (e.g. issue history) as text that can be committed to a git repository for replication. Supply-chain security and identity metadata can then be layered onto the collaboration data.
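
A rough sketch of that export step, as a starting point; the flags are from current gh (2.x, check your version), and OWNER/REPO, the paths, and the commit message are placeholders:

  # Snapshot issue history (incl. bodies and comments) as JSON, then
  # commit it so the community data replicates along with the code.
  mkdir -p community
  gh issue list --repo OWNER/REPO --state all --limit 1000 \
      --json number,title,body,comments > community/issues.json
  git add community/issues.json
  git commit -m "mirror issue history for OWNER/REPO"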

[1] https://www.darkreading.com/vulnerabilities-threats/nist-nee...

[2] https://github.com/yoctoproject/cve-cna-open-letter/blob/mai...


It can be a stepping stone towards a world in which we use sandboxing and (formal) verification to safeguard against cultural degradation. There's no alternative; too many bad actors are roaming about. I hate that as much as the next guy :(


The most secure systems are those that are also resistant to rubber hose cryptography.


"Rubber Hose Cryptography" comes in the form of a PR.

"Rubber Hose Cryptanalysis" comes in the back door and waits for you in the dark.


No, it 'comes in the form of' a rubber hose...


They would be really bad at their job, if they didn't try.


Developers would be really bad at their newly expanded job, if they didn't resist.



