It is open source software, and it can reverse engineer programs built for a lot of different architectures and platforms.
Some people may be worried about installing a piece of software on their computer that comes from the NSA. I don't think that there are real reasons to worry. One of the tasks of the NSA is defending against cyber attacks. Having more people with good tools helps the defense. Also, you can be pretty certain that some security people have been closely looking at the sources to see if it contains any suspicious features. Besides, if the NSA really wants to install some software on your computer, they can probably do it themselves without your involvement.
I was quite suspicious of it when it was first announced, but an open source RE tool is probably the stupidest place to put a backdoor. Author considerations aside, it’s a great tool, and does pretty well with decompiling.
Isn't it written in Java? Why would it need to do that?
IDA has had watermarks and all sorts of other fancy stuff. The real challenge with IDA used to be extending the demo to allow it to save databases, before they started publishing a working (if older) free version.
I like IDA a lot more than all the other tools, especially since I consider its user experience and hotkeys far superior, but the other day I looked at something where the disassembly and decompilation from one of the r2+Ghidra UIs was great compared to what IDA gave me. I think that was because FLIRT signatures were missing and I can't get Hex-Rays to sell me a new license.
I would be curious to know if anyone has audited this for malicious code, or how one would go about doing that in the first place. Is that kind of software auditing a use case for Ghidra? A demo of using Ghidra to audit Ghidra would be interesting I suppose.
It's used to reverse engineer an unknown binary when you don't have the matching source code. Since Ghidra is already open source, there would be no point in auditing Ghidra itself that way, except for learning purposes. It might be useful for reverse engineering a closed-source driver so you can write an open source one from scratch.
A security audit is still useful when you have sources to the program. There may still be some intended or just accidental security problems with it. Having the sources makes such an audit a lot easier to do.
Is there a standard process anywhere for vetting some software for information leakage? I would imagine that someone would deploy the software behind an MITM proxy and then look at the traffic, but it would be nice if there was some standard process or framework for this somewhere.
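I don't know of a single standard framework for this, but to make the MITM-proxy idea concrete, here is a minimal sketch of a mitmproxy addon that just records which hosts the software under test talks to. The filename and the idea of only logging new hosts are my own illustrative choices, not part of any standard process.

    # observe_hosts.py - minimal mitmproxy addon sketch.
    # Run with:  mitmdump -s observe_hosts.py
    # then point the software under test at the proxy (e.g. via HTTPS_PROXY).
    from mitmproxy import http

    seen_hosts = set()

    def request(flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if host not in seen_hosts:
            seen_hosts.add(host)
            print("new outbound host:", host, flow.request.pretty_url)

You would still have to install the proxy's CA certificate on the test machine (or accept that pinned connections will fail), which is itself a useful signal about what the software is doing.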
It's a huge code base, of course there are security issues. Same way IDA and radare have security issues. People who reverse malware take that into account.
I would expect something like a "trusting trust" attack: when the open source code is compiled with a particular compiler, it activates a different code path (written into the compiler itself), so that the final resulting binary does not correspond to the source code as it would if compiled with another compiler.
And if this resulting binary is distributed, audits of the source code wouldn't catch these modifications.
1) There are many Java compilers with diverse origins. Try more than one.
2) The binary (or jar) can't lie about what it contains. Take it into an air gap and reverse engineer it; what's there is there. This includes compilers. (A small comparison sketch follows this list.)
3) See the poster's comment about the impracticality of stopping someone with the money, talent, skills, and patience of the NSA :)
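To make point 2 a little more concrete, here is a rough Python sketch that hashes every entry in two independently obtained (or independently compiled) jars and reports which ones differ. Different compilers will legitimately produce different bytecode, so this only tells you where to point javap, a decompiler, or Ghidra next; the script name and usage are just illustrative.

    # compare_jars.py - sketch: hash every entry in two jars, report differences.
    # Usage: python compare_jars.py first.jar second.jar
    import hashlib
    import sys
    import zipfile

    def digests(jar_path):
        """Map each file entry in the jar to the SHA-256 of its bytes."""
        with zipfile.ZipFile(jar_path) as jar:
            return {name: hashlib.sha256(jar.read(name)).hexdigest()
                    for name in jar.namelist() if not name.endswith("/")}

    a, b = digests(sys.argv[1]), digests(sys.argv[2])
    for name in sorted(set(a) | set(b)):
        if a.get(name) != b.get(name):
            print("differs or missing:", name)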
I don't think there is anything fishy here, although I don't think the NSA can just install anything on my computer, even if I were based in the US. There is a lot of bluffing when it comes to cyber security.
Bribing people in generally corrupt and poor countries to smuggle in a USB stick is kind of different from just breaking into a random person's home in a country with relatively low corruption. The latter might actually be more difficult. It obviously depends on what your end goal is.
The most important step was clearly obtaining the PLC zero-days to infect the physical machines. It's unclear to me why you chose to be so deliberately obtuse, but in any case, for your own personal edification, feel free to read some details on how it went down -
This was a complicated operation that had many difficult steps. If any of these steps had not worked, the entire project would have failed. Pointing at one of these steps as the most important does not show much appreciation for the other steps.
That was the strategy for that situation. They can use national security letters and gag orders to force multinationals to silently turn over root certificates, they can intercept hardware you buy in the mail, they can MITM your connection with the full cooperation of your ISP. Anyone who thinks they’re going to defend themselves against a targeted attack by the most sophisticated and well funded state-level attacker in the world is dreaming.
Ever heard of a bump key? It's easy to break into a home in a country with relatively low corruption. One might even say easier. It just comes down to whether you have one person corrupt enough to use it. A locked door is nothing more than a social contract. Door is locked means do not come in. Tell that to the person with a bump key.
It's best not to assume a physical presence is required. Who is to say that the people at Let's Encrypt, NoScript, any of the firmwares' authors, or many other places weren't compromised years ago? It's sometimes worthwhile to reflect on where trust is placed.
I don’t know. Seeing how extensively these key signing ceremonies (Let’s Encrypt included) are designed against tampering and collusion, I’d be shocked and impressed if they were infiltrated.
We’ve found instead that the NSA can just take over your unpatched computer easily instead of putting in the effort of hacking Let’s Encrypt.
Unfortunately, a child can take over an unpatched computer using public exploits.
Please explain your comment about how key signing ceremonies stop people from being bribed. The creation of those keys creates a root of trust but doesn't stop leaf certs from being generated.
Sure, it doesn't stop certs for certain domains, but again it feels handwavy to say someone can just as easily do these things. Theoretically, yeah. But to be a publicly trusted CA, the processes you need to have in place are pretty extensive.
Still, there are hundreds of publicly trusted CAs so the chance for exploitation is higher.
Actually, hacking systems is easier than hacking (some) individuals. It's pretty obvious if you think about it. ICS are operated by groups of people, they have well-defined accessibility and availability requirements, some sort of documentation exists, and internal processes have large inertia.
On the other hand, individual security professionals might have wildly different ideas about risk tolerance and convenience, which they also have the privilege to change on a whim.
I’m quite sure they could, but mostly just because they could simply walk into your house and tamper with the hardware. You don’t need a fancy zero day when you’re the government.
Turn up with a bunch of fire engines and a gas company van. Knock on your door, and on 3-4 neighbors' doors on each side for good measure. Say there's been a report of a gas leak, and you need everyone to leave their houses/apartments immediately for the inspection. 15-20 minutes later: you can come back in, all's safe. Thank you for your cooperation.
>although I don't think the NSA can just install anything on my computer
If it's not connected to a network you are probably right... otherwise 100% wrong, if you're a valuable enough target. And let's just say, for fun, that your OS is 100% bulletproof; your 30+ firmwares are not.
I doubt it. From operations that went public the attack vectors are known and you can extrapolate something about their capabilities.
Of course they could get access if I were a valuable target, but that might just as well be with a large wrench. But they cannot just take control of any device.
And I think many companies might even have better capabilities. Or defense, since intelligence work is very often about industrial espionage.
You don't even exist to them. The NSA wants to infiltrate nations. They do stuff like hire a friendly foreign nation to quietly buy a security company their target depends on and then exploit that vulnerability from a host in a fourth nation.
Yes and we've seen their shit get leaked over the years. From that we can see clear patterns in what they view as valuable and where they spend their significant, but still limited focus.
With unlimited money and the assigned job of cracking encryption, hacking into systems, and securing the networks of the wealthiest nation and only superpower on earth at the moment, it IS pretty much a unicorn. I am pretty sure they are 20+ years in the future technology-wise, and with that:
>>Arthur C. Clarke — 'Magic's just science that we don't understand yet.'
So yes Magic Unicorn describes the NSA pretty well.
Yeah, that's bullshit. For an NSA-proof personal tech stack you'd rely more on tamper-evident building blocks, that's all. Also, security in depth and security through obscurity are much more applicable if you're a person and not an organization. Finally, a 20+ year head start does not mean much if you distrohop and FOMO into bleeding-edge stuff like a tech podcaster.
In case you're genuinely curious, 'NSA-proof' is a portmanteau of 'NSA' and 'idiot-proof'. Distrohopping is when people change (usually GNU/Linux) distributions once a month or so (which is an allusion to the tongue-in-cheek conjecture that one can change distributions faster than the NSA can break them). Have a good day, fellow human.
How many completely different browsers exist? And how many locally exploitable user-to-root exploits exist in the Apple/Linux/BSD worlds? If you're a valued target and you are connected to a network, you WILL be hacked.
Buddy, I'm not gonna follow this thread anymore because you seem to be baiting me into reading you a lecture on OPSEC, security in depth, and compartmentalization.
Assuming that time travel is impossible, the NSA can't break into something that does not exist anymore. Hence the idea, when facing such an adversary, is to present them with a constantly moving target. Although the NSA might be able to break any full disk encryption given enough time, they aren't able to decrypt something that no longer exists.
This principle isn't scalable to every computer system out there and will definitely go against other requirements in most organizations, but if you are an individual, it's not hard to pull it off.
This ignores the obvious. What parts are not changing with a distro hop?
Are those parts vulnerable to the NSA?
I believe, based on what was made public, that they do have that capability.
I would suggest more research. If you are actually changing distros every month, that seems like a very manual process, with many opportunities to end up with an insecure config. I think your time would be better spent hardening a current system.
And yes the NSA could own your box every month (and would) if it suited them.
Check out this link, this stuff is fascinating.
> In some cases, the NSA has modified the firmware of computers and network hardware—including systems shipped by Cisco, Dell, Hewlett-Packard, Huawei, and Juniper Networks—to give its operators both eyes and ears inside the offices the agency has targeted. In others, the NSA has crafted custom BIOS exploits that can survive even the reinstallation of operating systems. And in still others, the NSA has built and deployed its own USB cables at target locations—complete with spy hardware and radio transceiver packed inside.
You pose the questions but do not answer them. Assuming distros are selected purposefully you do get quite a lot of variability. Recompiling the kernel with different hardening options alone makes many exploits impractical.
The threat modeling that you see in this thread is laughable. Nobody has infinite resources, not even the NSA. They can't throw all their capability at you alone. In fact, they are not even interested in any one individual. They might be interested in some groups of people, like "terrorist leadership", but even in that case they don't need to hack every person matching that group. So at every step of the decision-making process there is a cost-benefit analysis. And in the end the NSA will only hack some terrorist leaders, the ones deemed sufficiently significant, while taking no more risk than is necessary.
The amount of meetings and paperwork required for carrying out offensive action is significant, and everyone involved is very risk averse. Getting superiors to sign off on an operation against an individual capable of detecting the attack, and thus risking attribution, would only be possible if the proposed techniques can be shown to be extraordinarily stealthy. That requires replicating the system in the lab and rigorously testing the methodology beforehand.
Yeah, it is hard to protect organizations from nation states, because all sufficiently complex systems have bugs, and given enough time, persistent attackers will find and exploit those bugs. But that's also because organizations have other real-world priorities besides fighting the NSA. These organizations can't change protocols overnight and replace core systems just for the fun of it.
Individuals actually have an advantage here because they can rotate systems at will and have much more control over their personal lives than any CEO/CTO/CISO has over their organization. As a result, yes, you can raise the cost of an attack against you high enough that the NSA won't bother hacking you: either because there are other people who are less protected but hacking them would fulfill the same objective, or because you get handed off to another agency that can present a more cost-effective solution.
Your link demonstrates this dichotomy between options that NSA has available for hacking organizations vs individuals. Individuals rarely have well documented procurement processes available for third party auditing you know.
You, sir, have no clue what you are talking about: a payload getter in your SSD firmware survives your distro hop and can adapt to every OS (if your information is worth the work). And an encrypted disk... oh man, I'll stop arguing; it's obvious that you really don't have a clue.
You just keep talking straight past my points without even trying to understand them. Why bother writing answers at all?
I'm not advocating for installing a fresh OS on an exploited hardware and calling it a day, no matter how hard you try to present my words this way.
The point is to keep any single environment around only for a short period of time so that adversaries don't have enough time for replicating your systems and crafting a targeted exploit chain.
It is not meant to be the only line of defense. You would still harden every system you own, putting particular focus on tamper & intrusion detection (including retrospective analysis).
Couple that with strong compartmentalization (e.g. using different hardware for different purposes, Qubes OS style virtualization approaches) and defense in depth (exploit mitigations, traffic anonymization).
Here, I have spelled it out for you. Feel free to outline how you would approach attacking such an individual adversary, even with an NSA-level team at your disposal. The silent assumptions being that 1) if the person's physical location is known, the CIA is a cheaper option than the NSA, and 2) a failed offensive operation that leaves attributable evidence is considered by the NSA worse than a missed opportunity.
Wow, you change your meaning pretty fast. Yes, if you throw your laptop away after 1 hour you are pretty safe... well, if the laptop is from a secure source... like Amazon ;)
I guarantee that whatever browser you use, they have a 0day for it. Whatever ISP you use, they can inject traffic into it, and they have a much easier time of it if you aren't in the US.
If you're someone who uses the Internet, the NSA can take over whatever you use to browse with and have their way with it. If you don't, well that's what their interdiction program is for.
The thing is though, the economics of 0day indicate that the more you use it, the more likely it is that it'll get burnt, and supply is limited.
They can certainly hack anyone, but it doesn't scale, so they can't simply hack everyone. They can maybe use these techniques on a handful of targets per year, so they make it count, but most of their intelligence comes from the data we all give away for free every day.
Indeed the best protection against getting 0day'd is probably to be into computer security. I feel confident that the NSA is not throwing 0days at computer security professionals; whereas they could use them on the average person with little risk of detection.
I think a lot of people underestimate how hard it would be to build something like Ghidra, not from a technical perspective, but from an avoiding-big-organisation-bureaucracy perspective. Unlike a typical bureaucracy, however, and amongst other problems[1], the barrier to entry for hiring is extremely high, everything happens within an echo chamber (a closed community with little external influence), and paranoia and an overbearing security process have a freezing effect on morale and on the use of modern workplace practices and technology.
Whilst other companies and organisations hire staff quickly who can more freely experiment with the latest technology from a hip coffee shop or their home, someone at an organisation like the NSA after waiting a year to start the job and after having hiked 8km from their car to a windowless and soulless building in the middle of nowhere instead has to fill out dozens of forms and seek dozens of approvals just to consider the idea of experimenting with some new technology.
I am amazed something as useful as Ghidra could actually be built within such a large bureaucracy in modern times, and then even more amazed that someone managed to get it released as open source software to ensure it continues to be maintained and useful long after the next internal reorganisation and exodus of developers.
There has been a lot of cyber crime in recent years, e.g. see the recent wave of ransomware attacks. These criminals are mostly amateurs that know some exploits and use them. The NSA is a huge organization that employs many professional experts. Spying is one of their main objectives so you can be pretty certain that they are pretty good at it. Computer systems contain a lot of vulnerabilities and you can be pretty certain that they know a lot of them.
The computers of most people are vulnerable even to ordinary cyber criminals; the NSA is a lot more powerful.
Most people hear "social engineering" and think of someone playing journalist to get access to places. The NSA's idea of social engineering is having the CIA work with the BND to buy Crypto AG.
Social engineering can be highly effective. However, from what we know about the NSA, especially from the Snowden leaks, it appears to be mainly a technical agency. It seems likely that the NSA does not use social engineering on a large scale itself but hands it off to other agencies like the CIA or the FBI.
Yes, we can talk about that. Let's start with the FSB and why they are always so "stupid" ;) as to leave compilation timestamps matching Moscow working hours and Cyrillic keyboard traces in the compiled binary.
The Google Service Framework (which is installed on 99.999% of Android phones) gives Google root access on your device, and Google has the power to silently install/uninstall apps...
These are all true statements. Greetings from Seattle, Washington, USA! No need to cc them on this post. No one should kid themselves with what NSA is working on. No one should also kid themselves with what they aren't capable of.
After ~10 years of lurking, your comment was the one that encouraged me to finally sign up.
I agree with you completely. Let's not joke about the capabilities of a trillion-dollar organization focused on "cyber".
If you are fortunate enough to be a United States citizen who gets up and contributes to society on a daily basis -- you will never have anything to worry about. The NSA won't care anything about your dealings on the internet. Everyone can safely get back to their weird browsing habits and making lame comments on youtube -- no one is watching, because no one cares :)
Wow, I'm flattered that you would sign up because of my comment! I haven't been here that long and hardly ever comment but have always kept an account for times like you describe.
I can think of one concern about downloading and installing it: the NSA might be interested in who uses it. No need for anything malicious in the code; they just watch to see who downloads it.
Probably makes sense. I see a lot of open positions for reverse engineers at the government level. Having its own tool helps, at least, to save money on licensing (assuming they are using or planning to use Ghidra for that).
Running Linux*, which has basically no security at all? Good luck! A rogue extension/bash script can install whatever backdoor it wants without a problem.
* By Linux I mean a mainstream distro here. Unless you use Qubes OS, it will not have good sandboxing; everything runs as your user and can easily modify e.g. .bashrc and start up a keylogger to grab your sudo password.
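To illustrate the attack surface being described from the defensive side, here is a small sketch that lists shell init files and PATH directories the current user can modify without root, i.e. the kind of hook points a rogue script could quietly persist in. The list of init files is just a sample, not exhaustive.

    # check_user_hooks.py - sketch: list user-writable shell init files and
    # PATH directories, i.e. persistence points that need no root access.
    import os

    home = os.path.expanduser("~")
    init_files = [".bashrc", ".bash_profile", ".profile", ".zshrc"]  # sample only

    for name in init_files:
        path = os.path.join(home, name)
        if os.path.exists(path) and os.access(path, os.W_OK):
            print("user-writable init file:", path)

    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.isdir(directory) and os.access(directory, os.W_OK):
            print("user-writable PATH entry:", directory)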
> Documents obtained by Der Spiegel reveal a fantastical collection of surveillance tools dating back to 2007 and 2008 that gave the NSA the power to collect all sorts of data over long periods of time without detection. The tools, ranging from back doors installed in computer network firmware and software to passively powered bugs installed within equipment, give the NSA a persistent ability to monitor some targets with little risk of detection. While the systems targeted by some of the “products” listed in the documents are over five years old and are likely to have been replaced in some cases, the methods and technologies used by all the exploit products could easily still be in use in some form in ongoing NSA surveillance operations.
If you want to harness the power of the Ghidra decompiler without needing to install Java: Rizin[1][2] and Cutter[3][4] (Rizin's Qt GUI) integrate the part of Ghidra's decompiler that is written in C++ (libdecomp) as a plugin, rz-ghidra[5]. We are currently working on improving the integration and the quality of the output.
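If you prefer scripting over the GUI, you can also drive it through rz-pipe. A minimal sketch, assuming the rz-ghidra plugin is installed and that `pdg` is its decompile command (the target binary is just an example):

    # decompile_main.py - sketch: run rizin auto-analysis, then the rz-ghidra
    # decompiler ("pdg") on a binary's main function via rz-pipe.
    import rzpipe

    rz = rzpipe.open("/bin/ls")   # example target; pick your own binary
    rz.cmd("aaa")                 # rizin auto-analysis
    print(rz.cmd("pdg @ main"))   # decompile `main` with the Ghidra plugin
    rz.quit()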
For anyone confused (as I was) rizin is a fork of radare2. I don't have anything constructive to say other than I'm confused why the project was forked.
The reasons behind the fork are described in our FAQ[1]. TLDR: we removed everything irrelevant or broken, rewrote some pieces completely, and focus on maintainability, cleaner code, easier onboarding of new contributors, better code documentation (Doxygen), better APIs, and testing.
I used this again just the other day with the cantor.dust plugin. My reverse engineering skills are dull and were never great to begin with, but for anything below a real APT with obfuscation, runtime decoding, and unpacking, Ghidra is an equalizer. Between this and CyberChef from GCHQ, someone with devops skills can probably skill up to entry-level threat analyst in a few weeks or months. The tooling available today is really good.
If people are worried about running systems backdoored by NSA, they probably shouldn't use things like electricity either. It's a threat actor you can't really do anything about.
It was a wry comment about emanations security and TEMPEST (https://en.wikipedia.org/wiki/Tempest_%28codename%29), which people think about mainly for CRTs; the implication being that I have no doubt there exist methods for remote differential power analysis of crypto operations as well.
I think their point is that defending against it is either so futile to attempt, or it's so unlikely that the NSA targets US civilians at all, that living off the grid is the lower-hanging fruit from an absolutist opsec perspective.
Some people, like me, can hear data movement on PCBs. The electrical circuit has noise signatures which change if other data is injected by Ethernet-over-powerline equipment. The distance from which this works is quite large, up to a few houses away with consumer hardware. Fear equipment with built-in LoFi... that's reachable without the cooperation of LAN equipment...
When you say you can “hear data movement on PCBs”, do you mean you have some kind of superhuman ability, or that you know how to use some combination of instrumentation and analysis to “hear” the data?
Depending on the particular PCB designs, there may be piezoelectric capacitors and magnetostrictive inductors that produce noises that ordinary, non-super, humans can easily hear. Of course the spectrum of these vibrations extends up into the GHz, but it generally also extends down to near DC, until the physical size of the components is too small to efficiently couple the vibrations into the air. (And PCBs, in particular, lower that high-pass frequency a lot, by providing a large, fairly rigid area that's soldered to a lot of surface-mount components.)
Typically DC-DC converters are the easiest thing to hear, because of the sheer amount of energy involved. Normally these are operated at PWM (pulse) frequencies well outside hearing range—40–300 kHz—but often enough the feedback scheme for controlling those pulses oscillates in a way that generates audible subharmonics whose frequency depends on the power draw at any given moment. Modern computers are full of DC-DC converters.
Also, though, it's common for computers to contain sensitive low-noise audio-frequency amplifiers connected to a periodic sample-and-hold circuit which can alias high frequencies down into the audio range, with the output hooked up to loudspeakers; these are called "sound cards" and it's not at all unusual for them to produce clearly audible sounds that depend on the computation happening, at least if you turn the volume up all the way.
Finally, regular, non-super, humans can directly perceive radio frequency emissions as sounds: "The human auditory response to pulses of radiofrequency (RF) energy, commonly called RF hearing, is a well established phenomenon. RF induced sounds can be characterized as low intensity sounds because, in general, a quiet environment is required for the auditory response... Effective radiofrequencies range from 2.4 to 10000 MHz." https://pubmed.ncbi.nlm.nih.gov/14628312/
So "hearing data movement" because of "noise signatures that change" is not at all unusual. You can probably do it yourself if you have a quiet room to listen in. It's plausible that Ethernet-over-powerline equipment could produce audible sounds from the power supplies in the same house or nearby houses, but I haven't observed that myself and this is the first time I've heard of that happening.
I haven't used Ghidra that extensively, but it worked well when I was using it to assist in modding Minecraft Pi. The big point in its favor for me is that it's free and supports ARM32, while IDA's free version only supports x86.
For you it is obviously not news, but for other people it probably is. For me, HN is about learning something new, not just for learning about something that happened in the last 24 hours.
I once was a total jerk on a mailing list (okay in my youth I was a jerk many times on mailing lists).
Someone shared an article I had seen earlier that year. “Why would you share this? This is old news it’s already made the rounds on the web.” Like I expected everyone to have the same experience as me. Luckily someone told me to chill out or I’d be blocked, that the list was for any news people found interesting. I felt very embarrassed and didn’t post there again for a long time, but it was my own fault.
Potentially news to those who recently got into coding/hacking. Ghidra was leaked in '17 and made headline news. Then officially released by the NSA in '19.
Because you know everything that made the headlines up to 2017?
There might or might not be discussion potential on any submission, so I understand arguing about their value, but that "news if you're a beginner" was very condescending. Why not be happy about today's lucky 10,000?
> Because you know everything that made the headlines up to 2017?
No, and I didn't mean to sound condescending. I'll take out the "only" in my message.
Edit: And to clarify what I meant, I may not have known every headline in 2017, but I sure as hell heard about most of the Vault 7 releases. An organization anonymously releasing a world power's cyber tooling is something out of a cyberpunk novel.
I did a double take seeing Ghidra in a headline because just yesterday I was watching a video of someone going through WannaCry with Ghidra. I had never heard of it before yesterday. https://www.youtube.com/watch?v=Sv8yu12y5zM
I used Ghidra for the first time to hack my robot vacuum.
Some months later I used it to reverse engineer the on-board software of a satellite running on a SPARCv8 CPU. It worked great in both cases, can recommend.
I like Ghidra more than IDA. Having "proper" type support is nice; IDA's struct and type annotation support always felt very hacked together and hard to use. Ghidra's typing and decompiler are good enough that I don't even have to look at the disassembly listing for most functions, and struct auto-generation is wonderful.
Unfortunately, Ghidra handles vtables and OOP very poorly still. You have to do a lot of by-hand annotations for virtual calls, even with 3rd party analysis scripts, while IDA's C++ usually Just Works. This is the main pain point, imo. The other main thing is that IDA has been used by the reverse engineering community for so long that there's a massive body of tutorials and StackOverflow answers for it, and a much larger corpus of 3rd party plugins. It's not a big deal for me, personally, but if you already have a good workflow for IDA it's probably not worth it to switch. For beginners I'd recommend Ghidra instead, though, because a free and open source tool with good official documentation and UX is worth its weight in gold (although I've heard BinaryNinja is extremely good nowadays).
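For anyone curious what driving Ghidra's decompiler from a script looks like, here is a rough Jython sketch meant to be run from Ghidra's Script Manager, where `currentProgram` and `monitor` are provided by the environment. The names come from Ghidra's public API as I remember them, and the 60-second timeout is an arbitrary choice, so treat it as illustrative rather than gospel.

    # Dump pseudo-C for every function in the currently open program.
    from ghidra.app.decompiler import DecompInterface

    decomp = DecompInterface()
    decomp.openProgram(currentProgram)

    for func in currentProgram.getFunctionManager().getFunctions(True):
        results = decomp.decompileFunction(func, 60, monitor)  # 60s timeout
        if results.decompileCompleted():
            print("// ---- %s ----" % func.getName())
            print(results.getDecompiledFunction().getC())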
Ghidra:
* Affordable for sane people (aka free)... This of course pushed Hex-Rays to finally make a cheaper version of IDA, but it's massively hobbled and useless for uncommon architectures.
* Almost as good architecture coverage. Missing a few big ones for automotive RE still - SuperH is still hit and miss, and no real C167. But the user-contributed Tricore is really quite impressive.
* Decompiler works across all architectures.
* Debugger is still sketchy, but has progressed extremely quickly.
* Preferable UI (IMO), and better struct handling.
* Decent plugin interfaces but fewer available plugins.
IDA:
* Still slightly better decompilation and disassembly for x86-64. Doesn't get as "lost" in vtables and big switches.
* Much better C++ construct support.
* More plugins and scripts available off the shelf.
* Still a few architectures which Ghidra doesn't have yet.
* Debugger is more stable and works a bit better.
For most architectures I would not start using IDA today as a hobbyist, but if I had a good IDA workflow or was joining a company where it was the gold standard, I wouldn't feel compelled to move over.
Ghidra is a very cool utility. I used it to disassemble StarGlider for DOS - a very old fav - to figure out how the game worked. Together with the DOSBox debugger I managed to create my own hack so I could play the game without being killed the whole time.
CyberChef is another great tool coming from a "spying" agency. I don't see how GCHQ could really benefit from it, as there is even a local version for those who want to keep their data from going over the wire.