Zero-days in Cisco Discovery Protocol (armis.com)
364 points by pimterry on Feb 6, 2020 | 138 comments



Does anyone actually have faith in their hw+sw security these days? The speculative execution stuff on CPUs, system code full of buffer overflow / privilege escalation bugs, attacks that can work across airgaps using high-frequency sounds, all the attacks on various hash functions, fake digital signatures, sandbox escapes, ... it goes on and on, and each time you realise that it's been wide open for ages, and whichever security researcher found it could just as well have sold the exploit for lots of money instead.

Even as a fairly tech-savvy guy, I wouldn't actually know how to make a truly secure computing environment, short of putting the whole thing in a silent Faraday cage and running metal-detector scans on everything going in and out.


The speculative execution stuff is fascinating and a little terrifying but, for the most part, has been of little practical impact so far, at least after the first round of Meltdown/Spectre fixes.

Attacks on various hash functions are not really a thing. We've known not to use SHA-1 for over a decade, and cryptographers working in the space have started saying we may never find a viable preimage attack on MD5, let alone SHA-1. We are on pretty sound footing with hash functions; what you were doing in a competent system 10 years ago (using SHA-2) is still the right thing today.

I've been a vulnerability researcher since around 1995. I feel significantly better about security today than I have at any point in the past. We've been doing an imperfect but steady job of shutting down bug classes for decades, with the obvious and most important factor being the deprecation of insecure systems languages.


I don't see C anywhere near being "deprecated" in general. Usage of Rust/Zig/ATS/whatever is a rounding error by comparison to C/C++, even for new projects. Anecdotally, I've been interviewing with a few startups building ambitious infrastructure-level projects, and none of them are using Rust. C/C++ is still the only game in town outside the HN bubble.


If your definition of "safe language" is "Rust/Zig/ATS/whatever", our premises aren't compatible, because I am talking about things like Java and Python. In the 1990s and early 2000s, we were still building application code in C/C++.


Hmm, I had in mind the phrase “insecure systems languages”, but I don’t want to get into a boring debate about what “systems languages” or “systems programming” mean...


It's at least as much my fault as yours, since I said "systems language" as a shorthand for C/C++, not to describe the problem domain, and it's true that the modern zero-cost-abstraction memory-safe languages that can displace the last of the C/C++ code are very new. We'll be out of the era of C/C++ when a mainstream browser is written in one of them.


As I understand it, Firefox now has substantial parts written in Rust, Mozilla's own language. Also, unless the OS and drivers are written in a memory-safe language as well, that castle is to a degree built on sand.


> Also, unless the OS and drivers are written in a memory-safe language as well, that castle is to a degree built on sand.

You have to start somewhere, and there's Rust. I'd like to see more language (and tooling) competition from systems that can prove properties of the code at compile time in practical situations.

I'm not impressed with the current C/C++ static analysis landscape. I've used a ton of static analysis while writing Windows kernel drivers, for instance — from well-known tools like Microsoft's PREfast (requires a ton of annotation, sigh!) to clang's analyzer, and others, with fuzzing on top.

But it still doesn't come anywhere near replacing a good pair of eyes and some hard thinking. And I'm not talking about logic bugs here, but for example concurrency and memory safety related issues. I've had plenty of "quality time" (not!) with the kernel debugger...

I'm very seriously considering using Rust instead of C/C++ for my next Windows (or other OS) kernel-mode project, even though some consider it a bit unproven in this space. At this point, I don't feel like C/C++ is proven either...

To a somewhat lesser degree, the same goes for anything that touches untrusted inputs, directly or indirectly, even if you can sandbox it.

I do need to work around some issues with Rust, like learning how to minimize code size for those situations where it's a constraint, such as device firmware. However, I feel that's easier than writing bulletproof C/C++ code.
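For what it's worth, the usual starting point is a size-oriented release profile in Cargo.toml. A sketch — exact keys depend on your Cargo version, and strip in particular needs a newer toolchain:

    [profile.release]
    opt-level = "z"    # optimize for size rather than speed
    lto = true         # link-time optimization across the whole program
    codegen-units = 1  # slower builds, smaller output
    panic = "abort"    # drop the unwinding machinery
    strip = true       # strip symbols (Cargo 1.59+; run strip(1) manually on older toolchains)

Beyond that, no_std and avoiding the formatting machinery tend to be what matters most for firmware-class targets.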


Part of me would like to see .NET Native available on all levels of the stack, but with WinDev vs DevTools politics, that is unfortunately just wishful thinking.


The words "to a degree" are doing a lot of work in that sentence, given where browser RCE vulnerabilities actually appear today in code.

Kernel bugs are obviously a part of modern bug chains, but they're used to escalate privileges, and usually not to gain native code execution in the first place. The code execution that enables the chain arises from code that doesn't need to be in C/C++ at all.


From a security standpoint, it really depends on your threat model.

But when it comes to software reliability, you rarely if ever should be worrying about kernel or driver bugs.

Given the above, writing your code using a memory safe language is still a huge net benefit.


> But when it comes to software reliability, you rarely if ever should be worrying about kernel or driver bugs.

Not quite sure what you mean. For an application developer, yeah, you don't need to worry.

In general... if I had a dime for every kernel memory read/write vulnerability due to kernel bugs and bad drivers, I'd have a lot of dimes.


A video codec provided by Cisco (OpenH264) is installed with every instance of Firefox. Check your plugins.


This is true, largely because no one outside the HN bubble knows Rust/Zig/ATS/whatever. Most companies and people want to get stuff done, and a new language can get in the way of that. It'll take a while for those newer languages to pick up mainstream traction, and by a while I mean a few years of pushy college grads who used it in CS 101, 202, or 303.


Don’t be so sure about that. I don’t know any C++ dev who hasn’t heard (good things) about Rust. It’s definitely well known in the systems programming community at this point. That’s different from using it in production though.


And that's precisely the point. You don't use C++ because it's the best thing since sliced bread. You use it because you want to easily use decades of optimized libraries and get a result that's highly performant without spending a long time creating wrappers in Rust (that may not even be easy to do w/ libraries that rely on a lot of templating).


A lot of C++ projects (not all!) have relatively few dependencies. I think this "decades of optimized libraries" is a bit of hyperbole.

Besides, libraries that were optimized for HW 20-30 years ago don't age all that well and are far from optimal these days. They also tend to still have a ton of bugs.

Now, I'm not saying there aren't cases where you really need to build on a legacy library base. Merely that it's a fairly rare case.


Libraries that are 20-30 years old and that people still use today are generally in continuous development over that time period, often with the same team (in a ship-of-Theseus sense). That means that they have been optimized for today's hardware for about 20 years worth of todays.

Now there are cases where libraries have ossified and there is room for a new library to come along and eat their lunch (look at ripgrep for an example), but there are also a lot of cases where they are going to be very hard to beat (a lot of Boost is in this area, I believe, and there are many other examples in specialized domains).


Boost fills more of the role that standard libraries play in other languages (think Go).


And tooling.

Mixed-language development experience in Java and .NET IDEs, UI frameworks, game engines, and GPGPU shaders.


C is deprecated on Windows, where the focus is now C++ and .NET; old-style C code has been made compatible with the C89 subset of C++.

Even the new Universal C Runtime is written in C++, exposed via extern "C".

On Android, C is reserved for the Linux kernel and "legacy drivers" (pre-Treble), everything else is C++ and Java/Kotlin.

On Apple OSes, C is left for the UNIX style compatibility layer, everything else is a mix of Objective-C, Swift and C++.

By the way it is either C or C++, not C/C++.


VS now ships clang, so that should improve support for C considerably.

Objective-C is C, with extensions. Swift interoperates with C, not C++.


It ships clang, because that is Microsoft's answer to C support on Windows, for those that still want to use C.

Visual C++'s focus is naturally C++.

C with extensions, by definition, is not C, rather something else.

Swift interoperates with Objective-C, thus C interoperability comes for free.

IOKit, DriverKit, Metal shaders, and LLVM all make use of C++.


> I've been a vulnerability researcher since around 1995. I feel significantly better about security today than I have at any point in the past.

Do you really feel that way considering what is connected to the internet (or other accessible networks) today compared to 1995?

Personally I feel the opposite (although I haven't been in the game as long), because a lot of critical infrastructure is getting connected without a lot of security. With surveillance cameras, door stations, and the like, failures that in 1995 would have been localized are now national or global. People find open, password-less VNC access to control panels for power plants just by scanning the internet. Equifax, Turkey's MERNIS, and on and on and on.

I don't get the sentiment that things are getting better in this area.


Security is definitely better than in the '90s, from what I've heard. The impact of an attack is, however, greater: you can literally shut down an entire country today, something you could not do back then.


Documented state-level kinetic computer security attacks go back a lot further than you think they do.


Is there prior art before the alleged sabotage leading to a pipeline explosion in 1982? [1]

Zetter's Countdown to Zero Day points to that with skepticism and also talks about Kosovo as one of the earliest cases, as does Sandworm.

Do you think this is roughly correct?

[1] https://en.m.wikipedia.org/wiki/At_the_Abyss


I'm thinking about Kosovo.


Yeah, that's what I meant. The security might be better, but the attack surface and attack impact have grown a lot faster than the overall security.


I'm sorry, but how do you literally shut down a country? Hyperbole much?


I suppose if I point out a government hit by ransomware and forced to run with pen and paper just last year[1], an attack on a nuclear power plant which could have been targeted at power generation[2], an attack taking down 1/5th of the world's ocean freight and a company with enough annual revenue to be in the top 100 countries by GDP[3], or a classic software bug taking out power to 45 million people in the USA for two weeks[4], you're just going to claim those aren't important because they didn't "literally" shut down every operation in an entire country, or weren't deliberate attacks trying to do that, and therefore they aren't evidence that it could happen?

[1] https://nunatsiaq.com/stories/article/government-of-nunavut-...

[2] https://en.wikipedia.org/wiki/Stuxnet#Iran_as_a_target

[3] https://www.forbes.com/sites/leemathews/2017/08/16/notpetya-...

[4] https://en.wikipedia.org/wiki/Northeast_blackout_of_2003


Hack the power grid or core network/telecom infrastructure.


See also: suspected Russian attacks on Estonia related to a statue being taken down

https://en.wikipedia.org/wiki/2007_cyberattacks_on_Estonia


From my point of view, the proliferation of so many new classes of network connected devices has offset much of the gains in better practices. With enough systems being designed, the odds of a single system being both incompetently designed and widely deployed approaches unity.

I think I'll call this the "lightbulb effect" since in 1995 the lightbulbs in everyone's home were secure against all RCE vulnerabilities, while in 2020, this is no longer true.


There was a related blog post from 2016 about a guy who managed to hack his hotel's "smart" lights/curtains/etc system. Unfortunately the images no longer work, but they're not essential anyway: https://mjg59.dreamwidth.org/40505.html


This is a reason not to buy lightbulbs with Internet connections, but not a good reason to despair about the state of overall software and hardware security.


How many people (even very computer savvy) expected their somewhat standard keyboard to contain firmware that can support keylogging? https://www.blackhat.com/presentations/bh-usa-09/CHEN/BHUSA0...

Do you know for sure your mouse is safe from this kind of thing? What about your monitor? Even things like sticks of memory and case fans have little controllers now to do some kind of programmable RGB lighting; everything is getting filled with relatively small amounts of computing power for no good reason, and it seems to be an accelerating trend.


This seems to be another example of something theoretically possible that an actuary would tell you to ignore.


Maybe, but there are online stores full of the stuff.


[flagged]


All he/she said was that he/she felt better about security today. You don't accept that argument?


I worked for a large networking equipment manufacturer. A company that should know better / how to handle things.

The team that fielded inquiries from security researchers and the like was STILL fighting with engineering executives whose first response, when a researcher showed them a problem in their code, was to go running to the lawyers and talk about lawsuits.

In the meantime some engineers recognized the increased need for better security practices, but there were NO new resources allocated to actually make time for such things. Everything was lip service (if even that), with no time allocated, and anything that did happen was reactive to issues.

In terms of what I saw in practice: there was a shockingly high rate of "we saw this security bug before" bugs re-introduced right after they were fixed. You could almost bank on it happening. In the meantime you'd watch security-related bug fixes pushed from one version to the next, to the next ...


Sure there are exploits, but security isn't binary. Something isn't "truly secure" or "insecure". It's all a matter of how difficult the attack is to pull off, or how expensive the exploit is to purchase.

Security is economic. There's a cost/benefit to attack, and to defend.

A good way to measure how secure we actually are is to observe the rate of cryptocurrency theft. How many people do you know with crypto? How many people do you know who have had their computers hacked and crypto stolen? That's how secure you are.


> Security is economic. There's a cost/benefit to attack, and to defend.

In some cases, sure. In other cases (e.g. if you're a nation declaring war on the US) you kind of have to assume basically-unlimited cryptanalysis resources and the willingness to prioritize their use to attack your crypto.

Is there a name for the discipline of "cybersecurity doctrine as pertains to protecting against unlimited-strength attacks"? Sort of the crypto equivalent of the Air Force One doctrine, of trying to ensure that an individual survives ICBMs aimed directly at their best-predicted location (with the predictive capacity of a state actor).


State actors aren't mythical; they're working with budgets same as everyone else, and they don't have "unlimited-strength attacks". They weigh what to burn their zero-days on, use social engineering and other low-tech attacks when that's more efficient, and so on.


I think it's all relative, really. I don't really worry about these sorts of exploits because I'm not a high-value target who's going to be attacked by the scarier exploits. Certainly there are exceptions, especially when the exploit casts a wide net and doesn't care about who it hits. Your trust in your HW and SW should be proportional to the risk. I would hate to have my private data leaked, but realistically it wouldn't be much more than a nuisance I'd have to sort out. The same is not true for a government agent or a CEO, for example. It would be nice if we could just 100% trust all our software and hardware, but it's just not practical these days.


Every small time accountant, car dealer, mom and pop shop, and school thinks “there are bigger targets than me!”... then they’re dealing with ransomware. It’s just another form of “I don’t break the law! I have nothing to hide”.


Regularly back up and keep those backups off your network: boom, problem solved. My point was not that I take no precautions, or that you're not at risk because you're not a government agent.


Ok, aside from the time it takes to restore from backup...

What about ransomware + spyware where you just lost account numbers and IP?

Just stop. There is zero reason to be reckless with infosec and IT infrastructure.

Sometimes things will exist outside of your control but that’s a long shot away from “whatever, it won’t happen to me”.


Time to restore from backup really is not a personal concern.

"whatever it won't happen to me" isn't my point, but okay. I'm not advocating for recklessness or apathy toward infosec / IT. I'm not trying to tell anyone how to live their lives, and explicitly I'm not saying what a company should do. I'm just trying to convey how I view MY risk from a target probability perspective vs the loss it would incur, and thus how I decide to trust some HW/SW

Not sure why you're trying to strawman me here; no need to be hostile.


You are a worthwhile target if only because your machines can be used as part of botnets and DDoS networks.

Sure, they might not be digging out your bank account number, but since the cost to the attacker of compromising your machine might be nothing more than a few hundred packets, they'll try anyway.


Sure, which is why I mentioned the "wide net". I take reasonable precautions, apply basic monitoring, and take care with what software I'm downloading so that I lower my risk of this. It's not foolproof but it's good enough to keep me from being an easy target, while not sacrificing any convenience or spending a lot of time fortifying my machine.


I have little faith. I aggressively replace digital things (anything with a radio and/or compute) that are not actively supported by the manufacturer. I make thoughtful choices about when to trade security vs. convenience (I have no home assistant devices; I do have a Google Pixel, because it is supported and receives monthly security updates). Limit how much collateral damage can happen when it does happen. Have robust offsite versioned backups, a unique password for every system, strong 2FA, credit freezes, etc.

It isn't perfect, but it works for me.


Like locks and doors, secure and safe are not a binary state; it's a spectrum, where your purpose is to provide more hurdles of time, effort, discovery, consequences, etc. to the attacker such that what they would want from you is not worth it. Nothing will leave you entirely secure (and conversely nothing will leave you entirely insecure: you can leave piles of cash on your front porch and it still depends on someone actually walking by to notice there's something to steal).

If you're a bank in a metropolis, you get a sophisticated vault. If you're a business you get an alarm system to protect your inventory. If you're a farmer and it's the less expensive farm equipment, you might not even lock the barn/shed.

If you expect state level adversaries, you probably need state level resources, planning, and mitigations. If you're jane/joe schmoe citizen, don't give out your passwords or click on unknown binaries, and be wary of people asking you for SMS codes in IMs. Scale appropriately for the threat you think you'll face, and as much buffer as you think you can stomach.


> attacks that can work across airgaps using high frequency sounds,

Have any real ones been found and confirmed?

I remember one being mentioned a couple of years ago or so by one researcher, but as far as I can remember it was never confirmed, and IIRC other researchers were voicing their doubts about it.


On Cisco, not sure, but otherwise the most amazing is probably this one: reading PGP keys with a microphone up to a few meters away from the laptop, from the sound of the power supply. It makes more noise when it's busy.

https://m.tau.ac.il/~tromer/acoustic/


Thanks. Yes, this looks more familiar.

For some reason my head was shortcutting "attack" to mean "infect".

Thorrez in sibling comment mentions BadBIOS which is probably the one I thought about.


The one you remember was BadBIOS, and yes it was bunk.


Thanks!

(Also, I realize that my brain thinks 2013 is "a couple of years ago". On a personal level that is possibly even more scary :-/ )


> Does anyone actually have faith in their hw+sw security these days?

Not much.

We actually know how to deal with most of the things you mentioned. There are simple CPU cores that don't have speculative execution problems. It is possible to achieve memory safety. Formally proven OS kernels exist (seL4, for example). Physical security is largely understood and can be made very difficult to compromise.

Why don't we have all this as routine? Because security theater is cheaper than actual security. The value of 'cheaper' here is very broad and includes avoiding all of the costs that emerge with robust security including opportunity costs.


Intel seem to be paying a hefty price lately for some security shortcuts they made in hardware compared to AMD (who paid the opportunity cost; security measures don't happen by accident, and they do slow you down).

Good security is getting rarer all the time, and therefore expensive; I think a lot of people would be interested in having a small truly secure system (say, for storing your cryptocurrency keys), and as I wrote, I sincerely don't know where to practically begin with this.

Is it commercially available? Concretely speaking, what can I buy, and what software should I run on it?


> Intel seem to be paying a hefty price lately

But the gains they made have surely outweighed any current hit? Also I haven’t perceived that much of a hit (although I’m not in the industry): are they selling more CPUs? AMD have an opportunity, but I haven’t seen that translated into sales to hyperscalers or consumers (beyond what they were achieving anyway).

Same as Microsoft, which for decades knowingly chose functionality over security, with the cooperation of plenty of customers who knew better.


The only PC I trust is my Ben Eater 8-bit CPU. Gotta turn that into a router now :)


I think that after you realize the limits of your (and our) technical ability to defend against attacks, you can then apply a threat model to see which attacks are not practical given your assets.

For example, I’m reasonably sure there are weaknesses in my gaming pc but I don’t do banking or other more secure things on it. There’s little motivation to target an attack or burn (use) an exploit to get access to this pc for an attacker and the defenses I have defend against most spray and pray attacks.


This is what state sponsored actors and agencies have been doing for generations.

There is no reason to think that anything has changed with the explosion of security consultants and firms.

If you are doing anything national-security related, you should really just boil your security thinking down to the remotest possibility that something could happen, multiply that by 10, and then apply your mitigations.


> Does anyone actually have faith in their hw+sw security these days?

Well, I can only remember one time I've had any of my personal devices compromised in 25 years of heavy computing, and it was Windows XP pre-SP3. So yeah, quite a bit of faith seems to be warranted, even if the stream of flaws makes it seem like it shouldn't be.


Having worked at Cisco two years ago and seen "how the sausage is made": no. Not a shred of faith.


Cisco is a huge company.


I also worked there. It's... not good. Some of the development practices are nearly forced on everyone, no matter where in the company they work. I could write a book on how bad it is.


They might have a department somewhere that's doing great things, but I never saw it. I did see a lot of bad ones, though.


Size doesn't matter.

IE is made by a trillion-dollar company, but you can still find tons of security issues in it.


I think the grandparent post isn't trying to say "a 100,000 employee company automatically makes secure software" but rather "anecdotal evidence from a year or two of personal experience may not be representative of a 100,000 employee company". Cisco has dozens (hundreds?) of global offices and is well known for growth-via-acquisition. There must be a million little micro-cultures in an organization that large, diverse and distributed.


> There must be a million little micro-cultures in an organization that large

There are, and in those, good code can be written. They're the exception. Source: worked there for 7 years.


That's kind of the problem - if you have a hundred little micro-cultures involved in a project, and 99 of these cultures value safety and one does not, then in the end you get an insecure product. A chain is as strong as its weakest link.


Metal detectors as usually deployed and configured aren’t sensitive enough to pick up MicroSD cards. :D


Yeah, optical storage wouldn't trigger the metal detectors either, but those drives are uncommon now. Similarly, you wouldn't have a MicroSD device in the secure env.

I wonder if you could do a kind of power over ethernet thing, where you increase the system's power draw in a precise manner that gets a signal outside the Faraday cage, since presumably your power will come from an external cable.


A USB drive can be small enough to fit inside a USB connector, barely larger than a MicroSD card. Unless you mandate cavity searches for your employees you're not stopping egress of data like that.


If you designed your application correctly, you shouldn't have to trust network hardware in the first place.


Can you elaborate?


If every network connection your system makes authenticates the client to the server, authenticates the server to the client, and encrypts the dataflows, then you don't have to worry if your routers are hacked.

A typical way to do this is to use TLS, verify certificates, and have a password for the client.
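For the client side of that, the essential move is simply refusing peers that don't verify. A minimal sketch with OpenSSL (1.1.0+ API, error reporting trimmed; the hostname is a placeholder):

    #include <openssl/ssl.h>

    /* Build a client context that refuses any server whose certificate
       chain doesn't verify against the system CA bundle. */
    SSL_CTX *make_client_ctx(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL)
            return NULL;
        /* Fail the handshake on an unverified server certificate. */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
        /* Trust the platform's default CA certificates. */
        if (SSL_CTX_set_default_verify_paths(ctx) != 1) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }

    /* Per connection, also pin the expected hostname so a valid
       certificate for the *wrong* host is rejected:
           SSL_set1_host(ssl, "server.example.com");
    */

With that in place, a hostile router can drop or delay your traffic, but it can't read or silently modify it.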


It's a well-known & age-old principle in building networked systems.

https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...

(But of course these fallacies are so well known exactly because programmers have proven their inability to reliably internalize them...)


> Does anyone actually have faith in their hw+sw security these days?

I worked for 7 years, so I know how the software sausage is made. The answer is most definitely "no", and I'm frankly surprised now that the internet works at all.


Not at all.

This is why I rant about C, even though I seldom touch it.

The "experts" that sell our infrastructure have proven multiple times they aren't able to deliver.

3 out of 5 exploits are memory corruption bugs.


If this bug had been discovered in a Huawei device, we'd immediately call it a backdoor and plead to ban all these devices from our critical infrastructure.

And yet, when Cisco has a security bug we patiently wait for the patch and move on with our business.


I think Cisco’s [existing] reputation goes a long way


I think most of the luster Cisco had 20 years ago is firmly and irrevocably tarnished.


You mean their consistently bad security track record makes their bugs less suspicious?


Their reputation for security flaws like this and helping suppress human rights by selling hardware to the Chinese government?


Perhaps. But also their reputation for actually originating ideas and technology (or legally acquiring them) instead of stealing swaths of intellectual property like Huawei has and does.


I've read through the description and now the whitepaper and I've got a few takeaways.

1. These attacks can't be carried out over the internet; you need local access. That's easier than you'd think, with employees with unsecured laptops/VMs/etc.

2. The exploit(s) appear to be RCE in the OS that runs on the device, not necessarily translating into executing configuration commands on the control plane. But it's not a big leap to make configuration changes...

3. The DoS case doesn't seem great either but also requires local access.

4. The format string bug has the most entertainment value for me: you can spew CDP packets at a switch/device which potentially overwrite other devices' CDP data, which means you could potentially issue a poweroff or change the name of said device.

5. The phone vulnerability is good too: if you can broadcast/spew CDP from your host, you can be annoying to phones.

6. The cameras can only be harassed by plugging directly into them, rendering the vuln somewhat useless.

My takeaway is that you can't easily escalate VLAN privileges via CDP, which is good, but you can definitely monkey with the control plane, and maybe someone will come up with a way to change the IOS configuration.
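For flavor: the overflow-class bugs here are the classic trusted-TLV-length pattern. This isn't Cisco's code, just a hedged sketch of the bounds checks whose absence produces these CVEs (CDP TLVs are a 16-bit type plus a 16-bit length that includes the 4-byte header):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Walk the TLVs of a received CDP payload, rejecting anything
       that would read past the end of the packet. */
    int walk_cdp_tlvs(const uint8_t *pkt, size_t pkt_len) {
        size_t off = 0;
        while (off + 4 <= pkt_len) {
            uint16_t type, len;
            memcpy(&type, pkt + off,     2);  /* avoid unaligned reads */
            memcpy(&len,  pkt + off + 2, 2);
            type = ntohs(type);
            len  = ntohs(len);
            /* len covers the header itself; anything shorter, or anything
               running past the packet end, is malformed and must be
               dropped rather than trusted. */
            if (len < 4 || off + len > pkt_len)
                return -1;
            /* ...dispatch on type, using only bytes [off+4, off+len)... */
            off += len;
        }
        return 0;
    }

Skip a check like that, copy a value into a fixed-size buffer, or pass a device-supplied string to a printf-style call, and you get exactly the heap/stack overflows and the format string bug in the writeup.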


They can be carried out over the internet: VPN vulnerabilities are pretty common, e.g. https://www.us-cert.gov/ncas/current-activity/2019/07/26/vul...

VPNs commonly bridge at layer 2; that's a direct path to pwnage in this case.


You'd have to:

A: own the VPN endpoint (unlikely)

B: own a local laptop and bridge the network together (more likely), but owning a local laptop is better than using a VPN.


C: own the VPN gateway

I think A and C are somewhat "likely" threats too.


The NX-OS video shows a demo where the attacker remotely enables the management IP and uses that to reconfigure the Nexus switch.


I guess that's what I get for not watching the video and only reading the PDF. That said, "enabling the management IP" is unlikely to matter, since most people put the management IP on an OOB network and ACL where it can be accessed from; the exploit is unlikely to be able to change ACLs.


If you can enable it, you can change other config, like ACLs, as well.


They're not zero-days if they've been privately disclosed and patches are available.


I'd argue that with most enterprises' upgrade cycles for this gear, it's better than a 0-day; it's a -5 to -3 year exploit.


Great point!


Security best practices have always recommended disabling CDP in an untrusted environment (that applies to things like LLDP, too). It is comical to see how many people leave both of those running on internet exchange points, or sometimes worse things (like OSPF or IS-IS). The general default-on nature of these features in products is unfortunate.


I remember writing it up as part of internal assessments in the late '90s, lol.


Security best practices have always recommended treating all networks as untrusted and having non-internet-hardened stuff turned off by default in network facing products. You're right of course too, but Cisco is the real bad guy here.


Eh, I know a lot of folks who do enable CDP/LLDP on internal device-to-device links for discovery and troubleshooting. But they don't turn it on on any customer-/external-facing interfaces (enforcing this through offline config generation). So there can be a time and a place for it under the right conditions, provided you understand the security risks (the possibility of someone plugging something into an internal link, or of the router config being changed incorrectly).


Nice reminder that fuzzing protocol decoders is a good plan. Also a nice time to ask yourself if your last protocol decoding code should have been a structured parser instead of a handmade hack. https://objectcomputing.com/resources/publications/sett/marc...
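Wiring a decoder into a coverage-guided fuzzer really is only a few lines these days. A sketch with clang's libFuzzer, where parse_tlvs stands in for whatever decoder you have:

    #include <stdint.h>
    #include <stddef.h>

    /* Your real decoder; it must tolerate arbitrary untrusted bytes. */
    int parse_tlvs(const uint8_t *buf, size_t len);

    /* libFuzzer calls this millions of times with mutated inputs;
       ASan catches any out-of-bounds access the decoder commits. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_tlvs(data, size);
        return 0;
    }

    /* Build: clang -g -fsanitize=fuzzer,address fuzz.c decoder.c */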


The NSA used to intercept Cisco routers in-transit to swap them out for back-doored ones.

I'm sure they knew about these; that's the problem: the NSA "hoards" zero-days instead of alerting the manufacturers.

And then we end up with nonsense like this, and you know half of these won't be patched for years, if ever.

Tens of millions of Cisco devices.


If they have a known exploit in the normal firmware, why bother MITMing hardware in order to backdoor it?


They may not have known about these protocol flaws at the time they were intercepting the physical devices.

Either that, or they were not vulnerable to these exploits at the time.


Flytraps! Probably easier than finding RCEs.


This doesn’t surprise me. I’ve avoided Cisco equipment since I wrote cdp-tools over a decade ago. During development (reverse engineering) lots of things I tried crashed switches and routers. No faith in anything Cisco makes after that.

But lots of networks I was connected to learned my topology from CDP so I needed to speak CDP...


Do we really need half of the features pre-installed on our Cisco devices? Honestly, I really wish I could "tune" my IOS firmware and remove things I don't need, so I could reduce my attack surface.

Since it's not called out clearly in the title: this affects Cisco's LLDP implementation as well. https://go.armis.com/hubfs/White-papers/Armis-CDPwn-WP.pdf


That's what I was taught to do by my dad. Take the uncompiled kernel, remove every driver and feature from it that you don't use for your use case, and then compile. Then go through hardening steps at each layer: chroot all the things, tripwire, etc. Compile all applications configured with only the features you're going to use. Get a report on all versions and subscribe to vulnerability mailing lists.

And then at the end, understand that you have a system that is still vulnerable so set up a regular cadence of maintenance.

It's a bit harder to do these days for various reasons.


It's fairly easy to turn CDP and LLDP off...
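From memory, so treat the exact syntax as approximate across platforms and versions, on IOS it's along the lines of:

    ! Globally:
    no cdp run
    no lldp run

    ! Or per interface, for anything user- or external-facing:
    interface GigabitEthernet0/1
     no cdp enable
     no lldp transmit
     no lldp receive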


Anything about all the other millions of devices (different brands) that implement CDP?


From my CCNA days I seem to recall that this protocol is a link-local one, and exploiting it would require a break into another host on the network.

Though the diagram shows the core switch directly connected to the internet, instead of having an external router connect to the internet, which was the design Cisco suggested back when I did my CCNA evening class.


The article does not suggest that devices can be attacked from the internet. The threat is VLAN hopping or lateral movement. That illustration is oversimplified.


I read through it; it doesn't seem like VLAN hopping would really be possible unless you can straight up get into the control plane and change the port configuration. That's at least 2-3 more steps beyond "RCE", basically.


I was being generous :-))))))) though maybe if some PFY had left the management interface exposed to the internet, but that is a worse problem, I feel.


Or a compromised L2 VPN endpoint


Damn, that’s a wide array of devices...


not a backdoor?


"Backdoor" usually implies the vulnerability was put in there intentionally.


They might be alluding to the fact that depending on who manufactures the device, security vulnerabilities will be reported as a backdoor.


Admittedly I didn't read the whitepaper, but the CVEs that were mentioned in the page summary sounded like your run-of-the-mill stack overflow or format string vulnerabilities. So yes, the end result might be the same (i.e. remote takeover), but I would hesitate to call them backdoors unless it was demonstrated that the vulnerability was intentionally known or left unfixed for the purposes of being abused by parties in the know.


The idea is that if the same bug had been in a Chinese product, the title would have been "backdoor injected" ... see yesterday's and other recent articles on HN.


I searched and only found something about a vulnerability in "HiSilicone" hardware mentioned yesterday.

That article mentions a pre-installed telnet server, including accounts, that can be started with the right command sent over TCP. Whereas here, it's apparently a typical buffer overflow resulting in arbitrary (but BYOT (bring your own Telnet)) code execution.

Sure, maybe Cisco is just better at disguising their backdoors for plausible deniability. But with what's known, intent seems far more likely in yesterday's case than this.


The article title was different initially; it was something that included "backdoor", "injected", and "Huawei" (there was no direct relation to Huawei). There have also been many similar reports about US routers and IoT devices having default or easy-to-guess passwords, but each time stupidity is assumed. Also, the Windows "NSA key" article and comments were very convincing that for sure it was nothing evil and was never used anyway.


A Telnet server is way too obvious for a backdoor. It’s more of a leftover debug feature.


Well, I suppose we don't have enough information to determine whether it was put there intentionally or not. I recall from the Snowden leaks that the US has a well-established program to maintain access to as many foreign networks as possible. A cozy arrangement with Cisco and/or a National Security Letter might help ensure such access is possible.

https://www.aclu.org/other/national-security-letters


There has been more than one Cisco vulnerability that struck people as suspicious.

I know it comes to my mind every time I see the string "Cisco zero day", whether or not it seems likely in any particular case.


Except that with Cisco, "backdoor" has two meanings:

1) Cisco's well-documented efforts to intentionally put them in place, in the canonical sense of the word (for lawful intercept): https://news.ycombinator.com/item?id=22251965

2) It is a well-known configuration feature used to change the administrative distance of an eBGP route so that an interior gateway protocol (IGP) route takes precedence over it. https://community.cisco.com/t5/networking-documents/what-is-...
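For the curious, that second sense looks roughly like this in IOS (syntax from memory):

    router bgp 64500
     ! Mark the eBGP route for this prefix as a "backdoor": its
     ! administrative distance is raised to 200 so the IGP-learned
     ! route takes precedence.
     network 192.0.2.0 mask 255.255.255.0 backdoor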


This represents most of what I hate about the "security-industrial complex". It's self-serving, and to that point, at the bottom of the page is a link saying:

'With the discovery of the CDPwn vulnerabilities, organizations from every industry as well as governments are looking for a way to identify which of their devices are impacted by these vulnerabilities. Armis offers a CDPwn Risk Assessment to help organizations looking to understand their exposure.'

Then: 'Request a CDPWN threat assessment'.


I don't understand where you're coming from. Given that there isn't some global cybersecurity authority employing enough people to find all security vulnerabilities out there, security researchers need to have some way of earning money. This approach of openly working with the wider community and then providing paid professional services on top of it seems to me to pretty much be the nicest possible way of doing this. I'm confident that they could have earned a lot more by almost any other approach.


Finding security vulnerabilities is an industry that gains from the kind of wide and open disclosure being done here, and it revels in the process and profits from it. An overwhelming majority of vulnerabilities, if not actively pursued and discovered, would never become known so as to be exploited, other than in extreme cases.

How about if I say, 'Hey Paul Graham' (using him as an example), 'I will give you 90 days to remove your home address from public records, and if you don't I will then publish that address in the open.' Of course anyone can find the info just like I could (let's say). But my act of making the info more widely known would put Paul at greater risk than he'd face if I didn't do that. Now let's say I find a way to profit from that activity. Now multiply that.


>An overwhelming majority of vulnerabilities if not actively pursued and discovered would never be known so as to be exploited other than in extreme cases.

There are two ways I can read that: the first is almost a tautology (if no one pursues it, then it won't likely be found by accident), and the second (if white hats don't pursue it, then it won't be found by black hats either) sounds very unreasonable to me. The black market for zero-days is quite active, with prices for a single vulnerability in the millions [0][1]. So I'd appreciate it if you could provide any citation that the "overwhelming majority" of security researchers are white hats. I'd find that as surprising as a claim that an overwhelming majority of people breaking into houses are locksmiths.

[0] https://en.wikipedia.org/wiki/Market_for_zero-day_exploits [1] https://arstechnica.com/information-technology/2019/01/zerod...


My thoughts as well. I'm interested in hearing about novel security issues; I'm not interested in reading an advertisement. With the widespread deployment of IP phones and the like, I think this is an interesting find. But at the same time, I wish I had read about this on Bugtraq.


They provided a very detailed, technical, 29 page PDF as well.


I guess for the benefit of having people work on such things you will occasionally have to suffer through the brutal injustice that is ... a single-line text ad at the end of an article?


Same. I'm a CCIE, and after a five-minute skim I can't tell if this is grandstanding or an actual threat. Best practice is to disable CDP on all external interfaces. This is a given.


There's a funny name and a logo, so grandstanding.



