NetUSB Impacts the Security of Millions of Devices Worldwide (sec-consult.com)
114 points by _jomo on May 20, 2015 | 41 comments



> The client can specify the length of the computer name. By specifying a name longer than 64 characters

What sort of programmer writes code to handle a protocol with a length field, yet uses a fixed-size buffer without ever considering what would happen if the length could be larger than the buffer...?
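For the curious, the bug class looks roughly like this (a minimal sketch of my own; the names and framing are illustrative, not KCodes' actual code):

    #include <string.h>

    /* The length comes from the wire; the buffer size does not;
       and nothing compares the two before copying. */
    void handle_computer_name(const unsigned char *pkt)
    {
        char name[64];
        unsigned int len = pkt[0];     /* client-supplied length  */
        memcpy(name, pkt + 1, len);    /* overflows when len > 64 */
    }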

I've seen plenty of source code out there, written for educational/example purposes, where arrays to hold strings are declared with an arbitrary size and no justification given - and naturally, no consideration of this is made evident. Writing code like that is a horrible habit to get into, since it leads others, less knowledgeable, to think it's acceptable...


> What sort of programmer writes code to

The sort of programmer who's not passionate about how the code looks, or works, as long as it passes the (very rudimentary) tests which don't cover protocol violations or borderline cases.

The sort of programmer who had no experience and went straight from sandboxed Java to bare-metal kernel code in his first project?

I don't want to disillusion you, but I know plenty of people with (at least part-time) programming jobs who don't care at all about the new programming paradigms, programming languages, libraries, and frameworks so often touted here on HN. I'd say that a huge majority are pretty pleased with what they know, as long as it's enough to do the job.

And, frankly, it makes economic sense: how many plastic routers are chosen based on their security track record? And how many "Security Incident Handling Stars" do any of the devices mentioned in the article have on Amazon.com? No one cares. The company and its programmers can just keep writing "almost working" code and patch the security incident of the month when it surfaces.


> I don't want to disillusion you, but I know plenty of people with (at least part-time) programming jobs who don't care at all about the new programming paradigms, programming languages, libraries, and frameworks so often touted here on HN. I'd say that a huge majority are pretty pleased with what they know, as long as it's enough to do the job.

Actually, I'd consider myself in that group; most of my work is in Asm and C, with some C++, sometimes Java, and only very occasionally anything with Web technologies.

The difference, however, is that I do consider all possible inputs, think about how much space things take up, and generally try to cover the problem space. If there is a variable-length field, the documentation/requirements will state any length restrictions, and what happens if that length is exceeded.
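Concretely, something like this (my own sketch of the defensive version, not code from any particular project):

    #include <errno.h>
    #include <string.h>

    /* The documented limit is enforced before any copy, and
       exceeding it is an explicit, handled error. */
    int copy_computer_name(char *name, size_t name_size,
                           const unsigned char *src, size_t len)
    {
        if (len >= name_size)
            return -EPROTO;     /* reject over-long names outright */
        memcpy(name, src, len);
        name[len] = '\0';
        return 0;
    }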

> The sort of programmer who had no experience and went straight from sandboxed Java to bare-metal kernel code in his first project?

I think this has much to do with it - those starting with HLLs that cover them with a safety net, letting them do stupid things without much consequence, may not develop the same type of thinking. But even if this were written in something like Java, a NullPointerException or IndexOutOfBoundsException is unacceptable, and perhaps they would just patch in code to catch the exception and ignore it, without giving this case the thought it deserves.

To put it a bit more bluntly: when you're writing in Asm on a machine running DOS, and any bug is probably going to make you reboot, you quickly tire of hitting the reset button and learn to think more carefully about what you write. Although I migrated from such an environment a long time ago, the habit has stuck.


> To put it a bit more bluntly: when you're writing in Asm on a machine running DOS, and any bug is probably going to make you reboot, you quickly tire of hitting the reset button and learn to think more carefully about what you write. Although I migrated from such an environment a long time ago, the habit has stuck.

I completely agree; it's a good argument for teaching programming "from the bottom up".


> I'd say that a huge majority are pretty pleased with what they know, as long as it's enough to do the job.

> And, frankly, economically it makes sense

because they are treated as disposable by their companies, as far as I can tell from all the comments from sysadmins and programmers here and on other forums.


> What sort of programmer

I know we like to think all programmers are the best and brightest and most talented citizens of the entire world, but the truth is: most programmers have the dedication of fast food workers and as long as "it works for me," they'll ship it.

The projects people here are used to (modular, decomposed, open source, documented) are rare. The world is full of multi-million line code bases with little usable documentation (either no documentation, outdated documentation, or 8,000 pages of documentation) and comments not describing actual behavior. Plus, everything gets compiled using 300 recursive Makefiles written across 15 years in 8 different countries by people who only keep their job for 8 months at a time.


"Works for me" isn't the only reason to ship imperfect code. Time-to-market really matters in some applications. "Move fast and break things" is a motto for a reason.

The router companies shipping NetUSB routers are meeting a market need; people want to plug printers into routers and print from anywhere on their network. For many users and applications, low-quality code will work.

It's in the long term that high-quality work differentiates itself.


> For many users and applications, low-quality code will work.

Oh, don't pretend "quality" is just a proxy for "has pretty animations and makes the user feel warm and fuzzy."

The failure to ensure "quality" of this embedded router component has put millions of never-going-to-be-updated hardware devices in the wild at risk of third-party takeover, personal information leakage, and being turned into proxies that feed user details to hostile actors (monetary losses, blackmail), ...

Programming errors shouldn't be excused just because we always say "whoops! we'll do better next time!" But, since humans are objectively bad at something as complex as programming, we can't hold people accountable (who wrote it? an intern from 8 years ago? punishing them does no good). We can't hold organizations accountable (recall all the consumer routers!) because public policy doesn't think that way.

We're in a weird period of history where companies can get away with great computational damage with no repercussions at all. The market currently _does_ favor "just do whatever and try to fix it later (or not, whatever)," but that doesn't make it right.


> we can't hold people accountable (who wrote it? an intern from 8 years ago? punishing them does no good). We can't hold organizations accountable (recall all the consumer routers!) because public policy doesn't think that way.

Contrast that with the completely different approach taken by the aerospace industry, where people and organisations are held accountable, and it has resulted in great reliability overall. There's an interesting discussion about that here:

https://news.ycombinator.com/item?id=9569077

> We're in a weird period of history where companies can get away with great computational damage with no repercussions at all

In critical software for a plane it is literally a matter of life or death, and people are willing to pay much more for that level of quality; but if a consumer router gets exploited, what's the worst that could happen?

I think it is important to put the risk into perspective: this is bad, but it's not really a "people could get killed" sort of thing.


At least they stick with one tech longer than five minutes. The ADD-riddled folk who sometimes post on here, switching tech stacks the millisecond the latest craptastic JS framework appears, are astounding.


The kids have no cost of switching to something new because they probably don't know much already. As you get older, re-learning everything every 6 months feels weird, since any random 14-year-old can know as much as (or more than) you do about the new system. Your API-level experience gets invalidated rapidly, so it's almost better to start from scratch, which only the unknowing youths can do.

Then there's a whole "stake your claim" mentality. Want to be the best C person in the world? You can't. It's too widespread. Want to be the best Go person? Sure, fight that battle for your own glory, since it's new and you can take part in everything.

Lack of education/experience also creates a great breeding ground for new, duplicate, half-implemented versions of things that already exist. The older a developer gets, the more they see things they already know re-implemented as completely new but with unfamiliar interfaces they'll have to re-learn again every 6 months. It's a huge waste of human capacity to always be "new new new" instead of creating a stable and reasonably extensible base to work from. But, at the same time, we don't want to be stuck on Perl 4 and CORBA forever.

There's a tradeoff between building new things for advancing the future versus building new things just because you think you're better than all the previous research/experience that has come before you. See the case of Cathedral v. Bazaar.


[deleted]


We do that because existing more expressive tools (like Haskell and ML) are not very well suited to low-level work, and Rust has only just released its first stable version. I suppose that C++ could be expressive enough to rule out such things, and even C can successfully avoid these pitfalls, but both require much more discipline and time to achieve this.

What would you have used for that kind of code 5 years ago? The code in question likely took some time to go from proof of concept to beta to production to widespread use before it could become a threat to many people.


I personally wish ATS[0] would gain more mindshare.

0: http://www.ats-lang.org/


They probably could have developed this in userspace using libusb, which even has Python bindings.

Of course, the other mentioned vulnerability - that some of these devices are exporting your USB devices to the whole internet - was language-neutral.
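Enumerating devices from userspace really is only a few lines (a sketch against the libusb-1.0 C API; listing descriptors is of course a long way from a full USB-over-IP service):

    #include <libusb-1.0/libusb.h>
    #include <stdio.h>

    int main(void)
    {
        libusb_device **list;
        ssize_t n, i;

        if (libusb_init(NULL) != 0)        /* default context */
            return 1;
        n = libusb_get_device_list(NULL, &list);
        for (i = 0; i < n; i++) {
            struct libusb_device_descriptor desc;
            if (libusb_get_device_descriptor(list[i], &desc) == 0)
                printf("device %04x:%04x\n",
                       desc.idVendor, desc.idProduct);
        }
        if (n >= 0)
            libusb_free_device_list(list, 1);
        libusb_exit(NULL);
        return 0;
    }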


Shipping libusb and Python would probably mean spending an extra 50¢ on flash or RAM per unit.

From the point of view of a race-to-the-bottom consumer hardware manufacturer, the choice is obvious.


> Workaround:

> -----------

> Sometimes NetUSB can be disabled via the web interface, but at least on NETGEAR devices this does not mitigate the vulnerability. NETGEAR told us that there is no workaround available: the TCP port can't be firewalled, nor is there a way to disable the service on their devices.

https://www.sec-consult.com/fxdata/seccons/prod/temedia/advi...

Eeesh


Sigh....

I should probably replace my commercial-grade WiFi router with some custom box that can run OpenBSD or something.

When I first got it, I tried to go through and lock down everything I could find. But I suspect that may not be enough.


I had a great experience putting something together using an ALIX 6f2 (http://www.pcengines.ch/alix6f2.htm), which I bought via a Netgate kit with enclosure (http://store.netgate.com/Network-Computers-C2.aspx). Netgate isn't selling that one anymore, but is selling other boards from which you can put together something similar. The APU line looks really interesting.

The cool thing about the ALIX and APU boards is that they often support things like 3G modem cards, which you can use to multi-home your connections with pfSense. You could also create an SMS gateway or similar service.


The first thing I consider when choosing a router is OpenWRT availability for the model.


I wonder how much it would cost to build a reasonable wifi router out of a raspberry pi, or something similar.

If you could keep performance and consumer costs comparable, you could probably sell quite a few.


It isn't hard to put together low power hardware with two or more NICs, and you can then toss something like pfSense or similar on it.

The biggest challenge is whether this will impact your internet speed. If you have a faster broadband connection, you can quickly exhaust the throughput capabilities of such a limited platform. Things get significantly worse if you are relying on the router for your LAN traffic as well (i.e. you don't have a switch to offload the LAN-only traffic).

With the more commercial solutions, whether for SOHO or SMB, the biggest advantage they bring to the table is the ability to utilize hardware optimizations such as checksum offloading, TCP segmentation offload, and large receive offload.

pfSense actually has code to perform the offloading, but you have to ensure the hardware you're using is capable of performing the work.


Mikrotik does this. Though their OS is not open source.


For $200 you can get something like this [1]: low-power Atom CPU, RAM, and two NICs included, so you can throw pfSense + Snort on it easily. You can add a USB wifi dongle and make it an AP.

[1] http://www.amazon.com/dp/B008KB5YCK/ref=wl_it_dp_o_pC_nS_ttl...


I really liked my older RouterStation Pro box... I even have a backup one new in the box... though I didn't add any wifi to the router itself.


Why not run DD-WRT? It is frequently buggy (performance wise) but should be fixable.


Looks like most home router manufacturers are mindlessly plugging modules from various vendors into their devices' firmware to add features.

Are there manufacturers or product lines that are safe(r) from this approach? Are alternative firmware such as OpenWRT or independent open-source firmwares (m0n0wall, pfSense, OPNsense) better in this regard?


pfSense and OpenWRT (I have no experience with the others) are safer in that a) when issues are discovered, patches are made quickly and upgrading is simple, and b) you can select only the services you want running, thereby reducing your attack surface.


Linksys has been mostly pretty good, and yes, the OpenWRT and DD-WRT software is better.


> While NetUSB was not accessible from the internet on the devices we own, there is some indication that a few devices expose TCP port 20005 to the internet.

This is a very important caveat that seems to be a bit buried in the article. If this service is not exposed to the Internet, an attacker would have to be on your local network to exploit the vulnerability--either authenticated into your WiFi, or already resident on one of your devices (through a previous exploit). Both are fairly high hurdles if you encrypt your WiFi.

Coffee shops etc. that run open consumer-grade WiFi access points could be vulnerable to this. Exploiting that router would provide bad guys with a platform to harvest or attack traffic from all the computers that connect to that router.

If your device does expose this service to the Internet, then any script traversing known consumer ISP netblocks could try to hit it. So that is worth nailing down.


I can't tell if the response from NETGEAR is just sensationalized and they are actually working on firmware updates that will fix the flaw, or at least allow firewalling or disabling the feature. I would hope they don't think "it can't be fixed" is actually an acceptable long-term answer.


I have reported vulns to NetGear before. They don't have any sort of security department, nor a method to handle vulnerability reports.

I have no idea what the truth actually is, but my experience would lead me to believe the worst.


I know it's illegal, but it'd be eye-opening to worm these machines, then have them occasionally inject a banner to alert the user. I suppose that's an ethics question overall. I know many exploits that can be and are being used for financial gain. [1] The vendors respond very poorly (lying or getting angry at me). Companies and customers are at risk. But no one cares, unless a major incident occurs...

1: One expensive (8-digit) system targeted at multi-tenant setups used Java for the UI. Annoying, but OK. But how did the Java app determine your login privileges? Oh, easy! The app would download the root credentials for the system, use them to log in to MySQL over the Internet, then "SELECT Permissions FROM user WHERE...".

I met the developers and their response was "yes, that's a known issue in the current version", ignoring that many users were stuck on that version for a long time. For bonus points, this system logged the root credentials to the debug log in the user's home directory. I'll let you guess whether their updated version was vulnerable as all hell, too.

Edit: This was a major VoIP switch vendor (NexTone, now killed/bought by Genband, IIRC), so exploits were easily turned into money. (Just route traffic over someone else's trunk for a bit.) I've also dealt with other VoIP providers, ones that keep much more info (full end-user info, CALEA module available), that had SQL injection -> root takeover on the login page. That puts end users at risk, too. Their response? "Our programmers are top-notch C/C++ guys, they just aren't perfectly familiar with PHP..."


> We tried to get in contact with KCodes back in February 2015 and provided them with a detailed vulnerability analysis including proof of concept exploit code. They sent a few nonsensical responses and then further ignored us.

Working at a Taiwanese company, unfortunately this does not surprise me at all: neither the nonsensical responses (i.e. likely no foreign-educated and definitely no native speakers on the team), nor being ignored ("I have no idea what they are on about, but if we don't reply, maybe they will go away") :(

I love this place to pieces, but could use a few level-ups in English, technical skills, and customer support.


tl;dr: You can cause a remote kernel exploit when your device name is longer than 64 bytes.

> Easy as a pie, the ‘90s are calling and want their vulns back


s/kernel exploit/kernel panic/

exploits actually exploit things


Well, I just put a 1-star review with a reference to the article on every Netgear router with a USB port on Amazon that I could find... since they "can't" fix it... AFAIK they refuse to fix it. It isn't as if it's impossible to limit access to internal ports. Difficult, maybe; costly, maybe. But even fixing just the currently/recently shipping devices would be better than nothing.


I wonder if OpenWRT is also vulnerable or if it's just the stock firmware.

I used to use one of the affected devices (TP-LINK Archer C2), but primarily bought it to run OpenWRT on it. Eventually I got tired of tinkering with it and replaced it with something else, though.


So can anyone propose a sane way to check devices for vulnerability? Obviously one can disable NetUSB in the web interface, but that may not be enough.

Nmapping port 20005 is not accurate enough.

I want to check 3 of my routers, both from LAN and WAN.
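The crudest first pass I can think of is checking whether anything answers on the NetUSB port at all (a sketch of my own; like nmap, an open port only suggests the service is running, it doesn't prove the device is exploitable):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(20005);   /* NetUSB's TCP port */
        inet_pton(AF_INET, "192.168.1.1", &addr.sin_addr);  /* your router */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("port 20005 open: NetUSB is probably running\n");
        else
            printf("port 20005 closed or filtered\n");
        close(fd);
        return 0;
    }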


Any idea what hardware is impacted?


Perhaps you missed the link in the article to https://www.sec-consult.com/fxdata/seccons/prod/temedia/advi...


Thank you



