What stood out for me was the kernel version of netgear equipment used: 2.6.36.4
That is a kernel nearly 10 years old. Of course, the router itself was released 7 years ago... and of course the vendor stopped updating firmware as soon as they could... and of course it's still in production and for sale... and of course it costs $150+.
The bigger problem with these is usually not the vendor of the router, but the vendor of the SoC the device is built on. SoC vendors will usually provide you with a linux kernel that has an incredible amount of frequently shoddy out of tree patches and tweaks which make it very hard to rebase to a new version even if you really wanted to.
But then why should I prefer Netgear over some Chinese no-name device that costs a fraction and does exactly the same while also never getting updates? Where does that extra money go? But like OP said, nowadays I just check OpenWRT compatibility when it comes to buying a home router; a well supported device usually receives updates for a good decade.
(Anecdotally I've had three Netgear devices in my life and all of them caused severe problems. Enough for me to have them on my personal blacklist for life.)
You can go with Asus, which has monthly/quarterly updates, as well as folks like Merlin (and others) who use the open source components to roll their own binaries.
There's code in there besides the kernel that could have (security) bugs and backdoors.
The pressure from device vendors on the SoC vendor to keep supporting the kernel version initially used for the SoC (fixing bugs and adding new features) is much bigger than the pressure to upgrade to new major kernel versions.
Board manufacturers normally do not want to upgrade. I very rarely see device manufacturers do a major kernel upgrade when a new SDK with support for a new major kernel is available from the SoC vendor. This only happens when they need new features from the new major kernel which cannot easily be backported, a very big customer is requesting it, and they also have a good internal software engineering department.
> The bigger problem with these is usually not the vendor of the router, but the vendor of the SoC the device is built on.
That just means the device vendor isn't exerting the necessary pressure/valuation on the SoC manufacturer.
Of course the SoC manufacturer is gonna try and get away with bare minimum of work for absolute maximum of profit, just like the device vendor. The pressure needs to start at the end of the chain (which is end users) :/
I totally agree. But the amount of pressure required is incredibly big. As in, Google has been trying it for years in the Android world and is only seeing success now with drastic measures that vendors hate, like forbidding use of any kernel patches outright.
Yeah... this might be something that needs legal action, like manufacturer liability for security issues within a specific timeframe...
(could even be a "best before" date on electronics; at least this way you know what you get when you buy, the vendor can decide on their own how much support they want to offer, and the user can choose between the "good for 2 years" router and the "good for 5 years" one...)
I remember IBM donated (or released) an OpenPOWER 14nm CPU design (with a 7nm design coming in a few years' time). I wonder how far we are from having CoreBoot and mostly open hardware routers and appliances.
I’ve said it before and I’ll say it again: Netgear is the worst network equipment vendor whose gear I’ve ever owned. You just need to look through their “customer support” forums to see how understaffed and incompetent they are. Will never buy again.
Using a very old and unsupported kernel is completely normal for embedded consumer devices like home routers.
Normally the kernel is chosen by the SoC vendor when they start the new SoC project. Updating to a new major kernel version takes multiple man years of work because of all the vendor patches hacked into the kernel. Probably 500k to 2M lines of kernel code for such a router. After such an update you also have to run all your validation again which also takes effort.
Normally there are 4 years between the start of a new SoC project and the first device with this SoC hitting the consumer market. Then this SoC is used for new projects the next 4 years and shipped in new products an additional 4 years. Now the kernel is 12 years old. ;-)
Even when an SoC vendor provides a new SDK with a more recent kernel, most device manufacturers like Netgear would not upgrade existing products to this new SDK with the new kernel. They also have extensions to the kernel and would have to adapt them and then do an extensive validation again. Normally even security updates are only taken if someone proves that this specific device is affected.
Often board manufacturers do not even want to use the new SDK with the new kernel and security updates for new products when they already have devices with the old SDK, because this would reduce their opportunities for reuse.
The SoC vendor wants to reduce effort and will avoid supporting many different kernel versions. If the major customers do not want to upgrade to a more recent kernel version, the SoC vendor will stay on something old, because that is what the board manufacturers want.
The problem is that the customer does not care about security. The customer cares about security features you can print on a box, but not about something like publicly known kernel bugs being fixed within 6 months.
The home router industry is a hardware business; it is run by hardware experts and they run software like hardware. You start a project, build the system (hardware + software), validate the features and then ship it. Now you can start the next project.
If you want to change this you have to request it directly. For small customers this is not really useful, but if you are an ISP and buy 500k units a year, then you can put some pressure on the supply chain. Please communicate your requirements often and to many people in your supply chain, not as one of the 5000 requirements in an Excel sheet. If you decide against a vendor because of their software, communicate this to them directly and to many people in their organization, to increase the likelihood that it reaches someone who understands it and fixes it in the future. If they improve, you can choose between more vendors next time.
> Updating to a new major kernel version takes multiple man years of work
What factor of man years do you imagine OS vendors put into release engineering? Some open source vendors support multiple streams of the same kernel across multiple architectures, all at the same time, for over 10 years. The problem is not impossible, not even hard, just work.
> Probably 500k to 2M lines of kernel code for such a router.
What functionality do they include that is not included in standard Linux? I have set up a router that does the same as these systems with little work. No fancy GUI though.
I _guess_ they could not upstream their kernel's ./arch. However, the arch-specific code doesn't churn hard between major releases.
Do you think you're overselling the complexity they have to manage?
> The home router industry is a hardware business, it is run by hardware experts and they run software like hardware.
I think this is the crux of the discussion, they simply do not consider software updates as part of the life cycle, until they get customer demand they will not.
It depends on the hardware. For example, my D-Link DIR-320 (A) has 32MB of RAM and 4MB of ROM (3801599 bytes, to be precise). That's not enough to fit the latest kernel (IIRC 4.8 for OpenWRT) even without a GUI; I had to compile it with some hacks.
Edit: it has 256MB of RAM and 128MB of ROM. There is no excuse for this.
> What stood out for me was the kernel version of netgear equipment used: 2.6.36.4
Does having a modern kernel solve this issue? As I understand even on a modern linux-based router one should disable those ALGs as it's their [lack of] interface with connection state which is broken.
This is very cool, but it seems like it requires an egregious router bug. Why would a router consider a packet to be a legitimate SIP REGISTER when it occurs as part of a TCP connection which has already carried an HTTP header?
Is the story here that ALG is a crude hack, which works by naively looking at each packet, without trying to parse the whole stream, and so is vulnerable to this kind of confusion? If so, there are presumably endless varieties of ways to fool it.
Use guesswork parsing, get egregious bugs. The most popular and probably the most useful guesswork parser is the 'file' utility. When it was used as part of a printing pipeline it caused the hilarious "OpenOffice doesn't print on Tuesdays" bug.
I'm not sure how much has changed in the last 10 years, but when I did lots of VoIP work then, the consensus in many places was "every ALG is broken, start by disabling ALG". SIP is complicated in unexpected places and even with the best intentions ALGs broke packets by rewriting wrong parts or making header formats invalid.
But specifically regarding your question, sip looks like http. Same structure / similar response codes. They probably didn't want to write a very strict sip detector and ended up checking "looks like a verb + destination + headers? Good enough". Or maybe it's to support http proxies where an http CONNECT session can turn into sip-over-tcp? (Never seen that in practice, but I guess it's possible)
Or maybe it's just that the alg processing is per-packet, not per-session? Who knows. Just login to your router and turn off ALG. Your SIP phone most likely already knows how to deal with NAT using stun/turn.
It's not smuggling a SIP session request via HTTP headers - even if it didn't look anything like HTTP it would be vulnerable to this attack, because the controlled fragment is arbitrary binary data from their POST body. The problem is the router's firmware doing detection on each packet without checking that the fragment offset is 0 first.
I didn't say it relies on http headers. Just made a guess on what alg may be doing. Yeah - it's broken with fragmentation, and possibly in other ways too.
> Is the story here that ALG is a crude hack, which works by naively looking at each packet, without trying to parse the whole stream, and so is vulnerable to this kind of confusion? If so, there are presumably endless varieties of ways to fool it.
Yes, ALGs basically try to understand the protocol while implementing as little of it as possible. They are likely always broken in some way, and there is almost always a better way to solve the problem they attempt to solve.
the core of the hack is the realisation that one can generate arbitrary tcp or udp packets from a browser by exploiting ip packet fragmentation (embed the evil packet in a large http post request that gets fragmented at just the right place).
and worst of all: i don't see a quick way to mitigate this. afaict, router firmware will need to be updated to check the fragment offset, right?
Checking the fragment offset is the right way to run the ALG but the router really shouldn't be running a SIP ALG by default either. Not only do most home users not use SIP but most SIP users don't need the ALG. Browsers could also block the ports but that's a hack to work around the issue not a proper fix.
i have good news: the routers Spectrum began sending out this year not only have SIP ALG enabled but no way to disable it. in fact, just about the only thing you can change is DNS, and only via a smartphone app(!)
... except these arbitrary tcp/udp packets will be in IP fragments and therefore invalid.
The whole thing hinges on the NAT code NOT reassembling IP packets before passing them to ALG and the ALG also not observing IP fragmentation. These are bugs, and pretty severe at that, so the mitigation is just to patch the code.
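To make the fix concrete, here's a minimal sketch of the check (hand-rolled IPv4 header parse in Python; the function names are invented for illustration, this is not from any real firmware): an ALG should only parse a packet as the start of an application message if it is the first fragment.

```python
import struct

def ip_fragment_offset(packet: bytes) -> int:
    # Bytes 6-7 of the IPv4 header carry 3 flag bits followed by the
    # 13-bit fragment offset (measured in 8-byte units).
    flags_frag = struct.unpack("!H", packet[6:8])[0]
    return flags_frag & 0x1FFF

def safe_for_alg(packet: bytes) -> bool:
    # Only the first fragment (offset 0) can legitimately contain the
    # start of a SIP message; a later fragment is arbitrary payload
    # bytes (e.g. from the middle of an HTTP POST body).
    return ip_fragment_offset(packet) == 0
```

The exploit packets pass the ALG's content sniffing precisely because this offset check is missing.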
Go to your router's panel and disable sip support / alg. Even if you use sip, it's still going to work. (Apart from some really weird edge case network setups)
On OpenBSD, PF will reassemble fragmented packets by default.
Your question still stands, is this sufficient to prevent this attack? Perhaps someone who has a greater understanding of PF and of this exploit can respond.
PF doesn't implement any ALGs by default. To my knowledge, it doesn't even have any ALG capability. So, this type of attack would not work through a NAT implemented by PF unless you have separately added an ALG via hooks (like ftpsesame).
Just heard of draw.io for the first time. The product seems extremely polished. And it's free. And it's open source. And they have a non-trivial team. And they don't force you to sign up. And they make money. (I understand as a paid addon in Atlassian.) I'm baffled. Very interesting.
...and they are competitively priced for Atlassian as well. We migrated away from Gliffy to draw.io on price alone. If Gliffy's pricing had been near draw.io, we would have stayed, but it was >2x higher.
Has anyone been able to reproduce this? Or find a report from someone who could?
The exploit depends on the user’s router having enabled a broken ALG implementation that ignores the fragment offset. I’m sure there are routers in the wild with this bug, but so far I haven’t been able to find any reports of the POC exploit working for anyone who tried it (before it was disabled).
The POC shows a message about ALG not being enabled when it fails.
Samy says he used a Netgear R7000. Would be interesting to get some POC test results from other routers.
The attack relies on the ALG ignoring the IP fragment offset in the UDP case only. In the TCP case, he's splitting up packets by forcing a smaller MSS rather than by relying on IP fragmentation. The IP fragment offset for the attack packet, in the TCP case, will be zero as usual.
To detect the attack in the TCP case, the ALG would have to keep a stateful record of the initial sequence number on the outgoing SYN, and to accept the SIP REGISTER packet iff seq# == ISN+1.
Because home router firmware is invariably written by the lowest bidder, I would be surprised if very many firmwares actually do that.
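A stateful check like that could look roughly like this (a toy sketch; the class and method names are made up, and real firmware would track far more per-flow state):

```python
class SipAlgState:
    """Toy per-flow tracker: remember the ISN from the outgoing SYN and
    only treat data at ISN+1 as the start of the TCP stream."""

    def __init__(self):
        self._isn = {}  # flow tuple -> initial sequence number

    def on_syn(self, flow, seq):
        self._isn[flow] = seq

    def is_stream_start(self, flow, seq):
        # Only the very first data byte (seq == ISN+1, mod 2^32) may
        # begin a SIP REGISTER; attacker-controlled bytes buried later
        # in the stream carry a higher sequence number and are ignored.
        isn = self._isn.get(flow)
        return isn is not None and seq == (isn + 1) % 2**32
```

With per-packet, stateless inspection there is no ISN to compare against, which is exactly the gap the MSS-clamping trick exploits.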
Might be HN title filtering that did it? I think I’ve seen it downcase uppercase words before. Or OP could be on mobile and their phone did it. It’s anyone’s guess.
The mistake is web applications that rely on session cookies alone to authenticate requests. Of course it's not their fault that it's so difficult to secure web apps, but by now this is common knowledge (CSRF/Confused Deputy Attacks). Not allowing browsers to make requests to internal IPs would break many things. We should be moving towards an internet architecture where there is no such distinction anyway.
There is nothing special here about LAN IPs. This same problem impacts anything that sits behind a firewall that the browser might have access to.
A browser has no way to know that a particular IP address is supposed to be sacred and unreachable by other websites. I’ve seen several dashboards hosted externally that load things from internal IP addresses and that design is MEH but it works.
There absolutely is something special about most of these IPs. There are blocks that are reserved only for use in private networks. IETF standards for IPv4 on this topic have been in place for decades:
No, there isn’t anything special about them from a browser’s perspective. They have been reserved so they can be re-used across private networks without conflicting with routes on the Internet. An address in the rfc1918 space isn’t magic and a shitload of corporate/campus/etc networks depend on treating them just like any other IP address when you load stuff up in a browser.
If a page loads a resource from a private address, it can easily be a local cache on the network. It can also be a dashboard like the example I gave you.
Example:
local.dashboard.example.org resolves to 192.168.255.244
dashboard.example.org resolves to some public address and web server that hosts a page that loads resources from local.dashboard.example.org
You'd be happy to learn that some electronic ID "middleware" (being generous here) rely on the ability to talk to localhost servers (see NexU for instance — https://github.com/nowina-solutions/nexu).
That's not the suggestion. Nobody is suggesting that browsers shouldn't be able to load http://192.168.0.1 in the address bar.
The suggestion is that an http://web.url.com page should be blocked from making requests to LAN addresses, at least by default.
I'm fine with a browser setting for corporate/enterprise users, but most home internet users have no need or desire for web pages that load resources from the LAN.
There's still lots of use cases for internet to local communication. Not everything lives in the cloud completely and there will be services which legitimately need to get data from your internal endpoints.
People argue against IPv6 because they claim that NAT provides some security, even if it's just another layer of security by obscurity. The argument was already pretty shaky, but with attacks like this, it's pretty evident that firewalls provide security, not NAT.
I'd argue that NAT provides negative security in the long term.
Two reasons:
1) This attack is mostly built off the complexity that NAT requires. STUN, TURN, ICE, ALG all exist because of NAT. More complexity == harder to secure.
2) The actual goal of the attack is to allow sending packets to hosts on your network. The only reason this is even considered an attack is because treating NAT as a security layer leads to the neglect of other security aspects. It shouldn't be a problem for your smart doormat to get random packets, but we've been allowing vendors to get away with shipping devices/software that's only safe on a 'local' network for decades.
Maybe I'm wildly optimistic but if ipv6 had been the default for years and every random IoT device was expected to be exposed to the public internet by default, maybe we'd actually have something approaching decent host-level firewalling/etc by default too?
I think you are wildly optimistic.
All the services are not magically going to become secure. Even with IPv6, every router will come with a stateful firewall by default, set to "deny all incoming".
And that makes sense. The security scope of "my home" makes a lot of sense. I want every computer in my house to print on the printer without extra authentication. And the tiny CPU in smart doormat can probably be overwhelmed even by a tiny DOS attack.
Well, the basic consumers would want a wifi access point, which means a beefy CPU and embedded linux. So while the “router” part might be gone, the “firewall” part will still be there, since it is so cheap and provides visible benefits.
And the advanced people would not buy plain switches either, as they’d want some sort of dedicated firewall/IDS box.
> The actual goal of the attack is to allow sending packets to hosts on your network.
As I read it, it’s a little narrower than that. This attack allows someone to open up an unexpected port on your NAT gateway back to one specific host - the machine that ran the attack code in its browser. It, on its own, doesn’t get you to all the other hosts like the Smart Doormat or the IoT oven. (Though there’s a reasonable chance this gives the attacker, say, a redis port on your box, which gives them root, which then allows them to attack your doormat from there...)
this was my initial impression, i don't think you're wild OR optimistic but I don't think it'll ultimately be a NAT like solution, which hasn't really been synonymous with security afaik.
Agreed... I don't know why I get basically attacked online whenever I suggest that IPv4 with NAT offers typical internet users more privacy than IPv6, even with IPv6 privacy addresses.
Can you explain what privacy you think it offers over IPv6+privacy addresses?
As far as I can see it offers that activities behind the NAT get grouped behind one IP to the outside world, and I guess harder host discovery. Is there more?
No, I think that's about it. But that is quite significant. It means that an IPv6 address uniquely identifies one computer (albeit temporarily). An IPv4 with NAT does not identify one user, it might be anywhere from 1 to thousands.
A possible solution I'd like to see explored is to have each new web browser tab (and every process on your PC more generally) use a unique, fresh public IPv6 address by default.
One would only use a semi-static IP if running some type of server.
Ah, OK. I don't really find that too compelling, to be honest - what's your NAT buying you if you don't have many users to mix your traffic with?
I live alone and as far as I can see, all my NAT does for my privacy is a smidgen of plausible deniability (maybe a visitor on my guest wifi made that request), and that you can't tell which of my devices I'm using when I access a particular service - which is a small privacy win, but not a meaningful one in my book. Even for what I imagine is the common case of a family sharing one router, I'd guess it's usually not challenging to distinguish traffic between the relatively small number of potential users/devices.
If you're lucky(?) enough to be behind a NAT with enough hosts to be meaningful privacy-wise (dozens? hundreds?), and whoever's running this NAT isn't keeping logs about it (do they? honestly don't know), then you might be seeing a significant privacy upside. But it's not built-in to the protocol, and just adding a NAT doesn't buy you any privacy by itself.
I think the problem is that the privacy enhancing part of NAT is an emergent side-effect, not the main goal. Now it's got me thinking there should be something like NAT but explicitly targeted to privacy, like guaranteeing that your traffic is mixed through an IP that minimum a hundred other people are using. Or something like that.
> If you're lucky(?) enough to be behind a NAT with enough hosts to be meaningful privacy-wise (dozens? hundreds?), and whoever's running this NAT isn't keeping logs about it (do they? honestly don't know), then you might be seeing a significant privacy upside.
I worked out of a co-working space for a while that was explicit about not logging anything on their NATed shared internet connection. I saw more Google captchas using that (without a VPN) than I’d ever see with TOR. After I left, a friend I’d made there told me they eventually found out someone there was doing seriously blackhat seo stuff and they kicked him out. Took a few months but eventually all the captchas went away.
As those unique "fresh" addresses would have to come from some shared routable prefix (which is the core problem with the various IPv6 randomized address solutions), that would still provide strictly worse privacy than NAT: if you happened to be one machine before, you are now outing to the world the connection correlations between your tabs; and if you were multiple machines before, you are now separating the traffic from the multiple users. Meanwhile, no one thinks you are a large number of unrelated addresses (as no one is dumb enough to be tricked by your supposedly-separate addresses that all share the same prefix).
FWIW, one can totally implement NAT for IPv6, and in fact people have: everyone should always use NAT. NAT is the correct solution to this problem, whether you are mixing your traffic locally on your own network or are shifting your traffic through remote mixers (which is effectively what Orchid--the open source decentralized privacy network I work on, funded by some major VCs--does, though it frankly isn't there yet with "#privacy" despite having launched a long time ago and being usable today; btw: I haven't finished implementing this yet, but "use a separate circuit for each target host", which I think best achieves your dream of an address per tab, is one of the features I have been intending to have in Orchid for forever ;P).
(It makes sense for me to attach this to this comment, but I appreciate I am likely preaching the choir with respect to responding to you ;P.) The downsides people like to whine about for NAT involving peer-to-peer connections and hole punching are either misunderstandings (people often like to claim things don't work that do), are caused by shitty NAT implementations (that ignore the RFC recommendations on NAT), or were fixed by PCP/NAT-PMP (which is the technology everyone should be implementing support for in their software stacks, not IPv6).
(With respect to the attack in this article, this isn't an attack on NAT per se: it is an attack on these "application level gateways", which I frankly had always assumed were a "crazy" Linux IPMASQ feature, not something people were using in production. I appreciate why people are doing this: no one integrates PCP/NAT-PMP, and there exist older notably unencrypted and thereby already somewhat unfortunate protocols--such as FTP and SIP--that have port allocation assumptions that the router is trying to fix with heuristics. This mechanism doesn't work if the traffic is encrypted anyway--and all of these protocols support encryption as of 15 years ago, and a lot of users actually did switch--so I will claim these features should not be deployed: developers should use PCP/NAT-PMP in their clients or we should potentially integrate it into their operating systems, and not pretend that ALGs are much more than a dangerous MITM attack.)
> some shared routable prefix ... still provide strictly worse privacy than NAT
Why can't you share the prefix at the network level (ie where the NAT currently resides)? Just use cryptography to randomly distribute your device local prefix across that of the entire network.
For example (there's almost certainly a better way to do this), distribute a shared key when you assign device local prefixes via DHCP. To get a new address, allocate from the local prefix, encrypt with the shared key, and (slight hand waving here) map back into the network prefix. This will produce a pseudorandom distribution that is guaranteed not to overlap other devices (that use the same shared key OFC).
Depending on how downstream software made use of it, you might actually end up with better privacy characteristics than NAT. For example, an IPv6 address per browser tab mixed across multiple devices means that an adversary now has to do some amount of work just to reconstruct the range of the prefix that's being shared (where before NAT gave this information to them more or less for free).
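To make the hand-waving slightly more concrete: one self-contained way to get a collision-free keyed mapping over the suffix is a Feistel permutation. This is purely an illustrative sketch of the idea (all names invented; not a vetted cryptographic construction):

```python
import hashlib

def _round(half: int, key: bytes, i: int) -> int:
    # Round function: hash of the shared key, round index, and 32-bit half.
    digest = hashlib.sha256(key + bytes([i]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def permute_suffix(suffix: int, key: bytes) -> int:
    """4-round Feistel network over the 64-bit device-local suffix.
    Being a permutation, it maps distinct inputs to distinct outputs,
    so devices sharing the key are guaranteed never to collide."""
    left, right = suffix >> 32, suffix & 0xFFFFFFFF
    for i in range(4):
        left, right = right, left ^ _round(right, key, i)
    return (left << 32) | right
```

The "guaranteed not to overlap" property falls out of using a permutation rather than a hash: no two devices drawing from disjoint local ranges can ever map to the same network-visible suffix.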
You don't need to come up with this shared-key-DHCP scheme. SLAAC addresses already do this with privacy extensions, using NDP to resolve collisions (which are rare anyway, since the random identifier is 48 bits long).
Whoops I didn't realize the privacy extensions had managed to address this already. In other words, just collide and fix it up later.
Are you necessarily guaranteed 48 bits of address space though? Can't an ISP assign you any size prefix they like? A scheme that only works reliably (how efficiently does NDP resolve collisions?) on networks that observe some arbitrary convention seems like a bad idea to me.
The subnet prefix is a /64. Out of the remaining 64 bits, 16 bits are reserved to be FFFE. The remaining 48 bits used to be your MAC, and with privacy extensions are random.
I don't know how SLAAC works if the gateway's subnet is smaller than a /64. But I doubt any ISP would give you anything smaller than a /64; it'll create problems on their end with no benefit due to larger routing tables (which is the reason NDP exists for routing within a subnet in the first place).
It's not "collide and fix it up later". IIRC NDP is used to check for anything using the new address at the moment, and if it isn't then the device starts using it. NDP is already a core part of IPv6 (it's the only way to route within a subnet) so I don't think you have to worry about perf.
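The mechanics are simple enough to sketch: pick a random identifier, append it to the advertised /64, then probe with NDP before using it. A toy version using the stdlib `ipaddress` module (the function name is made up, and the NDP/DAD probing step is omitted):

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Toy RFC 4941-style temporary address: a random interface
    identifier appended to the advertised /64 prefix. A real stack
    would run duplicate address detection via NDP before using it."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC assumes a /64"
    return net[secrets.randbits(64)]
```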
Sorry, I think I didn't articulate very well. It's not just ISPs, can't we arbitrarily nest networks? And you might foreseeably desire privacy on one of these interior LANs. Or perhaps your ISP is abusive and you can't switch for some reason. I dunno. (I know, it's not expected to be an issue in practice, but edge cases bother me.)
At what point (if any) will a scheme relying on NDP begin to break down due to collisions becoming unacceptably expensive to resolve? 16 bits of address space? 10 bits? Or is the algorithm behind NDP so robust that it will work right up until the network is completely full, with every last available address in active use?
>It's not just ISPs, can't we arbitrarily nest networks?
You really don't want to have subnets smaller than /64. It's not worth it.
If you do have smaller subnets, all the subnets that together make up the /64 must be configured to share NDP messages with each other, this is called NDP proxying. But again, the point is IPv6 is specifically designed so that routing stops at the /64 level and uses NDP after that. So don't make subnets smaller than /64.
>And you might foreseeably desire privacy on one of these interior LANs. Or perhaps your ISP is abusive and you can't switch for some reason. I dunno.
If you want multiple subnets, ask your ISP to give you a larger block than a /64. They are likely to do so. In the US you have anywhere from tunnelbroker giving you a /48 if you want, to residential ISPs giving you at least a /60 if not a /56 if your router asks for it.
>At what point (if any) will a scheme relying on NDP begin to break down due to collisions becoming unacceptably expensive to resolve?
NDP is just ICMP messages. It doesn't scale with the number of devices like opening a new TCP connection with every other device on the subnet or something. You broadcast a message and wait for a response. Every time a new device connects to a subnet it has to use NDP to find the router. In the same way every time a device chooses a new SLAAC (privacy extensions) IP, it has to use NDP to find a collision. Every device on the subnet is getting every NDP packet anyway, because that's how subnets work.
>Or is the algorithm behind NDP so robust that it will work right up until the network is completely full, with every last available address in active use?
Thinking of edge cases is fine, but this edge case requires having about 280 trillion (2^48, 2.8E14) devices on a single subnet. It's very unrealistic.
> IPv6 is specifically designed so that routing stops at the /64 level and uses NDP after that
I thought IPv6 also supported subnets, DHCP, and static configuration, just like IPv4? Have I misunderstood? If SLAAC and NDP specifically require the full /64 that's fine, I just don't understand what the upsides to such a design choice would be. (But I'm obviously not a networking expert!)
> > perhaps your ISP is abusive
> If you want multiple subnets, ask your ISP to give you a larger block than a /64.
Long ago, I recall an ISP that charged extra for more than one active device. Their supplied router was perfectly capable of NAT but you had to pay extra if you wanted to use it (and there were multiple service tiers depending on how many devices you had). Obviously, the only reasonable way to proceed was to NAT everything using your own router behind theirs.
That particular practice seems unlikely to fly these days, but the point is that if an artificial restriction is possible to implement or information is freely available such capabilities will almost inevitably be abused by bad actors (see browser fingerprinting for a concrete example).
> NDP is just ICMP messages. It doesn't scale with the number of devices
Sorry, it seems I'm really bad at articulating this particular question about scaling. I was specifically wondering about a much smaller subnet, say 2^16. Once a significant fraction (say 90%) of the address space is being utilized, any attempt to select a new address at random should result in a proportional number of collisions. So if you were regularly cycling addresses, it seems there would be a _lot_ of overhead in that case.
> Every device on the subnet is getting every NDP packet anyway, because that's how subnets work.
That seems like it would break down at some point. Which is (one of the reasons, I thought) why subnets and VLANs and all that other stuff existed. If the answer is just "put that stuff in a block larger than /64" that's fine, I'd just never heard that before and don't understand the reasoning behind it. (It's not as though a single subnet is ever expected to exceed 2^32 active addresses?! So why insist on having 2^48?)
Privacy addresses rely on SLAAC, which requires /64s, which means you have a 2^64 space to pick random addresses from. I'd expect collisions to be rare up to about the square root of that (essentially this is a birthday attack), i.e. 2^32 active IPs.
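The birthday bound above can be checked numerically with the standard approximation P ≈ 1 − exp(−n²/2N). A quick sketch:

```python
import math

def collision_prob(n, bits):
    # Birthday approximation: chance of any collision among n uniform
    # random picks from a space of 2^bits addresses
    return 1 - math.exp(-n * n / (2.0 * 2**bits))

print(collision_prob(2**32, 64))   # ~0.39: collisions become likely near sqrt(2^64)
print(collision_prob(10_000, 64))  # negligible for any realistic subnet size
```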
NDP actually uses multicast, so your switches can filter out some of the NDP traffic so that only devices with IPs that share the last 24 bits will receive the NDP query. That should make it possible to scale a subnet to a substantially larger size than would be reasonable on v4.
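The solicited-node multicast group mentioned here is derived per RFC 4291: `ff02::1:ff00:0/104` plus the low 24 bits of the target unicast address. A sketch of the derivation (hypothetical helper, example address from documentation space):

```python
import ipaddress

def solicited_node(addr):
    # RFC 4291: ff02::1:ff00:0/104 plus the low 24 bits of the unicast address
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:db8::1234:5678"))  # ff02::1:ff34:5678
```

Only hosts whose addresses share those last 24 bits subscribe to that group, which is why switches doing MLD snooping can filter the queries.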
If you're using a /112 then you're not using SLAAC, and therefore aren't using privacy extensions. DHCPv6 does have an option for assigning temporary addresses, in which case the DHCPv6 server is responsible for avoiding collisions as usual... but really you shouldn't be using /112s. If you are then someone is screwing up somewhere.
There are a few advantages to using /64 as the subnet size: it makes it possible to generate a unique address directly from your EUI-64 address, it's used to help prevent L2 MITM attacks via SEND, and it makes it difficult to exhaustively scan a network to look for active hosts, which shuts down network scanning as a viable technique for spreading malware.
There also shouldn't be a need to dedicate more than 64 bits to the network side of the address. There are ~330 million /64s available... per person on the planet. Does it really need to be larger? And that's just in 2000::/3 as well; if we do in fact run out of space then we can restart allocations using a tighter allocation policy in one of the five other untouched /3s we have available.
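The EUI-64 derivation mentioned above is simple enough to sketch: the 64-bit interface ID comes from the 48-bit MAC by flipping the universal/local bit of the first octet and inserting `ff:fe` in the middle (modified EUI-64, RFC 4291), which is exactly why the interface-ID half of the address is 64 bits wide:

```python
def eui64_interface_id(mac):
    # Modified EUI-64 (RFC 4291): flip the universal/local bit of the first
    # octet and insert ff:fe between the two halves of the MAC address
    b = bytes(int(x, 16) for x in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```

(Privacy extensions later replaced this MAC-derived ID with random ones, but the 64-bit boundary stuck.)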
>I thought IPv6 also supported subnets, DHCP, and static configuration, just like IPv4?
It does support all of those.
Dagger2's comment already explained the benefit of using /64 as a subnet size. I want to clarify that when I was talking earlier about getting a /60 or larger from your ISP, the point was that a /60 can be divided into sixteen /64 subnets. So you do have multiple subnets. The only subnet division you should avoid is dividing smaller than a /64.
It's not impossible to work with subnets smaller than /64, as long as you enable NDP proxying like I said. E.g. some guides for setting up IPv6 for Docker containers, where the host doesn't have a delegated prefix and thus cannot be a sub-router, tell you to assign a /1xx to Docker and enable NDP proxying between NICs in the kernel. The nicer way, which I used to use in my homelab, is to delegate a /64 to your Docker host so that it can be a sub-router, and assign the whole /64 to Docker to assign IPs from. This of course requires that my main homelab router gets a bigger block than a /64 from my ISP. (My "ISP" was a tunnelbroker tunnel, and thus I got a /48.)
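The /60-into-sixteen-/64s arithmetic is easy to verify with the stdlib (the prefix below is illustrative documentation space, not a real delegation):

```python
import ipaddress

# A /60 delegation splits into 2^(64-60) = 16 separate /64 subnets
subnets = list(ipaddress.ip_network("2001:db8:0:10::/60").subnets(new_prefix=64))
print(len(subnets))             # 16
print(subnets[0], subnets[-1])  # 2001:db8:0:10::/64 2001:db8:0:1f::/64
```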
>Long ago, I recall an ISP that charged extra for more than one active device.
Well yes, if you absolutely must use an ISP that only gives you a /128, and you must use more than one device, then you'll need to set up NAT66, or any of the various IP tunnels to an external server, and forego having universally routable addresses on your devices.
> one can totally implement NAT for IPv6, and in fact people have
Please don't say things like this out loud!
The purpose of IPv6 was to eliminate the need for filthy hacks like NAT.
NAT does nothing for privacy, not in theory, not in practice.
Everyone trying to get at your private information has figured out hundreds and hundreds of methods for tracking you. Single pixel images. Browser fingerprinting. Hardware fingerprinting. Mouse movement patterns. You name it, they're doing it.
They're not at all slowed down by NAT, but the Internet is harmed by it.
I think the situation is quite a bit more complicated than you make out.
To my ISP, NAT obscures device browsing history (assuming there are multiple people and devices within a household). To the best of my knowledge an ISP has no realistic way of engaging in mass browser fingerprinting.
To a web host, NAT obscures the number of users behind a given IP address. Sure, they can likely recover some amount of information by engaging in browser fingerprinting but right off the bat it makes their job harder.
Security and privacy both involve layers. Every bit of information leaked is a concession to an adversary. When I switch my home network over to IPv6 I will almost certainly add NAT to it.
> the Internet is harmed by it
I don't believe you. Shitty software is harmed by it. If you have concrete examples to the contrary, I'm open to them.
> To my ISP, NAT obscures device browsing history (assuming there are multiple people and devices within a household). To the best of my knowledge an ISP has no realistic way of engaging in mass browser fingerprinting.
There's plenty of information that an ISP could silently listen in on, e.g. user-agent header, pre-STARTTLS cipher suites. And realistically how many people are there in a household, and how much do they reflect on each other? What's the threat model where this is a realistic improvement in your privacy?
> Sure, they can likely recover some amount of information by engaging in browser fingerprinting but right off the bat it makes their job harder.
> Security and privacy both involve layers. Every bit of information leaked is a concession to an adversary.
Weak privacy measures are worse than nothing just like weak security measures. Putting in effort to obscure one or two bits is a false economy. One solid layer (e.g. Tor) will protect you far better than any number of weak layers.
> I don't believe you. Shitty software is harmed by it. If you have concrete examples to the contrary, I'm open to them.
Everything peer-to-peer is made needlessly harder, and the result is centralisation that hurts the overall internet. E.g. in a non-NAT world, hosting a multiplayer game and letting your friends join is easy; with NAT, it's hard enough that people rely on the manufacturer providing servers (which they won't do indefinitely) instead.
I thought temporary addresses were supposed to (mostly) solve this and would be my preferred way. Though the default 24h lifetime is not quite as helpful, except for hopefully sidestepping the genius idea of embedding my MAC address into my IPv6 address.
It's in a nearby comment chain, but apparently my understanding was outdated and/or just incomplete. The SLAAC privacy extensions will make use of the full /64 prefix instead of a device-specific one. That makes it functionally equivalent to NAT if and only if addresses are rotated on a very frequent (e.g. per browser tab) basis. (I wonder if you could rotate per origin within the same tab?)
You do realise that most IPv6 implementations randomise the /64 "host" part of the address, right?
IPv4 NAT does not provide any additional privacy over IPv6.
You can jump up and down and claim the contrary, but it's just not the case.
Meanwhile, Facebook, Google, and the like are scraping petabytes of information about every Internet-connected person on the planet and selling it to the highest bidder. These organisations are not slowed down in the slightest by IPv4 or IPv6.
Maybe in an office setting, but in a normal home setup, if all it does is make it so you dont know which computer in my apartment is in use, that hardly seems like a privacy win.
Isn’t this an argument against ALG, not NAT? You could have ALG with IPv6 and no NAT involved, with the router enforcing deny-all inbound unless ports are explicitly opened (such as via this SIP ALG request).
Exactly this! Especially when you have multiple internet feeds into your router, but no assigned IPv6 block. E.g. the router has fibre from one carrier and LTE from another carrier, and you either want to actively use both in a load-balancing fashion, or use them in an HA/failover fashion without paying through the teeth to the ISPs for the privilege.
Fibre with LTE HA is an extremely common scenario in the physical retail world. And retail is notorious for tight margins. With IPv4 this is dead simple for a router to do using NAT. With IPv6 it’s a nightmare because everyone is against NAT in IPv6. It took ages to get some form of NAT, and it’s poorly supported.
Mostly agree, though to nitpick on a particular POV:
> multiple internet feeds .. assigned IPv6 block ... paying through the teeth to the ISPs for the privilege.
People like having their own network blocks, but typical route-my-subnet-here arrangement require all core routers to store the path to it. IMO assignments of globally re-routable addresses should be actively discouraged and you shouldn't get one unless you're an ISP. Not defending current payment arrangements.
Roaming should be done at or close to the endpoints - imagine if all internet routers had to keep and synchronize ~12G entries, usually in special CAM memory (currently at ~800K) - that does not scale, at all.
Gosh I wish IPv4 -> IPv6 was just
10.x.x.x -> 10.x.x.x.x.x
Instead we went to some hex-colon-double-colon BS. Like instead of "ssh 10.0.0.1" I have to now do "ssh -6 :://::2001:db8:ffff:abcd::34ff:2e6f::49af:fdef::"? Fuck that.
The simple fact that I can't memorize IPv6 addresses makes me lazy to use it. Whenever I want to SSH into a machine in 5 seconds flat I just rattle off its IPv4 address and I'm in.
There is a little known protocol invented in 1985 that even had a port number assigned to it by IANA. It's surprisingly well-supported by lots of systems even today!
It's called "The Domain Name System". I know it sounds alien and weird, but I assure you that you'll find that it greatly eases the burden of having to memorise network addresses!
Maybe your distribution has it included, just download it and give it a go.
Firewalls have bugs too. The real problem is bad incentives.
For an IoT manufacturer, a device which fails to work due to blocking is far worse than a security flaw. The first leads directly to chargebacks and a bad reputation. The second leads to... nothing.
So long as IPv4 exists, they have to at least use NAT, and their tracking is limited by IP pooling. But when we switch to IPv6 only, there's no good financial reason for devices to even have a 'deny all' stateful firewall.
It's more profitable not to bother with a firewall (just blame the customer if they don't install a firewall to shield the device) than to deal with creating the right firewall-policy exceptions; even a single policy error would have a much stronger financial effect than opening up the device to the entire world.
NAT necessitates a stateful firewall which does provide some security. You can easily have the stateful firewall without the NAT, and on IPv6. That is why this argument is silly.
That’s exactly right. My only point is to push back on the inference that NAT has absolutely zero impact on security posture. I liken NAT to a Ha-ha wall: it sometimes makes inbound more difficult under certain circumstances. It’s not real security. But it’s also not nothing.
It doesn't even give that. Stateful NAT necessitates state tracking but doesn't require a stateful firewall.
I've tested it -- NATing the outbound connections from your router has no impact whatsoever on inbound connections (which makes sense, but people have a really hard time wrapping their heads around the idea).
Regardless of its appropriateness or effectiveness, however weak it may be, in the real world where normal people live, NAT is the first line of defence against inbound attacks for most personal computers and personal devices on the internet.
The real criticism is that the presence of NAT has created dangerous complacency among consumers and operating system vendors, relying upon its pseudo-firewall properties for "security" that can be described as barely good enough, arguably not even that.
NAT can't prevent inbound connections, so... no, the first line of defence for most personal computers and personal devices on the internet, in the real world where normal people live, cannot be NAT. It's just not a thing that it does.
That's an RFC1918 IP. I'm not going to be able to reach it from here.
So long as you aren't also running a firewall, your ISP or anyone on your immediate upstream network could reach it just fine though. If you get me access to that network then I'd be happy to demonstrate.
It means the rewriting of the apparent source address of outbound connections, yes? On Linux, you would do it with `iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE`. Is that an accurate enough description to convince you that I know what NAT means?
"Rewriting the apparent source address of outbound connections" means that it's an operation you apply to outbound connections. Inbound connections are completely unaffected. This is made even more obvious by the `iptables` command above: it explicitly says "-o wan0", which restricts the rule to applying only to connections going out of wan0, not to ones coming in from it.
So yes, your ISP (or rather, anyone on the immediate upstream network, which is typically the ISP) can connect to your 10.1.5.12 -- that's just basic routing -- and NAT won't stop them because NAT doesn't do anything to inbound connections.
Ah, you seem to be under the misapprehension that consumer NAT devices act as BGP routers. No, in all normal configurations the “WAN” port listens on one IP only. Any traffic bound for a private/bogon range, even if it matches your internal range, is discarded, not routed or forwarded.
This is trivial to prove by placing a NAT router on your internal network (“double-NAT”) and attempting to connect to devices on the second layer of NAT from a device on the first.
But even if that were true, it would be a mere technicality that doesn’t refute the point made ten or so posts ago. NAT does remain an imperfect yet semi-functional pseudo-firewall for stray inbound connections from the Internet at large.
I don't think they act as BGP routers. They do act as regular routers though, and will accept and forward traffic for any range unless they have a firewall rejecting it.
> This is trivial to prove by placing a NAT router on your internal network (“double-NAT”) and attempting to connect to devices on the second layer of NAT from a device on the first.
I've done this test before, as I mentioned. But okay, since I can be wrong and you seem confident that I am, I'll do it again.
I just plugged a NAT router into my internal network, and attached my laptop behind it. The router got 172.16.1.130/24 as its WAN address, and is using 192.168.3.1/24 on the LAN side. My laptop got 192.168.3.158. I disabled the firewall on the router, since we both know that a firewall definitely will drop an inbound connection. On my desktop, connected to the main network, I ran `ip route add 192.168.3.0/24 via 172.16.1.130` to get the routing to work, and then I did this:
# telnet 192.168.3.158 22
Trying 192.168.3.158...
Connected to 192.168.3.158.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u3
I verified with `tcpdump` on my desktop that outbound connections from the laptop are being NATed to appear to come from 172.16.1.130, and yet I can still telnet into the laptop from the desktop. This appears to contradict your assertion that this traffic will be discarded, and it exactly matches my own assertion that someone on your immediate upstream network can connect inwards over NAT.
This is the same result I got the last time I tested this. So... can you explain what's going on? Because your trivial test seems to back me up here. If NAT was an imperfect yet semi-functional pseudo-firewall, why isn't it doing something here?
Curious. What brand/model of router did you use? My suspicion is that the manufacturer might be being "helpful" by bundling aspects of NAT configuration into its firewall configuration screen—so by disabling the firewall, you've also reconfigured the pseudo-firewall packet dropping behaviour of a typical NAT implementation.
Regardless, as I have repeatedly said, this isn't sufficient to invalidate my claim. To recap, here's the claim I made which you are attempting to dispute:
"NAT does provide some security, it’s just uncertain, unreliable..."
and
"...it sometimes makes inbound more difficult under certain circumstances. It’s not real security. But it’s also not nothing."
and
"however weak it may be, in the real world where normal people live, NAT is the first line of defence against inbound attacks for most personal computers"
Your counterpoint is that some non-zero number of NAT routing implementations will pass a very specific kind of traffic onto local IPs. Cool. For the sake of argument let's assume that's true of all consumer routers. All you've proven is that you're potentially at risk if the attacker has assumed control of the first outbound hop at your ISP. This doesn't for one second disprove that NAT has pseudo-firewall behaviour for any traffic passing through your ISP, which is the case for 99.99% of real world security threats.
I used OpenWRT, and bypassed the firewall with `iptables -I FORWARD -j ACCEPT`. There are no NAT-related rules in that chain, so I didn't do anything to the behavior of the NAT.
> Regardless as I have repeatedly said, this isn't sufficient to invalidate my claim
Okay, let's be clear: you've made an unsubstantiated claim with no evidence that isn't backed up by the known behavior of NAT. Your only support for this claim is to say that it's "trivial to prove" by doing this test, and when I actually went and did this test, it failed to prove your point.
Rather than invalidating your claim, it would be more accurate to say that neither of us has been able to validate it.
> This doesn't for one second disprove that NAT has pseudo-firewall behaviour for any traffic passing through your ISP, which is the case for 99.99% of real world security threats.
You're in luck: I have the necessary setup to test this as well. I happen to have a server subnet that's reachable from the internet, so let's try NATing the outbound connections from it and seeing what happens. (I'm going to edit the prefix on these IPs because I don't particularly want to commit them to the public record, but the suffix remains unedited.)
First, let's check what IP the outbound connections come from, and see what happens when we connect to that IP from a client elsewhere on the internet:
internetclient# curl -i http://203.0.113.136/200
HTTP/1.1 200 OK
So yeah, I added NAT and it did nothing to inbound connections coming from the internet, which demonstrates the lack of pseudo-firewall behavior for packets that traverse the ISP and not just ones from the next network over.
It's possible that this attack also works over IPv6, if the IPv6 firewall uses a similar ALG to open ports. It might even be easier, because there is no internal IP address to discover.
> hidden img tags to all common gateways (eg 192.168.0.1) are loaded in background
And why would browsers allow this? If I connect to an external IP address, why would the browser happily parse HTML tags that allow introspection of my local network? Unless same-origin, browsers should not allow this. Seems like a bug.
The browser can't know what's an external or internal address (not that even we humans have a meaningful definition; it's not really a concept compatible with current networking).
This is one of the reasons network level controls are problematic and the solution is being reinvented as "zero trust networking".
RFC1918 space and CGN NAT space etc. are ambiguous addresses (vs unique, like global IP space), but can be external or internal from your POV depending on circumstances.
Replying to myself with an addition: RFC1918 addresses are strictly worse wrt making up network-level policy than normal IP addresses; you can apply the same rules for normal addresses on your network, with the advantage that you know what they mean. (Good reason to prefer IPv6 too.)
It is, but it's a bit leaky simply because it's such a late addition. Img tags aren't subject to CORS (you can display images from anywhere). Access to the loaded image data is CORS controlled, but the onload and onerror handlers or the dimensions of the final img tag aren't restricted.
If you know for example the path of the netgear logo on a router, you can try loading it and determine success/failure. Existing CORS isn't strict enough to prevent this, and it's debatable whether it should be
If we could go back 30 years we might decide that img tags can only show images from the same domain. That would also have solved the whole hotlinking mess of the 2000s. We might also decide that img tags need explicit width/height declarations. That would also have prevented lots of reflow issues.
But we didn't do either of those. Changing that now would be too disruptive, the web is built on the assumption of basically eternal backwards compatibility. And with the amount of insight JavaScript has into the DOM of its own page it's basically impossible to hide the dimensions of a rendered element such as an image. So once you have an img tag without explicit size declaration, onload is basically a performance optimization that could be replaced by polling the position of surrounding elements.
If it was just a Linux box, they'd probably be much better off. It's all the crap that vendors add that usually causes problems.
Most people building their own iptables firewall would not enable any ALGs, since they're almost always the worst way to solve NAT problems. OTOH, vendors want everything to work out of the box as much as possible, even in weird edge cases with broken servers/clients, so they enable all available ALGs and sometimes don't even provide a way to turn them off.
I just tested whether my router has SIP ALG enabled with a test account on antisip.com (any provider will do), and softphones running on two devices. After initiating a call, I obtained the 'Contact:' IP and port from Wireshark live traces and used a port checker tool to test whether the router set up a forwarding rule.
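The final "port checker" step is just an external reachability probe against the address the router leaked. A minimal TCP sketch (hypothetical helper; note that SIP commonly runs over UDP, where a probe would need to elicit a reply instead of just connecting):

```python
import socket

def port_open(host, port, timeout=3.0):
    # Try a plain TCP connect to the ip:port seen in the Contact: header;
    # success means the router really did install a forwarding rule
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would be something like `port_open("203.0.113.5", 5060)`, with the IP and port taken from the Wireshark trace (that address is documentation space, not a real target).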
This is a truly marvelous piece of work. Expected nothing less seeing who it was from.
However, don't miss that it all works because Netfilter passes IP fragments to the ALG instead of either reassembling packets first or making the ALG fragment-aware. Either way it's a software bug, rather than an inherent flaw of the protocol stack.
There are two different mechanisms of interest here: IP fragmentation (IP packets get broken up into smaller chunks, because intermediate networks have an MTU which is too large to allow them to pass) and TCP segmentation (the TCP stack breaks up payloads into segments no bigger than the requested MSS).
The author here presents two variants of the attack. The UDP variant does indeed rely on IP fragmentation, but the TCP version does not. To mitigate the TCP version, the ALG needs to be aware of the state of the TCP connection. Simple IP fragment reassembly won't be enough.
Agreed. I'm labouring the point because a lot of comments here are waving this away as a one-line stateless change ("just check the fragmentation offset is zero"), when in fact it might mean switching from a completely stateless ALG to a stateful ALG. That is a bigger deal from the code complexity point of view, because it means the ALG has to start tracking TCP connections.
I don't see any way the ALG could work reliably without being stateful - after all, the string it's looking for might end up split across multiple packets for any number of reasons out of control of the network hardware.
Perhaps this is the reason why all of these kinds of protocols always seem unreliable...
It might, but in a very high proportion of cases the networking stack takes the happy path and a single logical protocol unit translates directly to a single packet on the wire. That being the practical reality, it's quite possible to write a stateless protocol parser and have it apparently work. Except, as you point out, when it doesn't.
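The split-across-packets failure mode is easy to illustrate with a toy (hypothetical byte strings, not real SIP traffic): a scanner that inspects each packet in isolation misses a keyword straddling two segments, while one that reassembles the stream does not:

```python
PATTERN = b"REGISTER"

def stateless_scan(packets):
    # Inspect each packet on its own, as a stateless ALG would
    return any(PATTERN in p for p in packets)

def stateful_scan(packets):
    # Reassemble the byte stream first, as a connection-tracking ALG would
    return PATTERN in b"".join(packets)

split = [b"...REGIS", b"TER sip:..."]
print(stateless_scan(split))  # False -- the split keyword is invisible
print(stateful_scan(split))   # True
```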
If the device is doing NAT, it already keeps track of whether a TCP connection is new or not. It feels like it should be easy enough to check that state when applying the ALG rule.
Or can a SIP REGISTER happen in a packet after the first?
There's this and a hundred other ways to get past a firewall, and NAT is an ugly hack to extend the life of IPv4 not a security feature.
Firewalls are a false god. If something can't be connected to the open network and remain reasonably secure, it is not secure.
Firewalls are a necessary evil because there's a lot of insecure junk, sure, but relying on them encourages more insecure junk to be made and encourages lazy security practices in general.
That's actually not secure either, for a different reason -- DNS Rebinding exposes localhost-only servers to the outside world through your web browser.
Yes. You can see that this bug was marked wontfix. Ultimately, it seems that browsers have decided that breaking DNS rebinding breaks an unacceptable number of legitimate use cases.
You can still fix it at your own router/DNS server, though.
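Router-side mitigations (dnsmasq's `--stop-dns-rebind`, for instance) work by refusing upstream DNS answers that point into private or loopback space. The check itself is simple; a sketch with a hypothetical helper and example answers:

```python
import ipaddress

def rebinding_suspects(answers):
    # A public hostname resolving into private/loopback space is the classic
    # DNS-rebinding signature that router-side filters drop
    return [ip for ip in answers if not ipaddress.ip_address(ip).is_global]

print(rebinding_suspects(["93.184.216.34", "192.168.0.1", "127.0.0.1"]))
# ['192.168.0.1', '127.0.0.1']
```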
> You could use dd, however you'd want a large bs (block size) so that it would output quickly, eg 1024, however the skip attribute (to tell it to start at the location of the squashfs blob) would respect the block size and 2221098 isn’t obviously divisible in anything quickly in my head other than 2…
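GNU dd actually sidesteps the block-size arithmetic with `iflag=skip_bytes` (or a painfully slow `bs=1`), but outside dd, seeking to a byte offset is a one-liner anyway. A sketch with hypothetical file names and the offset from the quote:

```python
def carve(src_path, dst_path, offset):
    # Seek straight to the byte offset and copy the rest of the file --
    # no block-size divisibility games required
    with open(src_path, "rb") as src:
        src.seek(offset)
        blob = src.read()
    with open(dst_path, "wb") as dst:
        dst.write(blob)

# e.g. carve("firmware.bin", "rootfs.squashfs", 2221098)
```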
This looks neat, but it seems like DNS Rebinding is likely more powerful here -- that would get you access to servers on the entire local network (not just the victim's own machine) without having to know the local browser's own IP first, so both the exploitation potential and the ease of carrying out the attack seem better for DNS Rebinding to me.
Also, DNS Rebinding exposes services which are only listening on local interfaces (127.0.0.1) and I suppose this wouldn't.
Could someone help me understand some common circumstances in which using NAT Slipstreaming would be a better choice?
With DNS rebinding, you can still only send HTTP requests* to the target. With this attack, you have a direct, raw TCP/UDP socket.
(*) I'm simplifying, what I mean is that DNS rebinding still limits you to only what you can do in the browser, which is effectively HTTP. Most non-HTTP services will generally just close your socket once they see you send an HTTP request.
I do not play 3D games in the browser. If that is a use case, then WebGL has to be enabled, of course.
If browser games were a use case of mine, then I'd keep my safe "daily driver" Firefox for surfing the WWW with the mentioned restrictions in place. I'd use a separate Chrome browser as a "game client".
I've always been really suspicious of ALGs, and disable them whenever possible (or simply don't use them in the first place when setting up my own iptables firewall etc).
They are generally somewhat neglected, and fairly complex, and they try to understand the flow of a protocol without really implementing the protocol, making them ripe for confusion-type attacks.
I understand _how_ the attack would work, but I do not know if the linked page is supposed to be a proof-of-concept of the attack, or where I would click on the cited web page to demonstrate the attack on my machine. What am I missing?
Unpatched Windows 10 (17134 1098) running an ancient Chrome 67.
My understanding from reading this is that it requires ALG to be supported by the firewall/router. pfSense doesn’t seem to have ALG as an option to enable. Am I correct in my assumption that this does not affect a network using pfSense as its firewall?
> That is a kernel nearly 10 years old. Of course, the router itself was released 7 years ago... and of course the vendor stopped updating firmware as soon as they could... and of course it's still in production and for sale... and of course it costs $150+.
It's a good thing dd-wrt and similar exist.