an application binary interface project for Linux that allows programs to take advantage of the benefits of x86-64 (larger number of CPU registers, better floating-point performance, ... ) while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers
So in other words, it's a 32-bit address space in 64-bit mode? I thought that wasn't possible with x86-64... in fact, the biggest mistake with x86-64, I think, after some (admittedly not much) reading of the manuals, is that it isn't like the 16/32-bit modes, where you could independently choose the default operand and address sizes, and use 32-bit addresses or registers in 16-bit mode and vice versa.
Not sure what you were reading, but to the best of my knowledge, long mode retains almost all of the 32-bit instructions; only a relative handful were ripped out.
Code running in 32-bit protected mode can't access 64-bit instructions or registers, of course, but that's a different issue. x32 is basically just long mode code that limits itself to instructions that use 32-bit addresses.
After checking the docs, it seems the fast syscall instructions force the GDT to be laid out in a specific order. Since the segment values (selectors) are really offsets into the GDT, it makes sense that they end up being the same.
Probably not; according to Wikipedia you cannot go to virtual-8086 mode from "long mode" (aka 64-bit mode), which also explains why 64-bit Windows no longer ships an NTVDM or runs 16-bit DOS and Windows applications.
> Code running in 32-bit protected mode can't access 64-bit instructions or registers, of course, but that's a different issue.
That's what I was referring to - in 16-bit real mode you can use 32-bit registers and addressing modes with prefix bytes, just like you can use 16-bit operands or addresses in 32-bit mode, and in protected mode both 16- and 32-bit segments can coexist in the same system. But it looks like 64-bit is completely isolated from that, although they could've used some other prefixes, enabling access to 64-bit operands (including the extra registers) and addresses from any mode, and also allowing coexistence of 64-bit segments with existing 32- and 16-bit segments. Then x32 wouldn't really need to exist; it would just be a 32-bit segment with instructions that use 64-bit operands.
x32 uses 64-bit registers to address memory in a theoretically 64-bit space. It simply decides to map all memory within the bottom 32 bits, so that when it stores pointers to memory they are 32 bits wide.
I don't run Ubuntu myself, but CONFIG_X86_X32 isn't a new architecture; it's an option for 64-bit kernels. Based on the state of that bug report, it looks like it's an option they've enabled in their recent 64-bit kernels (Quantal+).
You need a completely different userland, compiled for x32 instead of amd64. Just running the default amd64 installation with an x32-enabled kernel doesn't make it x32. EDIT: It would also work for just running x32 apps on an amd64 system, but then you only get the x32 benefits for those applications.
You don't need a whole x32 userland in order to be exploited. Just a single x32 binary executable that happens not to depend on any external libraries.
I didn't mean for an exploit, I just meant that you don't have any use for an x32 kernel on your system if you don't have x32 apps and libraries (unless you build a statically linked x32 app).
I realized after writing my post that you don't really need an entire system, just the x32 libraries (and not even those if you statically link your x32 application).
While we sometimes get laughed at for not running the latest and greatest, I am (once again) quite glad that I'm running RHEL 5 and 6 and their 2.6.18 and 2.6.32 kernels.
I'm not sure you noticed, but this doesn't affect any popular distro. It only affects kernels built with the X32 ABI, which is not used in production by anyone as far as I can tell. So the fact that you're running old kernels didn't prevent anything this time.
At the bottom there are two lines that taken together really confuse me:
<grsecurity> If you're running Linux 3.4 or newer and enabled CONFIG_X86_X32 , you need to disable it or update
immediately; upstream vuln CVE-2014-0038
and
<grsecurity> In case there's confusion, this vuln is not about 32bit userland on 64bit (CONFIG_X86_32), but the new X32
ABI. Ubuntu enables it recently
Does the second line affect the first? EDIT: I ask because it looks like I need to fix my kernel, but I'd rather be lazy if possible.
x32 is a separate ABI, distinct from the normal 64-bit and 32-bit ABIs.
Most distros don't use it, most don't even offer an x32 install image, but if they enable the kernel option for the x32 abi even without intending to use it, they're vulnerable.
If you're using the x32 abi, either you're using it on purpose, or your distro happens to offer an x32 install image and you downloaded it by accident instead of the normal 32-bit image you probably wanted. In all other cases it should be safe enough to disable the kernel option.
Does "grep CONFIG_X86_X32 /boot/config-`uname -r`" give you any results which aren't commented out? If it does, you're at risk; if not, nothing to worry about (at least, with regards to this vulnerability).
Has anyone seen talk of this leading to an expedited 3.13.2/3.12.10 release? If I need to manually patch my kernels to fix this, I'll do it, but if upstream is going to release fixed versions shortly, that seems like the preferable solution.
So far though, I haven't seen anything about merging in the patch provided by PaX.
As far as I know, no. I'm about 90% sure you have to rebuild the kernel to disable it. That said, I've only read about it, not played with it, and I don't have a box with it enabled at my fingertips.
That said, this is a local exploit, so unless you have untrusted users or otherwise run untrusted native code, your risk is pretty low. Someone would have to exploit something else first to get access to run custom code on the box.
There's always Ksplice, though it's only really available for Oracle (they also distribute free Ubuntu Desktop and Fedora patches to entice you, but I would be willing to bet you're more interested in server).
At this level, you're dealing with an API at the calling end (you should be fairly intimately familiar with the semantics, without much reference to the docs, if you're working on the implementation), and the functions being called are almost readable as English.
Code should strive to be written such that it doesn't need comments unless it's being clever - and it should avoid being clever if possible.
Also, I'm not sure where you saw a "wall of code". I don't see any walls of code, not in the commit diff, nor in the smaller diff in the email. A wall of code, for me, would have to be long (say, 70+ lines - but it depends on the language) and dense (e.g. boolean expressions complex enough to need parentheses to clarify precedence).
That's an attitude of "stay the fuck out of my code - you're too stupid to understand it without comments". I've seen it too often in "large code bases".
"you should be fairly intimately familiar with the semantics" - do you see a problem bootstrapping that on an obscure code base?
> …do you see a problem bootstrapping that on an obscure code base?
No, I don't. It just requires that you can read C and understand it. Any large codebase (in whatever language) is going to require you to understand certain semantics and idioms, and it doesn't make any sense to document those idioms every single place you use them. Big codebases require context and a comment in some random function isn't going to give you enough context unless it was a tome.
Also, did you see the 2 patches linked in the bug report? They significantly changed the code. C is just like that. It's an environment ripe for comment bitrot. And the only thing worse than no comments is incorrect comments.
In the end, the code is what matters. If you can read it, you can understand it.
Yes, I've seen, read, and kind of felt what the patch does. If you claim you _understand_ the code, then I call you on that.
The problem with saying "the only thing worse than X is Y" is the preconceived idea that there is indeed only a _single_ thing worse - all the rest being better or equal.
No, worse than missing comments is also _bad code_ which was the case here. And bad attitudes like "my code is prone to bit rot so I won't comment it" or "my code is self-explanatory so I won't comment it" or worse "my code is so special and super optimized and smart that I will not comment it so only super special guys will be able to modify it". And also "I'm a kernel maintainer so I don't have to comment anything"
Excuse me, based on what experience do you claim that it was bad code? Based on what experience do you claim that there should be more comments? It looks to me like you don't have anything to support what you say, and I'd appreciate you showing me otherwise.
Have you ever actually programmed anything for the Linux kernel? What have you programmed that would be relevant and give weight to your (to me, dubious) claims?
I don't claim anything. The report claims "This appears to be a serious bug". If it wasn't, then they would not have suggested a patch and we would not be having this conversation.
Now, I don't have to show you anything. What are you talking about?
Do you dismiss me because I extrapolate developer attitudes from their obtuse/obscure code (dubious - discuss), or because I can't keep a stack of callbacks in my mind to get myself out of the maze (true, by the way)?
But what about that function? It has 5 arguments! Do you claim that they need no explanation whatsoever? And don't cheat. Tell me, without looking: what is "type" supposed to contain?
It was bad code because it dereferenced a user mode pointer without going through copy_from_user/copy_to_user. That much is clear.
That's not to say that nobody else has made this mistake or that it won't be made again by otherwise capable programmers. One can talk about this as bad code while still having sympathies for the error. Hate the sin, not the sinner.
The issue I raised is not about this particular piece of code or about it being defective - it could have been perfect, for what it's worth.
The deeper issue is about code as it is communicated to the next developer.
For various reasons, most developers (me included) do not talk to the next developer but to the compiler. And some other guy comes along later who doesn't know about the copy_to/from_user shit because he is not exposed to the internals.
He might be a competent programmer, but he lacks exposure. He might also be able to spot the dereference in a second, but when he is reviewing the code he's keeping a stack of several levels of irrelevant context in his head that blurs his view. He could have avoided all that if only the code had been written with him in mind.
Having done a small amount of work with the Linux kernel, I do not consider this an informed position. There are a lot of conventions being followed here, and they are followed consistently throughout the code base - they become more and more obvious the more code you read. Not knowing about copy_{from,to}_user is a very rookie mistake. I find it hard to believe the author of this code didn't know about it (they even annotated the parameter as a user pointer); my guess is they probably just figured that the functions they were calling already performed this check and didn't think it needed to be inserted twice.
Edit: Re-reading your comment, I also wanted to point out that handling user-mode pointers in a kernel is a difficult problem that requires discipline. You can't keep a straight face and say, as you do, that it can be handled in a way that lets newbies come right in and write bug-free code without learning these conventions. The only sane way is to create a workable set of conventions, apply them consistently, and do what you can to educate new contributors as they come along. It's not unique to Linux - I used to work at MS and can tell you the NT kernel has the same issues.
Man - don't take my example as something to extrapolate from. See the big picture and where my example comes from. Extrapolate from there.
This particular piece of code is simple and easy to understand and follow. The function is small - you can take a few hours and understand the stack.
The point is, you have to play the compiler game in order to understand. The original coder is not helping you in any way; the code gives you no context. What's the place of this (or any) function? Why was it written in the first place? What problem is it trying to solve?
There are some things that the code alone can't tell. Because the code talks to the compiler and the compiler doesn't care about context - people do. And people understand either by taking the hard way (playing the compiler game - having to jump back & forth) or talking to a human. And the code comments are the best that you can get short of talking directly to the developer.
I think you have abstracted yourself too far away from the details here.
I'm all about leaving code maintainable for the next guy, however I think we disagree about what that means. To me, it's largely about writing your code in consistent patterns that make certain classes of bug "pop out" at you (because doing things that way would stand out and look like a break from the convention).
However as a practical concern, if you're writing kernel code you shouldn't assume your future maintainers don't know about the distinction between user and kernel address spaces. And you shouldn't have to be very detailed about every small little compatibility shim you write - if compat_sys_recvfrom does very little else but call sys_recvfrom without much comment or fanfare that is OK by me (keep in mind there's going to be one of these wrappers for almost every syscall).
Indeed, you have to draw a line somewhere. Do you explain (1) 2+2, (2) user/kernel space, (3) what a _compat usually is for or (4) what this particular _compat does?
I for one would draw the line before (4). The original developer drew it after. If I were explaining API conventions I would draw it before (3), but the crypto _API_, for example, has the line after (4). And there are appropriate places for (1) and (2) as well.
The point is, you write for some audience. But even if the code is clean, a newcomer will have a hard time getting up to speed if he has no idea about the context. The way I see it, for some parts of the kernel, either the code is too precious to be touched or the maintainers have no interest in explaining it to an outsider. It's like they'd be losing knowledge by sharing it.
Think of newcomers. How can you make them contribute? Or maybe the barrier to entry is high on purpose?
I'm reading Clean Code (again). I don't see that code as being particularly bad, but it definitely wouldn't follow the axioms in that highly regarded book. I realize it isn't Java, but can someone familiar with the text comment on the number of parameters in that function and the naming of some of those variables?
Relevant quote:
"The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification—and then shouldn’t be used anyway."
I find the idea that the ideal function takes no arguments to be astonishing. There are only two kinds of functions that take no arguments:
1. Those that return the same result every time.
2. Those that mutate some internal state (or, roughly equivalently, those that inspect internal state being mutated elsewhere).
#1 isn't all that useful in general, although there are obviously cases where it's exactly what you want. For #2, while state is useful at times and a necessary evil in many other cases, it's hardly ideal.
Even a single parameter seems decidedly un-ideal. To do useful work in general, you're typically going to want to take two parameters: something to be modified, and the modification to make. This can be done with state (modify in place) or functionally (return a new object with the modifications applied). Once again, having a single argument seems to imply either something not very useful (basically a getter or some similar derive-a-value function) or something relying on mutable state. You need two arguments to achieve the sort of combinatorial power that makes functions interesting and properly useful.
I wonder if the book, which I assume is talking about Java, is ignoring the implicit zeroth argument which is the object you're invoking the function on. Obviously in C you have to make that parameter explicit, but I've got enough experience of both languages to know that people often don't consider the object to be a parameter to one of its member functions.
So if you're converting that rule back to C, you'd need to add one to each element of the rule. The one-parameter rule is saying that when special cases are common, a special function to handle them is a good thing: writing `i++` rather than `i = i + 1`, or `next(node)` rather than `skip(node, 1)`.
That would make sense, except for the part where Java doesn't have functions at all, but rather methods. I imagine you could call them functions, but I haven't seen it done, and that sort of terminology sloppiness leads exactly to confusion like this.
Thanks for the responses. The text is definitely referring to Java methods as functions, and not counting the implicit object instance (this) as a parameter. The text also considers multiple parameters of the same type that are treated identically as lists counting as one parameter.
Appreciate the clarification. The advice sounds much more sane that way. It sounds like a terribly confusing way to put it, but maybe it's better in context.
Zero? So not even pure functional programming is OK, since that tends to pass state as function arguments? We should all just happily poke our global state from our niladic functions?
Haven't read that book, but color me neon skeptical.
Well, POSIX requires more than three arguments for some system calls. But around six is the limit for most architectures; they are passed in registers for efficiency.
So, the function in question operates on a socket object. In Java that would be implicit as the object on which you invoke the method, but in C it must be an explicit function argument - that accounts for the first `fd` parameter.
The second mmsg parameter points to an array of compat_mmsg objects. In C, the array length must be passed as a separate parameter, so the second and third parameters are really one logical array parameter.
It looks perfectly readable to me, and I'm not even a big C guy (though I do have a general familiarity with it, and more with C++). Code shouldn't really need comments in most cases, especially if you're writing for people who can be assumed to be familiar with the coding style/idioms used (such as the boilerplate code for exploiting a memory vulnerability). (Not that I'm saying this is such boilerplate - I don't have enough C experience to say whether it is or not. It just seems reasonable to assume from context.)
Yeah, you understand the code - we all do. But it doesn't tell you anything about the context the code runs in. And its context doesn't tell you about its context either.
The function names do. Honestly, this is very, very simple code. Most of the code in a kernel is nothing more than bookkeeping or glue connecting things. This is glue.
There are cues here that are consistent with conventions in the Linux kernel. For example the sys prefix on the function name tells you everything you need to know about "the context the code runs in" - it's a syscall. "compat" gives you a hint that it's a wrapper for another ABI.
http://en.wikipedia.org/wiki/X32_ABI