Absolute Beginner's Guide to Bit Shifting (stackoverflow.com)
101 points by ksetyadi on March 4, 2012 | hide | past | favorite | 20 comments



Beyond OS kernels, drivers, and embedded systems, is there any real use for this?

I only ask because we were never taught about bit shifting at university and I can't think of a time where it would have ever been useful in my work but despite this it seems to be a very common thing to ask about at interviews so I have sort of educated myself about it for that reason alone.


I first learned about bit-shifting when I took DIP / Computer Vision as an undergrad. All the assignments were done as plugins for ImageJ, which is apparently widely used in the scientific community (or so the course claimed). ImageJ stores the pixel values for images as bytes, ints, or longs (depending on the color-depth), so to get the individual component values from a 32-bit RGBA image (8 bits per channel), you would do something like this:

int pixel = image.get(x, y);

int alphaVal = (pixel & 0xFF000000) >>> 24; // >>> so the sign bit doesn't smear into the alpha value

int redVal = (pixel & 0x00FF0000) >> 16;

int greenVal = (pixel & 0x0000FF00) >> 8;

int blueVal = (pixel & 0x000000FF);

That's just one example w/ one piece of software, but I know similar approaches are often used within the world of imaging / graphics. Maybe networking? Seems like it would correlate well to IP address operations.


I bookmarked this comment since it more or less answers that question: http://news.ycombinator.com/item?id=3452869

In short, they can help make a performance-sensitive application blazing fast. They're also helpful in any piece of code that executes many times, since micro-optimizations start to matter there too. For the same reason most of us don't write in assembly, most of us probably don't need them for our applications; we're free to waste cycles. But besides being fun/interesting, the practical applications where bit hacks can be beneficial do indeed go beyond your short list. (Game engines (physics, graphics, AI, networking) and databases are two more general topics off the top of my head; compression is another, though it could just be a special case of databases.)


I've used bit shifting a lot when dealing with any sort of audio/video application. Specifically, when muxing audio and video into a container (e.g. MPEG transport streams) you need to set up a bunch of bit flags that are packed very tightly, and you frequently need to write data at non-byte boundaries. The result is a couple hundred lines of code that are all pointer arithmetic and bit shifts, converting between verbose data structures and the format in question.


> I only ask because we were never taught about bit shifting at university

At CMU, three of the first four intro programming classes discuss bit shifting in depth.


Are OS kernel, driver, and embedded systems development off topic on HN? There were several embedded developer posts in the latest Who's Hiring thread. Web development is by far the largest software segment that HN focuses on, but is by no means the only one.


I never suggested it was off topic, I usually enjoy reading the low level programming threads on HN.

I just wondered why it seems to be considered something every CS grad/programmer should understand.


As an embedded developer/EE, I feel the same way, but from the other side. There is too much breadth for any single person to be proficient at all 7 layers, but from time to time, you still see job descriptions that ask for assembly, C, C++, Java, PHP, HTML, Python, CSS, Flash, Haskell, PCB Layout, VHDL, 7 yrs iOS...


> Beyond OS kernels , drivers and embedded systems is there any real use for this?

If you're parsing a binary format you'll probably use bit shifting.

Bit shifting is just bitwise arithmetic. Assuming you did CS or a related degree, it's more likely that you forgot being taught it. In fact, it'd be pretty hard to come up with a CS syllabus that doesn't mention bitwise arithmetic.

These are some subjects where bitwise arithmetic is bound to appear: Intro to Programming, Data Structures, Operating Systems, Computer Architecture/Organization, Graphics, Cryptography, Implementation of Programming Languages.


> we were never taught about bit shifting at university

This is surprising to me. I'm taking a computer architecture course now, and it's the third class in which bitwise operations have been mentioned, this time in the greatest detail (exactly how they're implemented in hardware). In fact, I think they were first mentioned in my intro to CS course, which used C++ (more like C with streams, but whatever).

Is it a CS program or something like CIS?


CS courses tend to explain what bit shifting is but not when you would use it.

For example you'll often see a lecturer drawing lots of 1s and 0s on a whiteboard to illustrate it but you're less likely to see a simple example Java program which uses shifting to get RGB values deconstructed.


Please don't read just the selected answer, because he got the optimizations wrong. Bit shifting operations are always faster than multiplications (as the guy with the second most voted answer explains).


In fact, bit shifting is always faster than anything. At the hardware level, it's just wiring; there is no logic involved in shifting the bits, so there is no propagation delay involved, unlike even the single logic gate operations like &, |, ~, etc.

It's so much faster that many compilers will optimize integer multiplications by converting them to shifts and adds. Integer division, however, usually just involves a lookup table.


> In fact, bit shifting is always faster than anything.

This is overly simplistic. A fixed bitwise mapping is simple wiring and about as fast as anything can be, yes. Extending that, the "fastest" 32-bit shifter might be to put down 31 circuits, one for each possible shift amount. But that still requires a mux tree to select between the different circuits, which already goes beyond simple wiring.

While that might be the minimum-delay implementation, it's also very area inefficient. A more area-efficient design would be to cascade lg(32) = 5 muxed shifters, one for each bit in the shift amount. For example, x << 19 is the same as ((x << 16) << 2) << 1. If you read a VLSI design textbook, you'll see that there are all kinds of low-level tricks for designing barrel shifters, especially when it comes to layout.

Finally, the practical reality for programmers is that variable-amount shifts and rotates are relatively expensive on quite a few processors.


As well as the practical details of barrel shifter implementation design that psykotic points out, older CPUs couldn't afford the space for a barrel shifter and implemented shifts as a sequence of one-bit shifts. So for instance the 8086 took 8 clock cycles plus another 4 cycles per bit shift, and on that kind of CPU it was definitely not as fast as an addition.

Incidentally there is a standard trick for integer division by constant which allows you to convert it into a series of 3 to 5 shift and add/subtract operations; any decent C compiler will implement this. _Hacker's Delight_ has the explanation of the algorithm, I think.


Here's an article explaining the trick:

http://ridiculousfish.com/blog/posts/labor-of-division-episo...

And a library from the same guy for generating code at runtime to do fast division by constants:

http://libdivide.com/


You're reading it backwards.

> A good optimizing compiler will substitute shifts for multiplications when possible.

"substitute shifts for multiplications" means that multiplications will be replaced with shifts.

The wordier phrase would be "substitute with shifts for multiplications" or "substitute for multiplications with shifts".


Also, although the question is tagged for "c", the top answer has some Java specific examples.


This could've helped me last week. I was pulling my hair out debugging a CUDA implementation of the SHA3 candidate BLAKE, and after a full week of debugging the same 70 lines (and one complete rewrite) the issue turned out to be most-significant-bit padding with arithmetic right shifts. I just changed every 'char' type to 'uint8_t' and the code worked perfectly.


In machine language, we refer to these as rotating and shifting, instead of "shifting and non-circular shifting".



