Is shifting bits faster than multiplying and dividing in Java? .NET?

C#, Java, .NET, Optimization, Bit Manipulation

C# Problem Overview


Shifting bits left and right is apparently faster than multiplication and division operations on most, maybe even all, CPUs if you happen to be using a power of 2. However, it can reduce the clarity of code for some readers and some algorithms. Is bit-shifting really necessary for performance, or can I expect the compiler or VM to notice the case and optimize it (in particular, when the power-of-2 is a literal)? I am mainly interested in the Java and .NET behavior but welcome insights into other language implementations as well.

C# Solutions


Solution 1 - C#

Almost any environment worth its salt will optimize this away for you. And if it doesn't, you've got bigger fish to fry. Seriously, do not waste one more second thinking about this. You will know when you have performance problems. And after you run a profiler, you will know what is causing it, and it should be fairly clear how to fix it.

You will never hear anyone say "my application was too slow, then I started randomly replacing x * 2 with x << 1 and everything was fixed!" Performance problems are generally solved by finding a way to do an order of magnitude less work, not by finding a way to do the same work 1% faster.

Solution 2 - C#

Most compilers today will do more than convert multiply or divide by a power-of-two to shift operations. When optimizing, many compilers can optimize a multiply or divide with a compile time constant even if it's not a power of 2. Often a multiply or divide can be decomposed to a series of shifts and adds, and if that series of operations will be faster than the multiply or divide, the compiler will use it.
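For instance (my own sketch, not part of the original answer), a multiply by 10 can be rewritten as two shifts and an add, and a compiler will pick whichever form its cost model says is cheaper on the target CPU:

// Hypothetical illustration: 10 = 8 + 2, so x * 10 == (x << 3) + (x << 1).
static int timesTenDecomposed(int x) {
    return (x << 3) + (x << 1);
}

static int timesTenPlain(int x) {
    return x * 10; // a compiler may emit either form; the two agree even on overflow
}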

For division by a constant, the compiler can often convert the operation to a multiply by a 'magic number' followed by a shift. This can be a major clock-cycle saver since multiplication is often much faster than a division operation.
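As a sketch of the idea (my example; real compilers derive the constant per divisor and word size), a non-negative 32-bit value can be divided by 3 with one 64-bit multiply and a shift instead of a division instruction:

// 0xAAAAAAABL is ceil(2^33 / 3); for every non-negative 32-bit x,
// (x * 0xAAAAAAABL) >>> 33 equals x / 3, so no division is executed.
static int divideBy3(int x) { // assumes x >= 0
    return (int) ((x * 0xAAAAAAABL) >>> 33);
}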

Henry Warren's book, Hacker's Delight, has a wealth of information on this topic, and it is also covered quite well on the book's companion website.


Anyway, all this boils down to allowing the compiler to take care of the tedious details of micro-optimizations. It's been years since doing your own shifts outsmarted the compiler.

Solution 3 - C#

Humans are wrong in these cases:

99% of the time when they try to second-guess a modern (and any future) compiler.
99.9% of the time when they try to second-guess modern (and future) JITs at the same time.
99.999% of the time when they try to second-guess modern (and future) CPU optimizations.

Program in a way that accurately describes what you want to accomplish, not how to do it. Future versions of the JIT, VM, compiler, and CPU can all be independently improved and optimized. If you specify something so tiny and specific, you lose the benefit of all future optimizations.

Solution 4 - C#

You can almost certainly depend on a multiplication by a literal power of two being optimised to a shift operation. This is one of the first optimisations that students of compiler construction learn. :)

However, I don't think there's any guarantee for this. Your source code should reflect your intent, rather than trying to tell the optimiser what to do. If you're making a quantity larger, use multiplication. If you're moving a bit field from one place to another (think RGB colour manipulation), use a shift operation. Either way, your source code will reflect what you are actually doing.
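For example (my own illustration of that point), both of the following can end up as shift instructions, but they announce different intents to the reader:

// Intent: scaling a quantity -- say "multiply".
static int doubled(int width) {
    return width * 2;
}

// Intent: moving bit fields around -- say "shift",
// e.g. packing 8-bit RGB channels into one int and pulling one back out.
static int packRgb(int r, int g, int b) {
    return (r << 16) | (g << 8) | b;
}

static int greenOf(int rgb) {
    return (rgb >> 8) & 0xFF;
}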

Solution 5 - C#

Note that shifting down and division will (in Java, certainly) give different results for negative, odd numbers.

int a = -7;
System.out.println("Shift: "+(a >> 1));
System.out.println("Div:   "+(a / 2));

Prints:

Shift: -4
Div:   -3

Since Java doesn't have any unsigned integer types, a Java compiler can't simply replace the division with a shift; it has to emit extra code to correct the result for negative values.
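More precisely (my note, not from the original answer), the JIT can still avoid a division instruction for a signed divide by a power of two, but it needs a correction for negative operands, which is exactly why the plain shift above gives a different answer:

// One standard transformation for a signed x / 2:
// add 1 to negative values before shifting so the result rounds toward zero.
static int signedDivideBy2(int x) {
    return (x + (x >>> 31)) >> 1; // x >>> 31 is 1 for negative x, 0 otherwise
}
// signedDivideBy2(-7) == -3, matching -7 / 2, whereas -7 >> 1 == -4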

Solution 6 - C#

On computers I tested, integer divisions are 4 to 10 times slower than other operations.

While compilers may replace divisions by powers of 2 with shifts so that you see no difference, divisions by numbers that are not powers of 2 are significantly slower.

For example, I have a (graphics) program with many, many divisions by 255. Actually my computation is:

r = (((top.R - bottom.R) * alpha + (bottom.R * 255)) * 0x8081) >> 23;

I can assure you that it is a lot faster than my previous computation:

r = ((top.R - bottom.R) * alpha + (bottom.R * 255)) / 255;

So no, compilers cannot do all the optimization tricks for you.
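For the curious (my own check, not part of the original answer), the constant works because 0x8081 / 2^23 is a very close over-approximation of 1 / 255, and the replacement is exact for every value the blend expression above can produce (0 .. 255 * 255):

public class MagicDiv255 {
    public static void main(String[] args) {
        // Verify that the magic multiply-and-shift matches integer division by 255.
        for (int i = 0; i <= 255 * 255; i++) {
            if (((i * 0x8081) >> 23) != i / 255) {
                throw new AssertionError("mismatch at " + i);
            }
        }
        System.out.println("exact for 0..65025");
    }
}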

Solution 7 - C#

I would ask, "What are you doing that it would matter?" First design your code for readability and maintainability. The likelihood that bit shifting versus standard multiplication will make a performance difference is EXTREMELY small.

Solution 8 - C#

It is hardware dependent. If we are talking about a micro-controller or an i386, then shifting might be faster, but, as several answers state, your compiler will usually do the optimization for you.

On modern (Pentium Pro and beyond) hardware, pipelining makes this totally irrelevant, and straying from the beaten path usually means you lose a lot more optimizations than you can gain.

Micro optimizations are not only a waste of your time, they are also extremely difficult to get right.

Solution 9 - C#

If the compiler (compile-time constant) or JIT (runtime constant) knows that the divisor or multiplicand is a power of two and integer arithmetic is being performed, it will convert it to a shift for you.

Solution 10 - C#

According to the results of this microbenchmark, shifting is twice as fast as dividing (Oracle Java 1.7.0_72).

Solution 11 - C#

I am stunned as I just wrote this code and realized that shifting by one is actually slower than multiplying by 2!

(EDIT: changed the code to stop overflowing after Michael Myers' suggestion, but the results are the same! What is wrong here?)

import java.util.Date;

public class Test {
    public static void main(String[] args) {
        // Time 50 million iterations of ten multiplications by 2.
        Date before = new Date();
        for (int j = 1; j < 50000000; j++) {
            int a = 1;
            for (int i = 0; i < 10; i++) {
                a *= 2;
            }
        }
        Date after = new Date();
        System.out.println("Multiplying " + (after.getTime() - before.getTime()) + " milliseconds");

        // Time 50 million iterations of ten left shifts by 1.
        before = new Date();
        for (int j = 1; j < 50000000; j++) {
            int a = 1;
            for (int i = 0; i < 10; i++) {
                a = a << 1;
            }
        }
        after = new Date();
        System.out.println("Shifting " + (after.getTime() - before.getTime()) + " milliseconds");
    }
}

The results are:

>Multiplying 639 milliseconds
>Shifting 718 milliseconds
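A likely explanation (my addition, not part of the original answer) is that the computed values are never used, so the JIT is free to remove or reshape the loops, and there is no warm-up phase. A variant that at least keeps the work observable looks like this; a harness such as JMH would be the proper tool for a real comparison:

public class Test2 {
    public static void main(String[] args) {
        long sink = 0; // accumulate results so the loops cannot be eliminated as dead code

        long start = System.nanoTime();
        for (int j = 1; j < 50000000; j++) {
            int a = 1;
            for (int i = 0; i < 10; i++) {
                a *= 2;
            }
            sink += a;
        }
        System.out.println("Multiplying " + (System.nanoTime() - start) / 1000000 + " ms");

        start = System.nanoTime();
        for (int j = 1; j < 50000000; j++) {
            int a = 1;
            for (int i = 0; i < 10; i++) {
                a = a << 1;
            }
            sink += a;
        }
        System.out.println("Shifting " + (System.nanoTime() - start) / 1000000 + " ms");

        System.out.println(sink); // print the accumulator to keep it alive
    }
}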

Solution 12 - C#

Most compilers will turn multiplication and division into bit shifts when appropriate. It is one of the easiest optimizations to do. So, you should do what is more easily readable and appropriate for the given task.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | user132748 | View Question on Stackoverflow
Solution 1 - C# | Jason Creighton | View Answer on Stackoverflow
Solution 2 - C# | Michael Burr | View Answer on Stackoverflow
Solution 3 - C# | Bill Crim | View Answer on Stackoverflow
Solution 4 - C# | Greg Hewgill | View Answer on Stackoverflow
Solution 5 - C# | izb | View Answer on Stackoverflow
Solution 6 - C# | agnain | View Answer on Stackoverflow
Solution 7 - C# | Chris Brandsma | View Answer on Stackoverflow
Solution 8 - C# | Henk Holterman | View Answer on Stackoverflow
Solution 9 - C# | Sam Harwell | View Answer on Stackoverflow
Solution 10 - C# | ejain | View Answer on Stackoverflow
Solution 11 - C# | Savvas Dalkitsis | View Answer on Stackoverflow
Solution 12 - C# | Polaris878 | View Answer on Stackoverflow