Should I use multiplication or division?

Solution 1:

Python:

time python -c 'for i in xrange(int(1e8)): t=12341234234.234 / 2.0'
real    0m26.676s
user    0m25.154s
sys     0m0.076s

time python -c 'for i in xrange(int(1e8)): t=12341234234.234 * 0.5'
real    0m17.932s
user    0m16.481s
sys     0m0.048s

=> multiplication is about 33% faster

Lua:

time lua -e 'for i=1,1e8 do t=12341234234.234 / 2.0 end'
real    0m7.956s
user    0m7.332s
sys     0m0.032s

time lua -e 'for i=1,1e8 do t=12341234234.234 * 0.5 end'
real    0m7.997s
user    0m7.516s
sys     0m0.036s

=> no real difference

LuaJIT:

time luajit -O -e 'for i=1,1e8 do t=12341234234.234 / 2.0 end'
real    0m1.921s
user    0m1.668s
sys     0m0.004s

time luajit -O -e 'for i=1,1e8 do t=12341234234.234 * 0.5 end'
real    0m1.843s
user    0m1.676s
sys     0m0.000s

=> multiplication is only about 4% faster

Conclusions: in (C)Python it's faster to multiply than to divide, but as you get closer to the CPU with more advanced VMs or JITs, the advantage disappears. It's quite possible that a future Python VM would make it irrelevant.
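
The shell one-liners above use Python 2 (xrange). If you want to rerun the comparison on Python 3, a rough equivalent using the standard timeit module might look like the sketch below; the absolute numbers will of course differ by machine and interpreter version.

import timeit

# Python 3 rewrite of the benchmark above; a variable operand is used in the
# statement so the compiler cannot constant-fold the expression away.
div = timeit.timeit("t = x / 2.0", setup="x = 12341234234.234", number=10**8)
mul = timeit.timeit("t = x * 0.5", setup="x = 12341234234.234", number=10**8)
print(f"division:       {div:.3f}s")
print(f"multiplication: {mul:.3f}s")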

Solution 2:

Always use whatever is clearest. Anything else you do is an attempt to outsmart the compiler. If the compiler is at all intelligent, it will do its best to optimize the result, but nothing can make the next guy not hate you for your crappy bit-shifting solution. (I love bit manipulation, by the way; it's fun. But fun != readable.)

Premature optimization is the root of all evil. Always remember the three rules of optimization!

  1. Don't optimize.
  2. If you are an expert, see rule #1.
  3. If you are an expert and can justify the need, then use the following procedure:

    • Code it unoptimized.
    • Determine how fast is "fast enough" -- note which user requirement/story requires that metric.
    • Write a speed test (a minimal sketch follows this list).
    • Test the existing code -- if it's fast enough, you're done.
    • Recode it optimized.
    • Test the optimized code. If it doesn't meet the metric, throw it away and keep the original.
    • If it meets the metric, keep the optimized code, but leave the original in as comments.
Also, doing things like removing inner loops when they aren't required, or choosing a linked list over an array for an insertion sort, is not optimization, just programming.

Solution 3:

I think this is getting so nitpicky that you would be better off doing whatever makes the code more readable. Unless you perform the operations thousands, if not millions, of times, I doubt anyone will ever notice the difference.

If you really have to make the choice, benchmarking is the only way to go. Find which function(s) are giving you problems, then find out where in the function the problems occur, and fix those sections. However, I still doubt that a single mathematical operation (even one repeated many, many times) would be the cause of any bottleneck.
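
If you do reach that point, the standard library's cProfile is usually enough to find the hot function before worrying about individual operations. A minimal sketch, where hot_loop is just a stand-in for your own suspect code:

import cProfile

def hot_loop(n=10**6):
    # Stand-in for whatever code is suspected of being slow.
    t = 0.0
    for i in range(n):
        t += 12341234234.234 / 2.0
    return t

# Prints per-function call counts and cumulative times, showing where the
# time actually goes before any micro-optimization.
cProfile.run("hot_loop()")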

Solution 4:

Multiplication is faster, division is more accurate. You'll lose some precision if the divisor isn't a power of two:

y = x / 3.0;
y = x * 0.333333;  // how many 3's should there be, and how will the compiler round?

Even if you let the compiler figure out the inverted constant to perfect precision, the answer can still be different.

x = 100.0;
x / 3.0 == x * (1.0/3.0)  // is false in the test I just performed
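
The same comparison is easy to reproduce in Python, whose floats are IEEE 754 doubles; on a typical setup the two results differ in the last bit:

x = 100.0

print(x / 3.0)                      # typically 33.333333333333336
print(x * (1.0 / 3.0))              # typically 33.33333333333333
print(x / 3.0 == x * (1.0 / 3.0))   # False with IEEE 754 doubles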

The speed issue is only likely to matter in C/C++ or JIT languages, and even then only if the operation is in a loop at a bottleneck.

Solution 5:

If you want to optimize your code but still be clear, try this:

y = x * (1.0 / 2.0);

The compiler should be able to do the divide at compile-time, so you get a multiply at run-time. I would expect the precision to be the same as in the y = x / 2.0 case.
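
This answer is written with a C-style compiler in mind, but the same constant-folding idea can be checked in CPython with the dis module; in recent CPython versions the folded constant 0.5 shows up directly in the bytecode:

import dis

def halve(x):
    return x * (1.0 / 2.0)  # the (1.0 / 2.0) should be folded to 0.5 at compile time

# In recent CPython versions the disassembly shows LOAD_CONST 0.5,
# i.e. the division has already been done by the compiler.
dis.dis(halve)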

Where this may matter a LOT is on embedded processors without hardware floating point, where floating-point arithmetic has to be emulated in software.