Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

I am new to TensorFlow. I have recently installed it (Windows CPU version) and received the following message:

Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2

Then when I tried to run

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>> sess.close()

(which I found through https://github.com/tensorflow/tensorflow)

I received the following message:

2017-11-02 01:56:21.698935: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

But when I ran

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

it ran as it should and output Hello, TensorFlow!, which suggests that the installation was indeed successful but that something else is wrong.

Do you know what the problem is and how to fix it?


Solution 1:

What is this warning about?

Modern CPUs provide a lot of low-level instructions beyond the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:

Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

In particular, AVX and the closely related FMA extension introduce fused multiply-add (FMA) operations, which speed up linear algebra computations, namely dot products, matrix multiplication, convolution, etc. Almost all machine-learning training involves a great deal of these operations, so it will run faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!).

I'd like to stress here: this is all about the CPU only.
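
To get a feel for how much these instructions matter, here is a minimal, unscientific sketch (TF 1.x API) that times matrix multiplications on the CPU; running it under the stock pip wheel and under an AVX/FMA-enabled build of the same version shows the kind of difference the warning is talking about:

import time
import tensorflow as tf

# Rough benchmark: repeatedly run a large CPU matrix multiply.
a = tf.random_normal([2000, 2000])
b = tf.random_normal([2000, 2000])
product = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(product)                      # warm-up run
    start = time.time()
    for _ in range(10):
        sess.run(product)
    print('10 matmuls: %.3f s' % (time.time() - start))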

Why isn't it used then?

Because the default TensorFlow distribution is built without CPU extensions such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that, even with these extensions, a CPU is still a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.

What should you do?

If you have a GPU, you shouldn't care about AVX support, because the most expensive ops will be dispatched to the GPU (unless explicitly set not to). In this case, you can simply ignore this warning by

# Just disables the warning, doesn't take advantage of AVX/FMA to run faster
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

... or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. TensorFlow will still work fine either way, but you won't see these annoying warnings.
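
Note that the variable is usually set before importing TensorFlow so it takes effect before any messages are emitted; a complete Python-side sketch looks like this:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # hide INFO and WARNING messages

import tensorflow as tf                    # imported after the variable is set

sess = tf.Session()                        # the AVX/AVX2 message is suppressed
print(sess.run(tf.constant('still works')))
sess.close()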


If you don't have a GPU and want to utilize the CPU as much as possible, you should build TensorFlow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled if your CPU supports them. It has been discussed in this question and also in this GitHub issue. TensorFlow uses Google's Bazel build system, and building it is not trivial, but it is certainly doable. After this, not only will the warning disappear, TensorFlow performance should also improve.
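
For reference, on Linux the build boils down to something like the following (a sketch only; answer the ./configure prompts and adjust the --copt flags to the extensions your CPU actually supports):

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
./configure
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl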

Solution 2:

Update the TensorFlow binary for your CPU and OS using this command:

pip install --ignore-installed --upgrade "Download URL"

The download URL of the .whl file can be found here:

https://github.com/lakshayg/tensorflow-build

Solution 3:

CPU optimization with GPU

You can get performance gains by building TensorFlow from source even if you have a GPU and use it for training and inference. The reason is that some TF operations only have a CPU implementation and cannot run on your GPU.
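
You can see which operations end up on the CPU by turning on device placement logging (TF 1.x API):

import tensorflow as tf

# Log where each op is placed; ops without a GPU kernel fall back to /cpu:0.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.random_normal([512, 512])
    b = tf.random_normal([512, 512])
    sess.run(tf.matmul(a, b))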

Also, there are some performance enhancement tips that make good use of your CPU. TensorFlow's performance guide recommends the following:

Placing input pipeline operations on the CPU can significantly improve performance. Utilizing the CPU for the input pipeline frees the GPU to focus on training.

For best performance, you should write your code so that your CPU and GPU work in tandem, rather than dumping everything on your GPU if you have one. Having your TensorFlow binaries optimized for your CPU could pay off in hours of saved running time, and you only have to do it once.
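
A minimal sketch of that idea, keeping the input pipeline on the CPU while the model itself runs on the GPU (TF 1.x tf.data and tf.layers names assumed; soft placement is enabled as a fallback in case no GPU is available):

import tensorflow as tf

# Build the input pipeline on the CPU so the GPU stays free for the model.
with tf.device('/cpu:0'):
    dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([1000, 64]))
    dataset = dataset.batch(32).prefetch(1)
    features = dataset.make_one_shot_iterator().get_next()

# Place the (toy) model on the GPU.
with tf.device('/gpu:0'):
    logits = tf.layers.dense(features, 10)

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(logits).shape)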

Solution 4:

For Windows, you can check out the official Intel MKL optimization for TensorFlow wheels, which are compiled with AVX2. This solution sped up my inference ~3x:

conda install tensorflow-mkl

Solution 5:

For Windows (thanks to the owner fo40225), go here: https://github.com/fo40225/tensorflow-windows-wheel and fetch the URL for your environment based on the combination of "tf + python + cpu_instruction_extension". Then use this command to install:

pip install --ignore-installed --upgrade "URL"

If you encounter the "File is not a zip file" error, download the .whl to your local computer and use this command to install:

pip install --ignore-installed --upgrade /path/target.whl