'invalid value encountered in double_scalars' warning, possibly numpy

As I run my code, I sporadically get these warnings, always in groups of four. I have tried to locate the source by placing debug messages before and after certain statements to pinpoint their origin.

Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars

Is this a Numpy warning, and what is a double scalar?

From Numpy I use

min(), argmin(), mean() and random.randn()

I also use Matplotlib.


In my case, I found out it was division by zero.
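
For reference, a minimal way to reproduce this is 0/0 on NumPy scalars (the exact wording of the message varies between NumPy versions):

    import numpy as np

    x = np.float64(0.0)
    # 0.0 / 0.0 is an "invalid" floating-point operation: NumPy emits the
    # warning and the result of the division is nan.
    print(x / x)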


It looks like a floating-point calculation error. Check the numpy.seterr function: turning the "invalid" error from a warning into an exception will show you exactly where it happens.
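
A short sketch of that approach: with invalid='raise', the operation raises a FloatingPointError instead of warning, so the traceback points at the offending line.

    import numpy as np

    # Escalate "invalid value" floating-point events from a warning to an exception.
    np.seterr(invalid='raise')

    # Any invalid operation now raises FloatingPointError with a full traceback,
    # e.g. the square root of a negative number:
    np.sqrt(np.float64(-1.0))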


Sometimes NaNs or null values in the data will generate this warning with Numpy. If you are ingesting data from, say, a CSV file, and then operating on it with numpy arrays, the problem could have originated in your data ingest. You could try feeding your code a small set of data with known values and see if you get the same result.
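
A quick sanity check along those lines (the file name here is only a placeholder) is to look for NaNs right after loading, before any statistics are computed:

    import numpy as np

    # genfromtxt turns blank or unparseable CSV fields into nan.
    # 'measurements.csv' is a placeholder for your own input file.
    data = np.genfromtxt('measurements.csv', delimiter=',')

    # Flag any NaN introduced during ingest before it reaches mean()/argmin()/etc.
    if np.isnan(data).any():
        print("NaN found at indices:", np.argwhere(np.isnan(data)))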


A zero-size array passed to numpy.mean raises this warning (as indicated in several comments).

For some other candidates:

  • median also raises this warning on a zero-sized array.

Other candidates do not raise this warning (see the sketch after this list):

  • min and argmin both raise a ValueError on an empty array
  • randn takes *args; calling randn(*[]) returns a single random number
  • std and var return nan on an empty array
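
A sketch reproducing those cases (the exact warning text differs between NumPy versions):

    import numpy as np

    empty = np.array([])

    np.mean(empty)         # nan, emits the "invalid value" warning
    np.median(empty)       # nan, same warning
    # np.min(empty)        # would raise ValueError: zero-size array to reduction operation
    # np.argmin(empty)     # would raise ValueError as well
    np.random.randn(*[])   # a single random float, no warning
    np.std(empty)          # nan
    np.var(empty)          # nan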

I ran into a similar problem - "invalid value encountered in ...". After spending a lot of time trying to figure out what was causing this error, I believe in my case it was due to NaN values in my DataFrame. Check out working with missing data in pandas.

    None == None        # True
    np.nan == np.nan    # False

Since NaN is not equal to NaN, a stray NaN in the data propagates through arithmetic operations like division and multiplication and can end up triggering this warning.

A couple of things you can do to avoid this problem (a short sketch follows the list):

  1. Use pd.set_option('display.float_format', lambda x: '%.3f' % x) to limit the number of decimals displayed, so an infinitesimally small number does not trigger a similar problem.

  2. Use df.round() to round the numbers so pandas drops the remaining digits from the analysis. And most importantly,

  3. Set NaN to zero with df = df.fillna(0). Be careful: filling NaN with zero may not be appropriate for your data set, because each such record is then treated as a real zero, so the N used in mean, std, etc. also changes.
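
A minimal sketch of those three steps (the DataFrame below is a made-up example):

    import numpy as np
    import pandas as pd

    # 1. Only show three decimals when printing floats.
    pd.set_option('display.float_format', lambda x: '%.3f' % x)

    df = pd.DataFrame({'x': [1.23456, np.nan, 2.34567]})

    df = df.round(3)     # 2. round the stored values
    df = df.fillna(0)    # 3. replace NaN with 0 (the 0 now counts toward N)

    print(df['x'].mean())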