librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)

Could somebody please help me solve this?

I was following this tutorial: https://data-flair.training/blogs/python-mini-project-speech-emotion-recognition/

I used their dataset, which is the RAVDESS dataset with a lowered sample rate. I can train on that data easily. But when I use the original data from here: https://zenodo.org/record/1188976

Just "Audio_Speech_Actors_01-24.zip" and try to train model it gives me below error:

Traceback (most recent call last):
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 64, in <module>
    x_train, x_test, y_train, y_test = load_data(test_size=0.20)
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 57, in load_data
    feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 32, in extract_feature
    stft = np.abs(librosa.stft(X))
  File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\core\spectrum.py", line 215, in stft
    util.valid_audio(y)
  File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\util\utils.py", line 268, in valid_audio
    'ndim={:d}, shape={}'.format(y.ndim, y.shape))
librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)

The tutorial trains on the same dataset, except that they lowered the sample rate. Why doesn't it run on the original one?

Does it have anything to do with this line in the code:

X = sound_file.read(dtype="float32")

Out of curiosity, I also tried to predict from an .mp3 file, and it resulted in an error. Then I converted that .mp3 file to .wav and tried again, but it still gives the error in the title.

How can I solve this error and make it train on the original data? If it trains on the original data, I think it will also be able to predict on the .mp3-to-.wav converted file.
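
To illustrate what the error is complaining about: soundfile returns a (frames, channels) array for a two-channel WAV, while librosa.stft() expects a 1-D mono signal. A minimal check on one of the original files (the exact path here is just an example):

import soundfile

# Example path to one of the original (stereo) RAVDESS files -- adjust to your layout
with soundfile.SoundFile("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_01\\03-01-01-01-01-01-01.wav") as f:
    X = f.read(dtype="float32")

print(X.shape)  # stereo gives something like (172972, 2); a mono file gives a 1-D shape like (172972,)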

Below is the code that I am using:

import librosa
import soundfile
import os
import glob
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# DataFlair - Emotions in the RAVDESS dataset
emotions = {
    '01': 'neutral',
    '02': 'calm',
    '03': 'happy',
    '04': 'sad',
    '05': 'angry',
    '06': 'fearful',
    '07': 'disgust',
    '08': 'surprised'
}
# DataFlair - Emotions to observe
observed_emotions = ['calm', 'happy', 'fearful', 'disgust']


# DataFlair - Extract features (mfcc, chroma, mel) from a sound file
def extract_feature(file_name, mfcc, chroma, mel):
    with soundfile.SoundFile(file_name) as sound_file:
        X = sound_file.read(dtype="float32")
        sample_rate = sound_file.samplerate
        if chroma:
            stft = np.abs(librosa.stft(X))
        result = np.array([])
        if mfcc:
            mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
            result = np.hstack((result, mfccs))
        if chroma:
            chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)
            result = np.hstack((result, chroma))
        if mel:
            mel = np.mean(librosa.feature.melspectrogram(y=X, sr=sample_rate).T, axis=0)
            result = np.hstack((result, mel))
    return result


# DataFlair - Load the data and extract features for each sound file
def load_data(test_size=0.2):
    x, y = [], []
    for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
        # for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\*.wav"):

        file_name = os.path.basename(file)
        emotion = emotions[file_name.split("-")[2]]

        if emotion not in observed_emotions:
            continue
        feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
        x.append(feature)
        y.append(emotion)
    return train_test_split(np.array(x), y, test_size=test_size, random_state=9)


# DataFlair - Split the dataset
x_train, x_test, y_train, y_test = load_data(test_size=0.20)

# DataFlair - Get the shape of the training and testing datasets
# print((x_train.shape[0], x_test.shape[0]))

# DataFlair - Get the number of features extracted
# print(f'Features extracted: {x_train.shape[1]}')

# DataFlair - Initialize the Multi Layer Perceptron Classifier
model = MLPClassifier(alpha=0.01, batch_size=256, epsilon=1e-08, hidden_layer_sizes=(300,), learning_rate='adaptive',
                      max_iter=500)

# DataFlair - Train the model
model.fit(x_train, y_train)

# print(model.fit(x_train, y_train))

# DataFlair - Predict for the test set
y_pred = model.predict(x_test)
# print("This is y_pred: ", y_pred)


# DataFlair - Calculate the accuracy of our model
accuracy = accuracy_score(y_true=y_test, y_pred=y_pred)

# DataFlair - Print the accuracy
# print("Accuracy: {:.2f}%".format(accuracy * 100))

# Predicting random files
tar_file = "C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\pls-hold-while-try.wav"
new_feature = extract_feature(tar_file, mfcc=True, chroma=True, mel=True)
data = []
data.append(new_feature)
data = np.array(data)
z_pred = model.predict(data)
print("This is output: ", z_pred)

The dataset provided by the tutorial for training was this: https://drive.google.com/file/d/1wWsrN2Ep7x6lWqOXfr4rpKGYrJhWc8z7/view

The original dataset (which isn't working with the program) can be found here: https://zenodo.org/record/1188976 (the Audio_Speech_Actors one).

When predicting random files, if you put in any .wav file with speech in it, it results in an error. And if you use a text-to-speech converter, take the .wav, and pass it here, it always says "fearful". I have tried converting an .mp3 to .wav to get it to work, but no, still an error.

Has anyone figured out how I can get this working?


I've just run into the same problem. For anyone reading this who prefers not to delete the stereo files, it is possible to convert them to mono using the command-line tool ffmpeg:

ffmpeg -i stereo_file_name.wav -ac 1 mono_file_name.wav 

Links: ffmpeg, related Stack Overflow post.
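
If you want to convert the whole dataset at once, a small Python loop over the same Dataset\Actor_* layout from the question should do it (just a sketch; it assumes ffmpeg is on your PATH and writes new _mono files so the originals are kept):

import glob
import subprocess

for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
    mono_file = file.replace(".wav", "_mono.wav")
    # -ac 1 downmixes to a single channel; -y overwrites the output file if it already exists
    subprocess.run(["ffmpeg", "-y", "-i", file, "-ac", "1", mono_file], check=True)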


You can also do the conversion in Python with pydub, inside load_data:

from pydub import AudioSegment

def load_data(test_size=0.2):
    x, y = [], []
    for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
        file_name = os.path.basename(file)
        # converting stereo audio to mono (this overwrites the original file)
        sound = AudioSegment.from_wav(file)
        sound = sound.set_channels(1)
        sound.export(file, format="wav")
        emotion = emotions[file_name.split("-")[2]]
        if emotion not in observed_emotions:
            continue
        feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
        x.append(feature)
        y.append(emotion)
    return train_test_split(np.array(x), y, test_size=test_size, random_state=9)
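
If you'd rather not rewrite the .wav files at all, another option (just a sketch) is to downmix to mono with numpy at the top of extract_feature, right after reading; numpy and soundfile are already imported in the question's script:

with soundfile.SoundFile(file_name) as sound_file:
    X = sound_file.read(dtype="float32")
    # soundfile returns shape (frames, channels) for stereo files;
    # average the channels so librosa gets the 1-D mono signal it expects
    if X.ndim > 1:
        X = np.mean(X, axis=1)
    sample_rate = sound_file.samplerate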