Keras - Difference between categorical_accuracy and sparse_categorical_accuracy

What is the difference between categorical_accuracy and sparse_categorical_accuracy in Keras? The documentation gives no hint about these metrics, and asking Dr. Google did not turn up answers either.

The source code can be found in Keras's metrics module:

def categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())


def sparse_categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.max(y_true, axis=-1),
                          K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                  K.floatx())

So in categorical_accuracy you need to specify your target (y) as a one-hot encoded vector (e.g. in the case of 3 classes, when the true class is the second class, y should be (0, 1, 0)). In sparse_categorical_accuracy you only need to provide the integer index of the true class (in the previous example it would be 1, as class indexing is 0-based).
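
A minimal sketch of which target format pairs with which metric (assuming the tf.keras metric classes, which wrap the functions quoted above; the numbers are made up):

    import numpy as np
    from tensorflow import keras

    # One sample, 3 classes; the model predicts class 1 with the highest score.
    y_pred = np.array([[0.1, 0.7, 0.2]])

    # categorical_accuracy expects a one-hot target ...
    m = keras.metrics.CategoricalAccuracy()
    m.update_state(np.array([[0., 1., 0.]]), y_pred)
    print(float(m.result()))  # 1.0

    # ... while sparse_categorical_accuracy expects the bare class index.
    m = keras.metrics.SparseCategoricalAccuracy()
    m.update_state(np.array([1]), y_pred)
    print(float(m.result()))  # 1.0

As a practical note, if you compile with metrics=['accuracy'], Keras picks categorical_accuracy or sparse_categorical_accuracy for you based on the loss function you use.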


Looking at the source:

def categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())


def sparse_categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.max(y_true, axis=-1),
                          K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                  K.floatx())

categorical_accuracy checks whether the index of the maximal true value equals the index of the maximal predicted value.

sparse_categorical_accuracy checks whether the maximal true value equals the index of the maximal predicted value. Since y_true here holds a single integer label per sample, K.max simply extracts that label before the comparison.
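
As a sanity check, the same comparisons can be replicated with plain NumPy (a sketch of the logic above, not the Keras implementation itself):

    import numpy as np

    y_pred = np.array([[0.1, 0.7, 0.2],   # predicted class: 1
                       [0.8, 0.1, 0.1]])  # predicted class: 0

    # categorical_accuracy: index of the maximal true value vs. index of
    # the maximal predicted value
    y_true_onehot = np.array([[0, 1, 0],
                              [0, 0, 1]])
    print(np.equal(np.argmax(y_true_onehot, axis=-1),
                   np.argmax(y_pred, axis=-1)).astype(float))  # [1. 0.]

    # sparse_categorical_accuracy: the (maximal) true value itself vs.
    # index of the maximal predicted value; max over a single label per
    # sample just returns that label
    y_true_sparse = np.array([[1],
                              [2]])
    print(np.equal(np.max(y_true_sparse, axis=-1),
                   np.argmax(y_pred, axis=-1)).astype(float))  # [1. 0.]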

Following Marcin's answer above, categorical_accuracy corresponds to a one-hot encoded y_true, while sparse_categorical_accuracy corresponds to an integer-encoded y_true.