How does Keras handle multilabel classification?

Tags: Python, Neural Network, Keras, Multilabel Classification

Python Problem Overview


I am unsure how to interpret the default behavior of Keras in the following situation:

My Y (ground truth) was set up using scikit-learn's MultiLabelBinarizer().

Therefore, to give a random example, one row of my y matrix is multi-hot encoded like this: [0,0,0,1,0,1,0,0,0,0,1].

So I have 11 classes that could be predicted, and more than one can be true; hence the multilabel nature of the problem. There are three labels for this particular sample.
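For reference, here is a minimal sketch of how such a multi-hot row comes out of scikit-learn (the label sets below are hypothetical, chosen just to reproduce the shape of the example above; this code is not part of the original question):

from sklearn.preprocessing import MultiLabelBinarizer

# hypothetical raw labels: each sample is a set of class indices out of 11 classes
y_raw = [{3, 5, 10}, {0}, {1, 2}]

mlb = MultiLabelBinarizer(classes=list(range(11)))
y = mlb.fit_transform(y_raw)
print(y[0])  # [0 0 0 1 0 1 0 0 0 0 1] -- three labels active at once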

I train the model as I would for a non-multilabel problem (business as usual) and I get no errors.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy',])

model.fit(X_train, y_train, epochs=5, batch_size=2000)

score = model.evaluate(X_test, y_test, batch_size=2000)
score

What does Keras do when it encounters my y_train and sees that it is "multi" one-hot encoded, meaning there is more than one 'one' present in each row of y_train? Basically, does Keras automatically perform multilabel classification? Any differences in the interpretation of the scoring metrics?

Python Solutions


Solution 1 - Python

In short

Don't use softmax.

Use sigmoid for activation of your output layer.

Use binary_crossentropy for loss function.

Use predict for evaluation, thresholding each label's probability yourself (e.g. at 0.5).

Why

With softmax, increasing the score for one label lowers all the others, because the outputs form a probability distribution that sums to 1. You don't want that when several labels can be true at once; sigmoid scores each label independently.
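To see this concretely, here is a small numpy comparison (an illustration, not part of the original answer):

import numpy as np

logits = np.array([2.0, 2.0, -1.0])  # raw scores for three labels

# softmax: the scores compete, the outputs sum to 1
softmax = np.exp(logits) / np.exp(logits).sum()
print(softmax.round(3))  # [0.488 0.488 0.024]

# sigmoid: each label is scored independently, so several can exceed 0.5
sigmoid = 1 / (1 + np.exp(-logits))
print(sigmoid.round(3))  # [0.881 0.881 0.269]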

Complete Code

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='sigmoid'))  # sigmoid: one independent probability per label

sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',  # one binary cross-entropy term per label
              optimizer=sgd)

model.fit(X_train, y_train, epochs=5, batch_size=2000)

preds = model.predict(X_test)
# threshold each label's probability independently at 0.5
preds[preds >= 0.5] = 1
preds[preds < 0.5] = 0
# score = compare preds and y_test
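If you want actual numbers for that comparison, scikit-learn's multilabel metrics are one option (this part is my addition, not from the original answer):

from sklearn.metrics import f1_score, hamming_loss, accuracy_score

print('micro F1:    ', f1_score(y_test, preds, average='micro'))
print('hamming loss:', hamming_loss(y_test, preds))    # fraction of individual label errors
print('subset acc:  ', accuracy_score(y_test, preds))  # counts a row only if all 11 labels match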

Solution 2 - Python

Answer from Keras Documentation

I am quoting from the Keras documentation itself.

Their output layer is a Dense layer with sigmoid activation, meaning they also treat multi-label classification as multiple binary classifications, trained with binary cross-entropy loss.

The following is the model created in the Keras documentation:

shallow_mlp_model = keras.Sequential(
    [
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(lookup.vocabulary_size(), activation="sigmoid"),
        # More on why "sigmoid" has been used here in a moment.
    ]
)

Keras doc link: https://keras.io/examples/nlp/multi_label_classification/
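Regarding the last part of the question (interpretation of scoring metrics): in a multilabel setting, per-label binary accuracy and exact-match accuracy diverge, so be explicit about which one you report. A small hand-rolled illustration (my addition, with made-up arrays):

import numpy as np

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],   # one of three labels wrong
                   [0, 1, 0]])  # all labels correct

per_label_acc = (y_true == y_pred).mean()            # 5/6 ≈ 0.83
exact_match = (y_true == y_pred).all(axis=1).mean()  # 1/2 = 0.50
print(per_label_acc, exact_match)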

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author | Original Content on Stackoverflow
Question            | user798719      | View Question on Stackoverflow
Solution 1 - Python | YLJ             | View Answer on Stackoverflow
Solution 2 - Python | shantanu pathak | View Answer on Stackoverflow