How to use a custom loss function with Neural Compressor for distillation

Solution 1:

In the Neural Compressor source there is a class called PyTorchKnowledgeDistillationLoss, which has SoftCrossEntropy and KullbackLeiblerDivergence as member functions. If you want to use your own custom loss function, add a new member function to the PyTorchKnowledgeDistillationLoss class that takes logits and targets as parameters.

For example:

class PyTorchKnowledgeDistillationLoss(KnowledgeDistillationLoss):
    ...
    def customLossFunction(self, logits, targets):
        # calculate the custom loss here
        return custom_loss
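
As an illustration of what the placeholder body could be, here is a soft cross-entropy between the student's logits and the teacher's logits. This is only a sketch: it assumes both logits and targets are [batch, num_classes] tensors (as with the existing teacher-student term), and you would replace the body with whatever loss you actually need.

import torch.nn.functional as F

class PyTorchKnowledgeDistillationLoss(KnowledgeDistillationLoss):
    ...
    def customLossFunction(self, logits, targets):
        # Illustrative body: soft cross-entropy between student logits and
        # teacher logits; swap in your own loss computation.
        log_probs = F.log_softmax(logits, dim=-1)
        target_probs = F.softmax(targets, dim=-1)
        return -(target_probs * log_probs).sum(dim=-1).mean()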

Then, in the init function (constructor) of PyTorchKnowledgeDistillationLoss, assign:

self.teacher_student_loss = self.customLossFunction
self.student_targets_loss = self.customLossFunction
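
For reference, a minimal sketch of where those assignments sit in the constructor is shown below. The signature and super() call are placeholders for whatever the existing __init__ already contains; only the last two assignments are the addition, and overriding both terms is an assumption, you may only need one of them.

class PyTorchKnowledgeDistillationLoss(KnowledgeDistillationLoss):
    def __init__(self, *args, **kwargs):  # placeholder for the existing signature
        super().__init__(*args, **kwargs)  # keep the existing initialisation
        # Route both distillation loss terms through the custom function.
        self.teacher_student_loss = self.customLossFunction
        self.student_targets_loss = self.customLossFunction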