Why doesn't the trainer report evaluation metrics during training in the tutorial?
Solution 1:
The evaluate function returns the metrics; it doesn't print them. Does
metrics = trainer.evaluate()
print(metrics)
work? Also, the message is saying you're using the base BERT model, which was pretrained as a language model, not for sequence classification. Its classification head therefore has newly initialized (random) weights, and the model should be fine-tuned before you evaluate it.
Solution 2:
Why are you calling trainer.evaluate()? That only runs validation on the validation set; it doesn't train anything. If you want to fine-tune the model, you need to call:
trainer.train()
Solution 3:
I think you need to tell the trainer how often to evaluate performance by setting evaluation_strategy
and eval_steps
in TrainingArguments
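A minimal sketch of what that configuration could look like (the output directory and step counts are illustrative values, not recommendations):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",        # illustrative path for checkpoints
    evaluation_strategy="steps",   # run evaluation during training ("no", "steps", or "epoch")
    eval_steps=500,                # evaluate on the validation set every 500 training steps
    logging_steps=500,             # log the training loss at the same interval
)
```

With evaluation_strategy="epoch" instead, the trainer evaluates once at the end of each epoch and eval_steps is not needed.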