sentence transformer using huggingface/transformers pre-trained model vs SentenceTransformer

Solution 1:

You are comparing two different things:

training_stsbenchmark.py - This example shows how to create a SentenceTransformer model from scratch by using a pre-trained transformer model together with a pooling layer.

In other words, you are creating your own SentenceTransformer model and training it on your own data, which amounts to fine-tuning. A minimal sketch of that setup follows.
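Roughly, building such a model from a pre-trained transformer plus a pooling layer looks like this (a minimal sketch using the sentence_transformers API; the model name bert-base-uncased and max_seq_length are placeholder choices, not the ones from the example script):

```python
from sentence_transformers import SentenceTransformer, models

# Load a plain pre-trained transformer from Hugging Face as the word-embedding module
word_embedding_model = models.Transformer("bert-base-uncased", max_seq_length=256)

# Add a mean-pooling layer that turns token embeddings into one sentence embedding
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())

# Stack the two modules into a SentenceTransformer, ready to be fine-tuned
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```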

training_stsbenchmark_continue_training.py - This example shows how to continue training a previously created and trained SentenceTransformer model on STS data.

In that example, they load a model trained on NLI data.
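Continuing training then looks roughly like this (again a sketch, not the exact script: the model name bert-base-nli-mean-tokens and the toy sentence pairs are placeholders, and model.fit is the classic sentence-transformers training entry point):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Load a SentenceTransformer that was already trained on NLI data
model = SentenceTransformer("bert-base-nli-mean-tokens")

# A couple of toy STS-style pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A plane is taking off."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Regression loss on the cosine similarity of the two sentence embeddings
train_loss = losses.CosineSimilarityLoss(model)

# Continue training the already-trained model on the new data
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```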

So, to answer "won't that always be better than the first method?":

It depends on your final results. Try both methods and check for yourself which one delivers better cross-validation results.
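One way to compare the two models is to evaluate both on the same held-out STS split with the built-in similarity evaluator (a sketch; sentences1, sentences2, and gold_scores stand in for your own dev data):

```python
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Held-out sentence pairs and their gold similarity scores (placeholders)
sentences1 = ["A man is eating food."]
sentences2 = ["A man is eating a meal."]
gold_scores = [0.9]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)

# Reports how well cosine similarities of the embeddings correlate with the
# gold scores (older versions return a single number, newer ones a dict)
print(evaluator(model))
```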