How to truncate input in the Huggingface pipeline?

This should work:

classifier(text, padding=True, truncation=True)

If it doesn't, try loading the tokenizer with an explicit maximum length:

tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=512)
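Conceptually, truncation=True just clips the token sequence so it never exceeds the tokenizer's model_max_length. A minimal pure-Python sketch of that behavior (truncate_ids is a toy stand-in, not a transformers function):

```python
def truncate_ids(token_ids, model_max_length=512):
    """Clip a token-id sequence to at most model_max_length tokens."""
    return token_ids[:model_max_length]

ids = list(range(1000))       # pretend these are token ids from a long document
clipped = truncate_ids(ids)   # only the first 512 ids survive
print(len(clipped))           # → 512
```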

You can also pass tokenizer_kwargs at inference time:

model_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0, return_all_scores=True)

tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}

prediction = model_pipeline('sample text to predict',**tokenizer_kwargs)
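Putting the pieces together, a hedged end-to-end sketch (the checkpoint name "distilbert-base-uncased-finetuned-sst-2-english" is only an example, substitute your own model; device and return_all_scores are left at their defaults here). Note that return_tensors should not be passed in the call-time kwargs: the pipeline already sets it internally when it tokenizes, so supplying it again raises a duplicate-keyword error.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# model_max_length caps the tokenizer's output length
tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=512)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

model_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)

# forwarded to the tokenizer at call time; long inputs get clipped to 512 tokens
tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}

long_text = "sample text to predict " * 500  # far longer than 512 tokens
prediction = model_pipeline(long_text, **tokenizer_kwargs)
print(prediction)  # a list of {'label': ..., 'score': ...} dicts
```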

For more details, you can check this link.