Fine-tuning through training

Introduction to LLMs in Python

Jasmin Ludolf

Senior Data Science Content Developer, DataCamp

TrainingArguments

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./finetuned",
    evaluation_strategy="epoch",
    num_train_epochs=3,
    learning_rate=2e-5,
)
  • TrainingArguments(): customize the training settings
  • See the documentation for the full list of parameters
  • Values depend on the use case, the dataset, and speed requirements
  • output_dir: output directory
  • evaluation_strategy (eval_strategy in newer transformers releases): when to evaluate: "epoch", "steps", or "no" (see the sketch after this list)
  • num_train_epochs: number of epochs
  • learning_rate: for the optimizer
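
As a variation (illustrative values, not from the slides), the same arguments can schedule evaluation every fixed number of steps instead of once per epoch:

from transformers import TrainingArguments

# Illustrative values: evaluate and log every 500 optimizer steps
training_args = TrainingArguments(
    output_dir="./finetuned",
    evaluation_strategy="steps",  # eval_strategy in newer transformers releases
    eval_steps=500,
    logging_steps=500,
    num_train_epochs=3,
    learning_rate=2e-5,
)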

TrainingArguments

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./finetuned",
    evaluation_strategy="epoch",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
)
  • per_device_train_batch_size and per_device_eval_batch_size: batch size per device (sketched below)
  • weight_decay: applied in the optimizer to prevent overfitting
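
A minimal sketch (assumed single-device numbers, not from the slides) of how the per-device setting translates into an effective batch size per optimizer step:

# Hypothetical setup: one GPU, no gradient accumulation
per_device_train_batch_size = 8
num_devices = 1
gradient_accumulation_steps = 1  # TrainingArguments default

effective_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
print(effective_batch_size)  # 8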

The Trainer class

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(...)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_training_data,
    eval_dataset=tokenized_test_data,
    tokenizer=tokenizer
)
trainer.train()
  • model: the model to fine-tune
  • args: the training arguments
  • train_dataset: the data for training
  • eval_dataset: the data for evaluation
  • tokenizer: the tokenizer

Number of training loops: determined by the dataset size, num_train_epochs, per_device_train_batch_size, and per_device_eval_batch_size (worked example below)
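
A quick back-of-the-envelope check (assuming roughly 1,563 training examples, which matches the throughput figures in the Trainer output below) reproduces the 588 optimizer steps reported there:

import math

# Assumed dataset size; batch size and epochs come from the TrainingArguments above
dataset_size = 1563
per_device_train_batch_size = 8
num_train_epochs = 3

steps_per_epoch = math.ceil(dataset_size / per_device_train_batch_size)  # 196
total_steps = steps_per_epoch * num_train_epochs                         # 588
print(total_steps)  # matches global_step=588 in the output below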


Trainer output

{'eval_loss': 0.398524671792984, 'eval_runtime': 33.3145, 'eval_samples_per_second': 46.916, 
'eval_steps_per_second': 5.883, 'epoch': 1.0}
{'eval_loss': 0.1745782047510147, 'eval_runtime': 33.5202, 'eval_samples_per_second': 46.629, 
'eval_steps_per_second': 5.847, 'epoch': 2.0}
{'loss': 0.4272, 'grad_norm': 15.558795928955078, 'learning_rate': 2.993197278911565e-06, 
'epoch': 2.5510204081632653}
{'eval_loss': 0.12216147780418396, 'eval_runtime': 33.2238, 'eval_samples_per_second': 47.045, 
'eval_steps_per_second': 5.899, 'epoch': 3.0}
{'train_runtime': 673.0528, 'train_samples_per_second': 6.967, 'train_steps_per_second': 0.874, 
'train_loss': 0.40028538347101533, 'epoch': 3.0}
TrainOutput(global_step=588, training_loss=0.40028538347101533, metrics={'train_runtime': 673.0528, 
'train_samples_per_second': 6.967, 'train_steps_per_second': 0.874, 
'train_loss': 0.40028538347101533, 'epoch': 3.0})
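
After training, the evaluation metrics can be recomputed on demand; a minimal sketch, assuming the trainer object created above:

# Re-run evaluation on eval_dataset after training completes
metrics = trainer.evaluate()
print(metrics["eval_loss"])  # roughly 0.12 for the final epoch shown above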

Using the fine-tuned model

new_data = ["This is movie was disappointing!", "This is the best movie ever!"]


import torch

new_input = tokenizer(new_data, return_tensors="pt", padding=True, truncation=True, max_length=64)

with torch.no_grad():
    outputs = model(**new_input)

predicted_labels = torch.argmax(outputs.logits, dim=1).tolist()
label_map = {0: "NEGATIVE", 1: "POSITIVE"}

for i, predicted_label in enumerate(predicted_labels):
    sentiment = label_map[predicted_label]
    print(f"\nInput Text {i + 1}: {new_data[i]}")
    print(f"Predicted Sentiment: {sentiment}")

Fine-tuning results

Input Text 1: This is movie was disappointing!
Predicted Sentiment: NEGATIVE

Input Text 2: This is the best movie ever!
Predicted Sentiment: POSITIVE
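
An alternative to the manual tokenize-and-argmax loop above is to wrap the fine-tuned model and tokenizer in a pipeline; a minimal sketch:

from transformers import pipeline

# Text-classification pipeline around the fine-tuned model
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

for result in classifier(new_data):
    print(result)  # {'label': ..., 'score': ...}; label names depend on the model's id2label config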

Saving models and tokenizers

model.save_pretrained("my_finetuned_files")

tokenizer.save_pretrained("my_finetuned_files")

# Loading a saved model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("my_finetuned_files")
tokenizer = AutoTokenizer.from_pretrained("my_finetuned_files")
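
As an optional sanity check (exact file names vary by transformers version), you can inspect what save_pretrained wrote to the directory:

import os

# Expect a model config (config.json), the model weights
# (model.safetensors or pytorch_model.bin), and the tokenizer files
print(sorted(os.listdir("my_finetuned_files")))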

Let's practice!

