Fine-tuning with TorchTune

Fine-Tuning with Llama 3

Francesca Donadoni

Curriculum Manager, DataCamp

The components of TorchTune fine-tuning

  • Model

    • Defines the architecture and pre-trained weights to fine-tune
    • Available in different versions and parameter counts
  • Dataset

    • Specifies the data used for training
  • Recipe

    • Central configuration file combining the model, dataset, and training parameters
    • Ensures consistency and reproducibility



The components of TorchTune fine-tuning

  • Model

    • !tune ls
      llama3/8B_full
      llama3_1/8B_full
      llama3_2/1B_full ...
      
  • Dataset

    • ds.save_to_disk("new_dataset")
  • Recipe

    • custom_recipe.yaml



The components of a TorchTune recipe

  • General settings and output directory
    • Batch size, device, and epochs

  • Model
    • Specifies architecture and model configurations

  • Optimizer

    • Includes learning rate
  • Dataset

    • Defines preprocessing and dataset path
batch_size: 4
device: cuda
epochs: 20
output_dir: /tmp/full-llama3.2-finetune

model:
  _component_: torchtune.models.llama3_2.llama3_2_1b

optimizer:
  _component_: bitsandbytes.optim.PagedAdamW8bit
  lr: 2.0e-05

dataset:
  _component_: torchtune.datasets.alpaca_dataset
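A recipe like the one above can be sanity-checked in Python before launching a run; a minimal sketch using PyYAML's `safe_load`, where the recipe text simply mirrors the fields shown above:

```python
import yaml

# Recipe text mirroring the fields shown above
recipe_text = """
batch_size: 4
device: cuda
epochs: 20
output_dir: /tmp/full-llama3.2-finetune

model:
  _component_: torchtune.models.llama3_2.llama3_2_1b

optimizer:
  _component_: bitsandbytes.optim.PagedAdamW8bit
  lr: 2.0e-05

dataset:
  _component_: torchtune.datasets.alpaca_dataset
"""

# Parse the YAML into a nested dict and inspect a few fields
config = yaml.safe_load(recipe_text)
print(config["batch_size"])
print(config["optimizer"]["lr"])
```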

Configuring TorchTune recipes

  • Many more parameters are available
  • Recipes can be generated in Python using the yaml library
import yaml

config_dict = {
    "batch_size": 4,
    "device": "cuda",
    "model": {"_component_": "torchtune.models.llama3_2.llama3_2_1b"},
    ...
}

yaml_file_path = "custom_recipe.yaml"
with open(yaml_file_path, "w") as yaml_file:
    yaml.dump(config_dict, yaml_file)

Running custom fine-tuning

tune run --config custom_recipe.yaml
INFO:torchtune.utils.logging:Running 
Writing logs to /tmp/full-llama3.2-finetune/log_1732815689.txt
INFO:torchtune.utils.logging:Model is initialized with precision torch.bfloat16.
INFO:torchtune.utils.logging:Tokenizer is initialized from file.
1|52|Loss: 2.3697006702423096:   0%|                     | 52/25880
  • Saved logs
  • Successful initialization
  • Epoch and step count progress
  • Loss metrics
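Loss metrics like the one in the progress line above can also be scraped from the saved logs; a small sketch assuming the `epoch|step|Loss: value` line format shown, with hypothetical log lines for illustration:

```python
import re

# Hypothetical log lines in the "epoch|step|Loss: value" progress format
lines = [
    "1|52|Loss: 2.3697006702423096",
    "1|53|Loss: 2.3512008190155029",
]

# Capture epoch, step, and loss from each matching line
pattern = re.compile(r"(\d+)\|(\d+)\|Loss: ([\d.]+)")

losses = []
for line in lines:
    match = pattern.match(line)
    if match:
        epoch, step = int(match.group(1)), int(match.group(2))
        losses.append((epoch, step, float(match.group(3))))

print(losses[0])
```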

Let's practice!
