The Llama fine-tuning libraries

Fine-Tuning with Llama 3

Francesca Donadoni

Curriculum Manager, DataCamp

When to use fine-tuning

  • Starts from a pre-trained model
  • Trains further on specialized data

An AI Engineer looking at a representation of model parameters.

  • Improve accuracy
  • Reduce bias
  • Improve knowledge base

How to use fine-tuning

  • Quality of the data
  • Model's capacity
  • Task definition

  • Fine-tuning process
  • New model
  • Evaluation

The training loop. A training dataset, arguments, model, tokenizer, and fine-tuning class are used to start training, which produces a fine-tuned model. Evaluation occurs with the fine-tuned model and an evaluation dataset.
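The components in this diagram map onto what most fine-tuning APIs expect: a training dataset, training arguments, a model, a tokenizer, and a trainer class. The sketch below is a minimal, framework-agnostic illustration of the loop itself, using a toy linear model; all names here are made up for illustration, and the tokenizer step is omitted because the toy data is already numeric.

```python
# Illustrative sketch of the training loop in the diagram: a training dataset,
# arguments, and a model go into training, which produces a fine-tuned model;
# evaluation then uses that model with an evaluation dataset.
# The toy linear model and all names are illustrative, not a real LLM API.

train_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # (input, target) pairs
eval_data = [(3.0, 7.0), (4.0, 9.0)]

args = {"epochs": 200, "lr": 0.05}   # stand-in for training arguments
model = {"w": 0.0, "b": 0.0}         # stand-in for pre-trained weights

def train(model, dataset, args):
    """Run gradient-descent updates; returns the 'fine-tuned' model."""
    for _ in range(args["epochs"]):
        for x, y in dataset:
            error = model["w"] * x + model["b"] - y
            model["w"] -= args["lr"] * error * x
            model["b"] -= args["lr"] * error
    return model

def evaluate(model, dataset):
    """Mean squared error of the model on an evaluation dataset."""
    return sum((model["w"] * x + model["b"] - y) ** 2
               for x, y in dataset) / len(dataset)

tuned = train(model, train_data, args)
print(f"eval MSE: {evaluate(tuned, eval_data):.4f}")
```

Swapping the toy pieces for a real dataset, arguments object, model, tokenizer, and fine-tuning class gives the workflow shown in the diagram.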


The Llama fine-tuning libraries

  • 📚 Several libraries for fine-tuning
  • 🦙 TorchTune for Llama fine-tuning
  • 🚀 Launching a fine-tuning task with TorchTune

Options for Llama fine-tuning

  • TorchTune
    • Based on configurable templates
    • Ideal for: scaling quickly

An icon of a clock representing quick experimentation.

  • SFTTrainer from Hugging Face
    • Access to other LLMs
    • Ideal for: fine-tuning multiple models

An icon representing multiple models.

  • Unsloth
    • Efficient memory usage
    • Ideal for: limited hardware

An icon representing CPU hardware.

  • Axolotl
    • Modular approach
    • Ideal for: no extensive reconfiguration

An icon of a gear and a hand representing a configuration.


TorchTune and the recipes for fine-tuning

  • TorchTune recipes:

    • Modular templates
    • Configurable, so they can be adapted to different projects
    • Keep code organized
    • Ensure reproducibility

An illustration of a soup being prepared to depict the concept of a recipe.
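Each recipe is driven by a YAML configuration file. The fragment below is an abridged, illustrative sketch in the spirit of TorchTune's configs, which locate components via a `_component_` key; exact field names vary across TorchTune versions, and the tokenizer path shown is a placeholder.

```yaml
# Illustrative fragment of a TorchTune-style recipe config; fields and
# component paths are examples and may differ between TorchTune versions.
model:
  _component_: torchtune.models.llama3_1.llama3_1_8b

tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: /tmp/Llama-3.1-8B/original/tokenizer.model  # placeholder path

dataset:
  _component_: torchtune.datasets.alpaca_dataset

epochs: 1

optimizer:
  _component_: torch.optim.AdamW
  lr: 2e-5
```

Editing fields like `dataset` or `epochs` adapts the same recipe to a different project, which is what makes the templates reusable and reproducible.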


TorchTune list

  • Run from a terminal
  • Environment with Python
  • Install TorchTune
    pip3 install torchtune
    
  • List available recipes

    tune ls
    
  • Prefix commands with ! if using IPython

    !tune ls
    

TorchTune list

!tune ls
  • Output:
RECIPE                                   CONFIG                                  
full_finetune_single_device              llama3/8B_full_single_device            
                                         llama3_1/8B_full_single_device          
                                         llama3_2/1B_full_single_device          
                                         llama3_2/3B_full_single_device       
full_finetune_distributed                llama3/8B_full                          
                                         llama3_1/8B_full                        
                                         llama3_2/1B_full                          
                                         ...

TorchTune run

  • Combine a recipe with --config and a configuration name
  • Run fine-tuning

    tune run full_finetune_single_device --config \
    llama3_1/8B_full_single_device

  • Override parameters on the command line: device=cpu or device=cuda

  • epochs=<int> (<int> is 0 or a positive integer)

Let's practice!
