Fine-tuning approaches

Introduction to LLMs in Python

Jasmin Ludolf

Senior Data Science Content Developer, DataCamp

Fine-tuning

 

LLMs fine-tuning chemistry use case


Full fine-tuning

 

  • All of the model's weights are updated
  • Computationally expensive

 

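Full fine-tuning can be sketched with a toy PyTorch model standing in for a pretrained LLM (the model and data below are hypothetical placeholders): every parameter stays trainable and receives a gradient update.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained model: an encoder layer plus a task head
model = nn.Sequential(
    nn.Linear(16, 32),   # "pretrained" encoder layer
    nn.ReLU(),
    nn.Linear(32, 2),    # task-specific classification head
)

# Full fine-tuning: every parameter remains trainable
for param in model.parameters():
    param.requires_grad = True

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative update step on random stand-in data
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # 610
```

Because every weight is updated, memory and compute scale with the full parameter count — the source of the expense noted above.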


Partial fine-tuning

 

  • Some layers are fixed
  • Only task-specific layers are updated

 

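In code, partial fine-tuning amounts to freezing the fixed layers (`requires_grad = False`) and passing only the task-specific parameters to the optimizer. A minimal sketch with a toy PyTorch encoder and head (both are hypothetical stand-ins, not a real LLM):

```python
import torch
import torch.nn as nn

# Toy stand-in: "pretrained" encoder layers plus a new task head
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 2)

# Partial fine-tuning: fix the encoder, update only the task head
for param in encoder.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

# One illustrative update step: gradients flow only into the head
x = torch.randn(8, 16)
loss = nn.functional.cross_entropy(head(encoder(x)), torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()

frozen = sum(p.numel() for p in encoder.parameters())
trainable = sum(p.numel() for p in head.parameters())
print(frozen, trainable)  # 544 66
```

Only the head's parameters are updated, which is why partial fine-tuning is far cheaper than updating the whole model.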


Transfer learning

 

  • A pre-trained model is adapted to a different but related task
  • Transfers knowledge learned in one domain to a related one

The transfer learning paradigm
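One common transfer-learning pattern is to reuse a model trained on a source task and attach a fresh output layer for the related target task. A toy PyTorch sketch (the models and layer sizes are made-up placeholders):

```python
import torch
import torch.nn as nn

# Toy "source-task" model: encoder plus a 2-class head
source_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Transfer learning: keep the learned encoder, attach a new head
# for a related 5-class target task
encoder = source_model[:2]            # reuse the source-task knowledge
target_model = nn.Sequential(encoder, nn.Linear(32, 5))

out = target_model(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 5])
```

The encoder's weights carry over what was learned on the source domain; only the new head starts from scratch for the target task.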


N-shot learning

  • Zero-shot learning: no examples
  • One-shot learning: one example
  • Few-shot learning: several examples
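The three settings differ only in how many worked examples appear in the prompt. A minimal sketch of n-shot prompt construction (the helper function and example texts are made up for illustration):

```python
def build_prompt(task, examples, query):
    """Assemble an n-shot prompt: task instruction, n examples, then the query."""
    lines = [task]
    for text, label in examples:
        lines.append(f'Text: "{text}" Sentiment: {label}')
    lines.append(f'Text: "{query}" Sentiment:')
    return "\n".join(lines)

task = "Classify the sentiment of this sentence as either Positive or Negative."
examples = [("I'm feeling great today!", "Positive")]

zero_shot = build_prompt(task, [], "The weather today is lovely.")   # no examples
one_shot = build_prompt(task, examples, "The weather today is lovely.")  # one example
print(one_shot)
```

Adding more (text, label) pairs to `examples` turns the same template into a few-shot prompt.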

One-shot learning

from transformers import pipeline

# Load a sentiment classifier fine-tuned on SST-2
generator = pipeline(task="sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

input_text = """
Classify the sentiment of this sentence as either Positive or Negative.
Example:
Text: "I'm feeling great today!" Sentiment: Positive
Text: "The weather today is lovely." Sentiment:
"""

# Classify the prompt, which includes one worked example
result = generator(input_text, max_length=100)
print(result[0]["label"])

Output:

POSITIVE

Let's practice!

