Preparing for fine-tuning

Introduction to LLMs in Python

Jasmin Ludolf

Senior Data Science Content Developer, DataCamp

Pipelines and auto classes

Pipelines: pipeline()

  • Streamlines tasks
  • Automatic model and tokenizer selection
  • Limited control

Hugging Face Transformers' pipelines
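
For example, a sentiment-analysis pipeline handles model and tokenizer selection automatically (a minimal sketch; the library picks a default checkpoint for the task, so the exact model downloaded may vary):

from transformers import pipeline

# pipeline() selects a default model and tokenizer for the chosen task
classifier = pipeline("sentiment-analysis")
print(classifier("This movie was surprisingly good!"))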

Auto classes (AutoModel class)

  • Customization
  • Manual adjustments
  • Supports fine-tuning

Hugging Face Transformers' AutoModel class showing customization options


LLM lifecycle

Pre-training

  • Broad data
  • Learn general patterns

Fine-tuning

  • Domain-specific data
  • Specialized tasks

Loading a dataset for fine-tuning

from datasets import load_dataset


train_data = load_dataset("imdb", split="train")
train_data = train_data.shard(num_shards=4, index=0)

test_data = load_dataset("imdb", split="test")
test_data = test_data.shard(num_shards=4, index=0)
  • load_dataset(): loads a dataset from the Hugging Face Hub
    • imdb: movie review classification
  • shard(): splits the dataset into num_shards chunks and keeps the one at index, here a quarter of each split
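
As a quick check (a minimal sketch; the counts assume the standard IMDb splits of 25,000 reviews each), each shard holds roughly a quarter of its split:

# Each shard keeps 1/num_shards of the split (here, roughly a quarter of 25,000 reviews)
print(len(train_data))
print(len(test_data))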

Auto classes

from transformers import AutoModel, AutoTokenizer

from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
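
For a binary task like IMDb sentiment, the classification head can also be sized explicitly (a minimal sketch; num_labels is a standard from_pretrained keyword, and 2 is assumed here for positive/negative labels):

# Assumption: two labels (positive/negative) for IMDb sentiment classification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)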

Tokenization

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset

train_data = load_dataset("imdb", split="train")
train_data = train_data.shard(num_shards=4, index=0)

test_data = load_dataset("imdb", split="test")
test_data = test_data.shard(num_shards=4, index=0)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")


# Tokenize the data
tokenized_training_data = tokenizer(train_data["text"], return_tensors="pt", padding=True, truncation=True, max_length=64)
tokenized_test_data = tokenizer(test_data["text"], return_tensors="pt", padding=True, truncation=True, max_length=64)

Tokenization output

print(tokenized_training_data)
{'input_ids': tensor([[  101,  1045, 12524,  1045,  2572,  8025,  1011,  3756,  
2013,  2026, 2678,  3573,  2138,  1997,  2035,  1996,  6704,  2008,  5129,  2009, 
2043,  2009, 2001,  2034,  2207,  1999,  3476,  1012,  1045,  2036, ...

Tokenizing row by row

def tokenize_function(text_data):
    return tokenizer(text_data["text"], return_tensors="pt", padding=True, truncation=True, max_length=64)

# Tokenize in batches
tokenized_in_batches = train_data.map(tokenize_function, batched=True)

# Tokenize row by row
tokenized_by_row = train_data.map(tokenize_function, batched=False)
Dataset({
    features: ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
    num_rows: 1563
})
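
Each mapped example now carries the tokenizer outputs alongside the original columns (a minimal sketch based on the Dataset features shown above):

# Inspect the first tokenized example: original columns plus tokenizer outputs
example = tokenized_in_batches[0]
print(example.keys())
print(example["input_ids"][:10])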


Subword tokenization

  • Common in modern tokenizers
  • Words split into meaningful sub-parts

 

Example: "Unbelievably" is tokenized as un, believ, ably.
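
You can inspect the subword split directly (a minimal sketch; the exact pieces depend on the tokenizer's vocabulary, so the output may differ from this illustration):

# Show the subword pieces produced for a single word (exact split depends on the vocabulary)
print(tokenizer.tokenize("Unbelievably"))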


Let's practice!
