Making models smaller with quantization

Fine-Tuning with Llama 3

Francesca Donadoni

Curriculum Manager, DataCamp

What is quantization?

 

  • Reducing model precision
  • 32-bit float to:
    • 8-bit integer
    • 4-bit integer
  • Quantization-aware training
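The idea can be sketched in plain Python: map each 32-bit float to an 8-bit integer plus a shared scale factor, then map back. This is a minimal illustration of symmetric per-tensor quantization, not how libraries like bitsandbytes implement it internally.

```python
# Minimal sketch of symmetric 8-bit quantization (illustrative only)

def quantize_int8(weights):
    """Map 32-bit floats to 8-bit integers plus one scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0   # one scale per tensor
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Approximately recover the original floats."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)       # integers in [-127, 127]
restored = dequantize_int8(q, scale)    # close to the original values
```

The round trip loses a little precision (the rounding step), which is the trade-off quantization makes for a 4x smaller representation.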

[Image: abstract block]

Types of quantization

 

  • Weight quantization: reduces the precision of model weights
  • Activation quantization: reduces the precision of activation values
  • Post-training quantization (PTQ): reduces model precision after training
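A rough back-of-the-envelope calculation shows why this matters for an 8-billion-parameter model such as Llama 3 8B (parameter storage only; activations and runtime overhead are excluded):

```python
def param_memory_gb(n_params, bits):
    """Storage needed for n_params parameters at the given bit width, in GB."""
    return n_params * bits / 8 / 1e9

for bits, name in [(32, "fp32"), (8, "int8"), (4, "int4")]:
    print(f"{name}: ~{param_memory_gb(8e9, bits):.0f} GB")
# fp32: ~32 GB
# int8: ~8 GB
# int4: ~4 GB
```

Dropping from 32-bit to 4-bit precision shrinks the weights roughly 8x, which is what makes an 8B model fit on a single consumer GPU.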

Configuring quantization with bitsandbytes

import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
  • set precision (load_in_4bit or load_in_8bit)
    load_in_4bit=True,
  • set quantization type: "fp4" (4-bit float) or "nf4" (normalized 4-bit float)
    bnb_4bit_quant_type="nf4",
  • set compute precision (32-bit float or 16-bit bfloat)
    bnb_4bit_compute_dtype=torch.bfloat16)

Loading model with quantization

import torch
from transformers import BitsAndBytesConfig, AutoModelForCausalLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Llama3-ChatQA-1.5-8B",
    quantization_config=bnb_config
)

Using a quantized model

promptstr = """System: You are a helpful chatbot who answers questions about planets.
User: Explain the history of Mars
Assistant: """

inputs = tokenizer.encode(promptstr, return_tensors="pt")
outputs = model.generate(inputs, max_length=200)
decoded_outputs = tokenizer.decode(outputs[0, inputs.shape[1]:], skip_special_tokens=True)
print(decoded_outputs)
Here is a brief history of Mars:
- 4.6 billion years ago: Mars formed as part of the solar system.
- 3.8 billion years ago: Mars had a thick atmosphere and liquid water on its surface.
- 3.8 billion years ago to 3.5 billion years ago: Mars lost its magnetic field and atmosphere, 
and became a cold, dry planet.
- 3.5 billion years ago to present: Mars has been cold and dry, with a thin atmosphere.

Finetuning a quantized model

  • Fully quantized weights cannot be fine-tuned directly
  • LoRA adaptation: train small adapter layers on top of the frozen quantized model
trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    train_dataset=ds,
    max_seq_length=250,
    dataset_text_field='conversation',
    tokenizer=tokenizer,
    args=training_arguments
)
trainer.train()
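The peft_config passed to SFTTrainer is assumed to be a LoRA configuration along these lines (the hyperparameter values here are illustrative, not prescribed by the course):

```python
from peft import LoraConfig

# Illustrative hyperparameters; tune r, lora_alpha, and lora_dropout per task
peft_config = LoraConfig(
    r=16,                  # rank of the low-rank adapter matrices
    lora_alpha=32,         # scaling factor applied to the adapter updates
    lora_dropout=0.05,     # dropout on the adapter inputs during training
    task_type="CAUSAL_LM", # causal language modeling, as with Llama 3
)
```

Only the small adapter matrices are trained; the quantized base weights stay frozen, which is what makes fine-tuning a quantized model feasible.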

Let's practice!

