Fine-Tuning with Llama 3
Francesca Donadoni
Curriculum Manager, DataCamp
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Llama3-ChatQA-1.5-8B",
    quantization_config=bnb_config,
)
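Quantizing to 4-bit NF4 roughly quarters the memory needed for the weights compared with 16-bit precision. A back-of-the-envelope estimate for an 8-billion-parameter model (illustrative arithmetic, not measured figures):

```python
params = 8e9  # approximate parameter count of an 8B model

bf16_gb = params * 2 / 1024**3    # 2 bytes per weight in bfloat16
nf4_gb = params * 0.5 / 1024**3   # ~0.5 bytes per weight in 4-bit NF4

print(f"bfloat16: ~{bf16_gb:.1f} GB, NF4: ~{nf4_gb:.1f} GB")
```

Actual usage is somewhat higher because activations, the KV cache, and some layers kept in higher precision add overhead.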
promptstr = """System: You are a helpful chatbot who answers questions about planets. User: Explain the history of Mars Assistant: """
inputs = tokenizer.encode(promptstr, return_tensors="pt")
outputs = model.generate(inputs, max_length=200)
decoded_outputs = tokenizer.decode(outputs[0, inputs.shape[1]:], skip_special_tokens = True)
print(decoded_outputs)
Here is a brief history of Mars:
- 4.6 billion years ago: Mars formed as part of the solar system.
- 3.8 billion years ago: Mars had a thick atmosphere and liquid water on its surface.
- 3.8 billion years ago to 3.5 billion years ago: Mars lost its magnetic field and atmosphere,
and became a cold, dry planet.
- 3.5 billion years ago to present: Mars has been cold and dry, with a thin atmosphere.
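The trainer below references a `peft_config` that is not shown on these slides. A minimal sketch using the `peft` library's `LoraConfig`; all hyperparameter values here are illustrative assumptions, not values from the course:

```python
from peft import LoraConfig

# Illustrative LoRA settings (values are assumptions, not from the slides)
peft_config = LoraConfig(
    r=8,                     # rank of the low-rank update matrices
    lora_alpha=16,           # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
```

Passing this config to the trainer keeps the quantized base weights frozen and trains only the small adapter matrices.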
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    train_dataset=ds,
    max_seq_length=250,
    dataset_text_field="conversation",
    tokenizer=tokenizer,
    args=training_arguments,
)
trainer.train()