Congratulations!

Working with Llama 3

Imtihan Ahmed

Machine Learning Engineer

Let's recall

Step 1 - running Llama locally

from llama_cpp import Llama

# Load a local GGUF model file
llm = Llama(model_path="path/to/model.gguf")

Step 2 - tune parameters

  • temperature, top_k, and top_p sampling parameters (see the sketch below)
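
A minimal sketch of how these sampling parameters are passed to a completion call, reusing the llm object loaded in Step 1; the prompt and values are illustrative.

output = llm(
    "List three uses of open-source LLMs.",
    temperature=0.7,  # lower values give more deterministic output
    top_k=40,         # sample only from the 40 most likely next tokens
    top_p=0.9,        # nucleus sampling: keep tokens covering 90% of the probability mass
    max_tokens=128,
)
print(output["choices"][0]["text"])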

Step 3 - assign roles

message_list = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]

Step 4 - guide outputs

  • Precise prompts
  • Stop words
  • Zero-shot and few-shot prompting (combined in the sketch below)
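
One way these techniques combine in practice: a hypothetical few-shot prompt for turning product names into slugs, plus a stop sequence so generation ends after a single answer; llm is the model loaded in Step 1.

# Few-shot examples followed by the case to complete
prompt = """Convert each product name to a slug.
Product: Solar Panel Kit -> solar-panel-kit
Product: Wireless Mouse -> wireless-mouse
Product: Standing Desk ->"""

# The stop sequence ends generation at the first newline
output = llm(prompt, max_tokens=16, temperature=0.0, stop=["\n"])
print(output["choices"][0]["text"].strip())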

Step 5 - explore JSON responses

response_format = {"type": "json_object"}
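
A short example of where response_format fits into a chat completion call; the messages are placeholders.

messages = [
    {"role": "system", "content": "You reply only in valid JSON."},
    {"role": "user", "content": "Give the name and release year of one Llama model."},
]

# Constrain the model's reply to a JSON object
response = llm.create_chat_completion(
    messages=messages,
    response_format={"type": "json_object"},
)
print(response["choices"][0]["message"]["content"])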

Step 6 - build multi-turn conversations

  • Conversation class
  • .create_completion() method (sketched together below)
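
A minimal sketch of the idea: a hypothetical Conversation class that stores the message history and replays it through .create_completion() on every turn. This is not the course's exact implementation, and the flattened prompt format is an assumption.

class Conversation:
    def __init__(self, llm, system_message):
        self.llm = llm
        self.messages = [{"role": "system", "content": system_message}]

    def _build_prompt(self):
        # Flatten the role-tagged history into a single prompt string (assumed format)
        lines = [f"{m['role']}: {m['content']}" for m in self.messages]
        return "\n".join(lines) + "\nassistant:"

    def ask(self, user_message, max_tokens=256):
        self.messages.append({"role": "user", "content": user_message})
        output = self.llm.create_completion(
            self._build_prompt(),
            max_tokens=max_tokens,
            stop=["user:"],  # stop before the model invents the next user turn
        )
        reply = output["choices"][0]["text"].strip()
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation(llm, "You are a helpful assistant.")
print(chat.ask("What is quantization?"))
print(chat.ask("How does it reduce model size?"))  # second turn sees the first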

What's next?

Thank you!
