Developing LLM Applications with LangChain
Jonathan Bennion
AI Engineer & LangChain Contributor
from langchain_core.prompts import PromptTemplate
template = "Explain this concept simply and concisely: {concept}"
prompt_template = PromptTemplate.from_template(template)
prompt = prompt_template.invoke({"concept": "Prompting LLMs"})
print(prompt)
text='Explain this concept simply and concisely: Prompting LLMs'
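Under the hood, `from_template` detects the `{concept}` placeholder and fills it when the template is invoked. A minimal pure-Python sketch of that idea (illustrative only, not LangChain's actual implementation):

```python
# Sketch of prompt templating with str.format-style placeholders.
# Mimics the idea behind PromptTemplate; not LangChain's real internals.
class SimplePromptTemplate:
    def __init__(self, template: str):
        self.template = template

    @classmethod
    def from_template(cls, template: str) -> "SimplePromptTemplate":
        return cls(template)

    def invoke(self, variables: dict) -> str:
        # Fill each {placeholder} with the matching variable
        return self.template.format(**variables)

template = "Explain this concept simply and concisely: {concept}"
prompt_template = SimplePromptTemplate.from_template(template)
print(prompt_template.invoke({"concept": "Prompting LLMs"}))
# Explain this concept simply and concisely: Prompting LLMs
```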
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="meta-llama/Llama-3.3-70B-Instruct",
    task="text-generation"
)
llm_chain = prompt_template | llm
concept = "Prompting LLMs"
print(llm_chain.invoke({"concept": concept}))
Prompting LLMs (Large Language Models) refers to the process of giving a model a
specific input or question to generate a response.
The | (pipe) operator chains components together. Chat messages take one of three roles: system, human, or ai.
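The pipe works because chainable components implement Python's `__or__` method. A conceptual sketch of how piping composes two steps (illustrative only; LangChain's actual Runnable classes are more involved):

```python
# Conceptual sketch: how a | (pipe) operator can chain processing steps.
# Each Step wraps a function; a | b returns a Step that runs a, then b.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # Compose: the output of self becomes the input of other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A "prompt" step piped into a fake "llm" step (no real model call)
prompt = Step(lambda d: f"Explain this concept simply and concisely: {d['concept']}")
fake_llm = Step(lambda text: f"[model response to: {text}]")

chain = prompt | fake_llm
print(chain.invoke({"concept": "Prompting LLMs"}))
# [model response to: Explain this concept simply and concisely: Prompting LLMs]
```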
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a calculator that responds with math."),
        ("human", "Answer this math question: What is two plus two?"),
        ("ai", "2+2=4"),
        ("human", "Answer this math question: {math}")
    ]
)
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", api_key='<OPENAI_API_TOKEN>')
llm_chain = template | llm
math = "What is five times five?"
response = llm_chain.invoke({"math": math})
print(response.content)
5x5=25
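Each (role, text) tuple becomes a role-tagged message, and any `{math}`-style placeholder is filled at invoke time. A pure-Python sketch of that behavior (conceptual only, not LangChain's internals):

```python
# Sketch: how role-tagged message tuples become a filled message list.
# Mimics the idea behind ChatPromptTemplate.from_messages + invoke.
def format_messages(messages, variables):
    # Fill placeholders in every message; messages without
    # placeholders pass through unchanged.
    return [(role, text.format(**variables)) for role, text in messages]

messages = [
    ("system", "You are a calculator that responds with math."),
    ("human", "Answer this math question: What is two plus two?"),
    ("ai", "2+2=4"),
    ("human", "Answer this math question: {math}"),
]

for role, text in format_messages(messages, {"math": "What is five times five?"}):
    print(f"{role}: {text}")
```

The filled-in example exchange (the "ai" message showing `2+2=4`) is what steers the model toward answering in the terse `5x5=25` style.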