As generative AI applications become more complex, developers are searching for tools to simplify LLM integration and orchestration. LangChain provides a flexible, open-source framework to do just that, especially in Python.
In this LangChain Python tutorial, you'll learn how to build intelligent applications and agents powered by LLMs like OpenAI's GPT-4. We'll walk step by step through setting up LangChain, building your first chain, integrating memory, and connecting vector stores like Pinecone.
LangChain is an orchestration framework that allows developers to:
- Connect with multiple LLMs (OpenAI, Cohere, Hugging Face, etc.)
- Build multi-step reasoning pipelines with chains
- Create autonomous agents that interact with tools
- Maintain context using memory modules
- Retrieve documents using semantic search from vector stores
Python is LangChain's most mature and feature-rich implementation, making it ideal for production-ready LLM apps.
```bash
pip install langchain langchain-openai
```
If you plan to use vector databases or additional integrations:
```bash
pip install langchain[all]
```
You’ll need an OpenAI API key. Set it in your environment:
```bash
export OPENAI_API_KEY="your-api-key"
```
In Python:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")
```
This completes your basic LangChain OpenAI integration.
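To sanity-check the setup, you can call the model directly (a minimal sketch; the prompt is just an example):

```python
# Send a single message and print the reply.
# ChatOpenAI.invoke returns an AIMessage; .content holds the text.
response = llm.invoke("Say hello in one sentence.")
print(response.content)
```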
Use a prompt template to generate structured outputs.
```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template("Summarize this: {text}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run("LangChain makes it easier to build apps with LLMs.")
print(result)
```
LangChain supports memory for conversational and multi-turn use cases.
```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()
chat_chain = ConversationChain(llm=llm, memory=memory)
chat_chain.run("Hi, I'm Alex.")
chat_chain.run("What's my name?")
```
Use the LangChain memory module to maintain context and user history.
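To see what the chain is actually remembering, you can inspect the buffer on the `memory` object created above:

```python
# ConversationBufferMemory stores the raw transcript; printing it
# shows both turns of the exchange above.
print(memory.buffer)
```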
Agents in LangChain allow LLMs to take actions using tools like search, math functions, or APIs.
```python
from langchain.agents import load_tools, initialize_agent

# The "serpapi" tool needs a SerpAPI key (SERPAPI_API_KEY) in your environment.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
response = agent.run("What is the weather in Paris and the square root of 144?")
print(response)
```
With agents, you can create LangChain autonomous agents that plan, act, and reason over multiple steps.
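Agents aren't limited to the built-in tools. Here's a minimal sketch of wrapping a plain Python function as a tool; the `word_count` function and its name are illustrative examples, not part of LangChain:

```python
from langchain.agents import Tool

# A hypothetical tool: any plain Python function can be wrapped this way.
def word_count(text: str) -> str:
    return f"That text contains {len(text.split())} words."

word_count_tool = Tool(
    name="word-counter",
    func=word_count,
    description="Counts the words in a piece of text.",
)

# Pass it alongside the built-in tools when initializing the agent.
agent = initialize_agent(tools + [word_count_tool], llm,
                         agent="zero-shot-react-description", verbose=True)
```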
For knowledge retrieval and RAG apps, integrate with a vector database like Pinecone.
```python
# Requires: pip install langchain-pinecone
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

embeddings = OpenAIEmbeddings()
vectorstore = PineconeVectorStore.from_existing_index("your-index", embeddings)
```
Now your app can fetch relevant content and summarize it—an essential RAG technique.
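As a quick illustration, a minimal retrieval-QA sketch over the vector store above (assuming the index already holds documents; the question is hypothetical):

```python
from langchain.chains import RetrievalQA

# Wrap the vector store as a retriever and let the LLM answer over
# whatever documents the semantic search returns.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)
print(qa_chain.run("Summarize what our docs say about onboarding."))
```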
LangChain typically requires API keys for LLM providers such as OpenAI or Hugging Face.
```bash
export OPENAI_API_KEY="your_openai_key"
```
```python
from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="your_openai_key")
```
Prompt templates allow you to define reusable, parameterized prompts for your LLMs.
```python
from langchain import PromptTemplate

template = """Question: {question}

Let's think step by step.

Answer: """
prompt = PromptTemplate(template=template, input_variables=["question"])
formatted_prompt = prompt.format(
    question="Can Barack Obama have a conversation with George Washington?"
)
print(formatted_prompt)
```
LangChain supports a variety of LLM providers.
```python
from langchain.llms import OpenAI

# text-davinci-003 is a legacy completions model; swap in a current model if needed.
llm = OpenAI(model_name="text-davinci-003", openai_api_key="your_openai_key")
response = llm("Tell me a joke about data scientists")
print(response)
```
```python
from langchain import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    huggingfacehub_api_token="your_hf_token",
)
print(llm("Tell me a joke about data scientists"))
```
Chains allow you to combine prompts and LLMs into reusable, multi-step workflows.
```python
from langchain.chains import LLMChain

llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Can Barack Obama have a conversation with George Washington?"
print(llm_chain.run(question))
```
```python
from langchain.chains import SimpleSequentialChain

# First prompt: get the most popular city in a country
first_prompt = PromptTemplate(
    input_variables=["country"],
    template="What is the most popular city in {country} for tourists? Just return the name of the city.",
)
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# Second prompt: get top things to do in the city
second_prompt = PromptTemplate(
    input_variables=["city"],
    template="What are the top three things to do in {city} for tourists? Just return three bullet points.",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

# Combine chains
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
final_answer = overall_chain.run("Canada")
print(final_answer)
```
Combine LLM chains, memory, and vector search to build fully functional chatbots:
- `ConversationChain` for contextual dialogue
- `RetrievalQA` for knowledge-base questions
- `ConversationalRetrievalChain` for hybrid chat + RAG (sketched below)
Use these tools to build AI customer support bots, sales assistants, and more.
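A minimal sketch of the hybrid chat + RAG case, assuming the `llm` and the Pinecone `vectorstore` from earlier sections:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Memory keyed to "chat_history", which this chain expects by default.
chat_memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True
)
rag_chat = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=chat_memory,
)
print(rag_chat.run("What do our docs say about refunds?"))
```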
LangChain supports:
- OpenAI, Cohere, Google Vertex AI
- Document loaders (PDF, Notion, Markdown)
- Tools and plugins (Google Search, Python REPL, APIs)
- Cloud integration (AWS, GCP)
- Deployment with LangServe (see the sketch below)
- Observability via LangSmith
- Orchestration via LangGraph
LangChain for enterprise AI is reliable, scalable, and model-agnostic.
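To illustrate the LangServe item above, here's a minimal deployment sketch (assuming `langserve`, `fastapi`, and `uvicorn` are installed; it serves the summarizer `chain` built earlier):

```python
from fastapi import FastAPI
from langserve import add_routes

app = FastAPI(title="LangChain demo server")

# Expose the chain over REST; LangServe adds /summarize/invoke,
# /summarize/stream, and a playground UI automatically.
add_routes(app, chain, path="/summarize")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```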
Use tools like:
- LangSmith: Monitor agent decisions, trace errors
- Callbacks: Log data at each stage (example below)
- LangGraph: Manage multi-agent or multi-chain workflows
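For example, a minimal callback sketch using the built-in `StdOutCallbackHandler` with the summarizer `chain` from earlier:

```python
from langchain.callbacks import StdOutCallbackHandler

# StdOutCallbackHandler prints chain start/end events to the console;
# custom handlers can subclass BaseCallbackHandler to log elsewhere.
handler = StdOutCallbackHandler()
result = chain.run(
    "LangChain makes it easier to build apps with LLMs.",
    callbacks=[handler],
)
```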
| Component | Purpose |
|---|---|
| Installation | `pip install langchain langchain-openai` |
| LLM Setup | `ChatOpenAI(model="gpt-4")` |
| Chain | Prompt → LLM → Response |
| PromptTemplate | Template-based prompt formatting |
| Memory | `ConversationBufferMemory` for context retention |
| Agent | Multi-tool reasoning capability |
| Vector Store | Retrieval with Pinecone-powered embeddings |
The official LangChain tutorials cover all these as well as advanced topics like RAG, agents, tool chaining, migrations, and memory management.
For hands-on learning:
- SitePoint's guide on LangChain in Python details agent, model, chunk, and chain components.
- DataCamp's tutorial walks through LLM app development with LangChain.
- Interactive YouTube crash courses, such as "LangChain Tutorial in Python", offer guided walkthroughs.
The LangChain subreddit emphasizes the importance of practice-based learning:
> "Just set a goal and code something using it… Keep a tab open with the documentation."
After mastering the basics, scale up:
- Use Retrieval QA chains over PDFs or knowledge bases.
- Migrate memory flows to LangGraph for long-term context (see the sketch after this list).
- Monitor production flows using LangSmith.
- Deploy via LangServe.
- Integrate more tools and implement custom agents.
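For the LangGraph item above, here's a minimal sketch of the graph-building pattern (assuming `pip install langgraph`; the `ChatState` schema and `respond` node are hypothetical placeholders, not a prescribed design):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Hypothetical state schema: LangGraph threads this dict between nodes.
class ChatState(TypedDict):
    messages: list

def respond(state: ChatState) -> ChatState:
    # A real node would call the LLM here; this stub just echoes the last message.
    reply = f"Echo: {state['messages'][-1]}"
    return {"messages": state["messages"] + [reply]}

graph = StateGraph(ChatState)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"messages": ["Hi, I'm Alex."]}))
```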
| Feature | LangChain | CrewAI |
|---|---|---|
| Use Case | General LLM apps | Task-driven agent teams |
| Agent Logic | Dynamic and modular | Pre-configured team roles |
| Developer Control | High | Medium |
| Best For | Custom workflows | Team-based agent logic |
LangChain gives developers flexibility; CrewAI offers a no-code-like agent team experience.
- Bookmark the LangChain Docs
- Try LangChain JS if building in JavaScript
- Explore open-source projects on GitHub
- Watch LangChain Python tutorials on YouTube
- Start with one use case (e.g., chatbot or data QA)
This LangChain Python tutorial has shown you how to:
- Set up LangChain with OpenAI
- Build prompt chains and agents
- Add memory and vector search
- Scale apps for real-world use
Whether you're building simple chatbots or complex autonomous systems, LangChain in Python provides the structure, flexibility, and power needed to succeed in the world of LLMs.