As the demand for AI-driven applications grows, developers are increasingly turning to frameworks that simplify the integration of large language models (LLMs) into their applications. LangChain.js, the JavaScript/TypeScript version of LangChain, empowers web developers to build conversational agents, retrieval-augmented generation (RAG) systems, and AI workflows—entirely in JavaScript.
In this article, we’ll explore the LangChain JavaScript API, its core modules, use cases, and how to get started.
LangChain.js is the official JavaScript and TypeScript SDK for LangChain, designed for building LLM-based applications across environments including Node.js, Deno, Vercel Edge Functions, and the browser.
It mirrors many of the Python features while taking advantage of JavaScript’s async/await model, modular design, and ecosystem flexibility.
LangChain.js is modular and broken into packages for flexibility. Key components include:
| Package | Purpose |
|---|---|
| `@langchain/core` | Core abstractions like chains, prompts, memory |
| `@langchain/openai` | Integration with OpenAI LLMs and embeddings |
| `@langchain/community` | Prebuilt integrations (e.g., tools, retrievers) |
| `@langchain/pinecone` | Pinecone vector store integration |
| `langchain` (meta-package) | Convenience package combining common tools |
To get started with LangChain.js:
```bash
npm install langchain
```
Or, if you prefer specific modules:
```bash
npm install @langchain/core @langchain/openai
```
With that installed, a basic LLM call looks like this:

```javascript
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({ modelName: "gpt-4", temperature: 0.7 });
const result = await llm.invoke("What is the capital of France?");
console.log(result); // Paris
```
For retrieval-augmented generation, you can connect a Pinecone vector store to a retrieval chain:

```javascript
import { PineconeClient } from "@pinecone-database/pinecone";
import { PineconeStore } from "@langchain/pinecone";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RetrievalQAChain } from "langchain/chains";

const pinecone = new PineconeClient();
await pinecone.init({ apiKey: "your-key", environment: "your-env" });
const index = pinecone.Index("langchain-js");

const embeddings = new OpenAIEmbeddings();
const vectorstore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex: index,
  namespace: "demo-namespace",
});

const retriever = vectorstore.asRetriever();
const llm = new OpenAI();
const qaChain = RetrievalQAChain.fromLLM(llm, retriever);

const answer = await qaChain.call({ query: "What is vector search?" });
console.log(answer);
```
LangChain.js supports tool use within agent workflows, enabling the model to decide when to invoke a function or external API.
```javascript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "langchain/tools/calculator";
import { ChatOpenAI } from "@langchain/openai";

const tools = [new Calculator()];
// The "openai-functions" agent type requires a chat model
const llm = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });

const executor = await initializeAgentExecutorWithOptions(tools, llm, {
  agentType: "openai-functions",
});

const result = await executor.call({ input: "What is 5 times 7?" });
console.log(result.output); // 35
```
LangChain.js runs anywhere modern JavaScript runs:

- Node.js apps and CLIs
- Browser-based AI apps
- Edge functions (Vercel, Cloudflare Workers)
- Serverless APIs (AWS Lambda, GCP Cloud Functions)
- Frameworks like Next.js, SvelteKit, and Astro (see the sketch after this list)
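As a rough illustration, here is a minimal sketch of a chat endpoint deployed to the edge, assuming a Next.js App Router project; the `app/api/chat/route.js` path and the request shape are illustrative assumptions, not part of LangChain.js itself:

```js
// app/api/chat/route.js (illustrative path; assumes Next.js App Router)
import { OpenAI } from "@langchain/openai";

// Ask Next.js to run this handler on the edge runtime
export const runtime = "edge";

export async function POST(req) {
  const { question } = await req.json(); // hypothetical request shape
  const llm = new OpenAI({ temperature: 0 });
  const answer = await llm.invoke(question);
  return Response.json({ answer });
}
```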
- Streaming responses: Use streams for real-time token output (see the sketch after this list).
- Memory modules: Store prior messages for conversation continuity.
- Structured output: Schema-aware response formats.
- Graph agents: Orchestrate multi-step workflows with LangGraph.js.
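For example, token-by-token streaming with the `.stream()` method might look like this (a minimal sketch; assumes `@langchain/openai` is installed and `OPENAI_API_KEY` is set):

```js
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({ temperature: 0.7 });

// .stream() returns an async iterable of output chunks
const stream = await llm.stream("Write a haiku about JavaScript.");
for await (const chunk of stream) {
  process.stdout.write(chunk); // print each token as it arrives
}
```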
- Use environment variables to manage API keys securely.
- Stream responses when needed to reduce latency.
- Use semantic filtering (`score_threshold`) to control vector search cost.
- Cache embeddings locally or in Redis if reusing documents (see the sketch after this list).
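To illustrate the caching point, here is a minimal sketch of an in-memory embedding cache; the `CachedEmbeddings` wrapper is a hypothetical helper, not a LangChain API (a production setup might back it with Redis instead of a `Map`):

```js
import { OpenAIEmbeddings } from "@langchain/openai";

// Hypothetical wrapper: memoizes embedQuery results so repeated
// texts don't trigger duplicate (billable) embedding calls.
class CachedEmbeddings {
  constructor(embeddings) {
    this.embeddings = embeddings;
    this.cache = new Map();
  }

  async embedQuery(text) {
    if (!this.cache.has(text)) {
      this.cache.set(text, await this.embeddings.embedQuery(text));
    }
    return this.cache.get(text);
  }
}

const embeddings = new CachedEmbeddings(new OpenAIEmbeddings());
await embeddings.embedQuery("vector search"); // calls the API
await embeddings.embedQuery("vector search"); // served from cache
```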
| Feature | Benefit |
|---|---|
| TypeScript support | Strong typings, autocomplete |
| Modular architecture | Use only what you need |
| OpenAI & Pinecone integration | Easy RAG setup |
| Serverless-ready | Perfect for modern JS stacks |
| Active development | Backed by the LangChain team and community |
LangChain.js simplifies building LLM-based applications by providing:
- Modular packages: Use only the functionality you need (chains, agents, retrievers, embeddings, and memory modules).
- Unified abstractions: Standardized interfaces across models and tools for consistent logic.
- Cross-environment support: Works in Node.js, Deno, Vercel Edge Functions, and browsers.
- Native TypeScript support: Strong typing and autocomplete for a better dev experience.
- Out-of-the-box integrations: Includes OpenAI, Pinecone, Hugging Face, and more.
With LangChain.js, developers can build complex workflows like RAG pipelines, autonomous agents, and interactive chat UIs without dealing with raw API calls.
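To give a flavor of those unified abstractions, here is a minimal sketch that composes a prompt template with a chat model through the shared `.pipe()` interface (assumes `@langchain/core` and `@langchain/openai` are installed; the prompt text is illustrative):

```js
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate(
  "Summarize the following in one sentence: {text}"
);
const model = new ChatOpenAI({ temperature: 0 });

// Prompt and model share the same Runnable interface, so they compose
const chain = prompt.pipe(model);
const summary = await chain.invoke({
  text: "LangChain.js is the JavaScript SDK for building LLM applications.",
});
console.log(summary.content);
```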
Here’s how to get started with LangChain.js:
Step 1: Initialize a project
```bash
npm init -y
```
Step 2: Install core packages
```bash
npm install langchain
```
Or install specific modules:
```bash
npm install @langchain/core @langchain/openai
```
Step 3: Configure your environment
Create a `.env` file:
```env
OPENAI_API_KEY=your-api-key
```
Use `dotenv` or `process.env` to access keys securely (see the sketch below).
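For instance, a minimal sketch of loading the key with `dotenv` (assumes the `dotenv` package is installed):

```js
// Importing dotenv/config loads .env into process.env as a side effect
import "dotenv/config";

console.log(process.env.OPENAI_API_KEY ? "API key loaded" : "API key missing");
```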
Step 4: Write your first LLM call
```js
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({ temperature: 0.7 });
const res = await llm.invoke("What is LangChain?");
console.log(res);
```
LangSmith enables tracing, debugging, and evaluating your LangChain apps.
To use LangSmith with LangChain.js:
Install the LangSmith SDK (included with the `langchain` package).
Set environment variables:
```env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-api-key
LANGCHAIN_PROJECT=your-project-name
```
Enable tracing in your code. With the environment variables above set (including `LANGCHAIN_TRACING_V2=true`), LangChain.js traces chain and model runs automatically. To trace your own functions, wrap them with `traceable` from the `langsmith` package (a minimal sketch; the wrapper name `generate` is illustrative):

```js
import { OpenAI } from "@langchain/openai";
import { traceable } from "langsmith/traceable";

const llm = new OpenAI();

// Wrap any async function so its calls appear as runs in LangSmith
const generate = traceable(async (prompt) => llm.invoke(prompt), {
  name: "generate",
});
const response = await generate("Hello world");
```
Use the LangSmith dashboard to:
- Visualize chains and agent steps
- Monitor input/output data
- Debug token usage and prompt formats
LangSmith is especially useful for RAG pipelines and multi-agent workflows.
Here’s a basic conversational chain using memory:
```js
import { ConversationChain } from "langchain/chains";
import { OpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";

const llm = new OpenAI();
const memory = new BufferMemory();
const chain = new ConversationChain({ llm, memory });

await chain.call({ input: "Hi, I'm John." });
const res = await chain.call({ input: "What's my name?" });
console.log(res.response); // Output: "Your name is John."
```
Other supported chain types:
- `LLMChain` for prompt-based pipelines (see the sketch after this list)
- `RetrievalQAChain` for question answering with vector search
- `RouterChain` for branching logic between tools or agents
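For instance, a minimal `LLMChain` sketch pairing a prompt template with a model (the product prompt is illustrative):

```js
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { OpenAI } from "@langchain/openai";

const prompt = PromptTemplate.fromTemplate(
  "Suggest a name for a company that makes {product}."
);
const chain = new LLMChain({ llm: new OpenAI({ temperature: 0.9 }), prompt });

// LLMChain fills the template, calls the model, and returns { text }
const res = await chain.call({ product: "colorful socks" });
console.log(res.text);
```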
The best resources for LangChain.js development include:
- Official LangChain.js Docs: https://js.langchain.com
- LangChain.js GitHub Repo: https://github.com/langchain-ai/langchainjs
- LangSmith for JavaScript: https://docs.smith.langchain.com
- NPM Package Pages: look up packages like `@langchain/core`, `@langchain/openai`, and `@langchain/community`
- Community Examples: CodeSandbox templates; tutorials on Medium, Hashnode, and Dev.to; GitHub examples in the repo's `examples/` folder
| Question | Summary Answer |
|---|---|
| What LangChain.js simplifies | Modular APIs, TypeScript support, cross-environment flexibility |
| Install steps | `npm install langchain`, set API keys, write LLM code |
| LangSmith integration | Add API key, use `traceable()` or environment hooks |
| Conversational chains | Use `ConversationChain` with `BufferMemory` |
| Docs and guides | Visit js.langchain.com and GitHub |
LangChain.js brings the power of LangChain to the JavaScript world, giving frontend and full-stack developers an efficient way to build with LLMs. Whether you’re building a chatbot, an AI assistant, or a document Q&A tool, LangChain’s JavaScript API provides a production-ready, extensible, and developer-friendly interface.