This template scaffolds a LangChain.js + Next.js starter app. It showcases how to use and combine LangChain modules for several use cases, including simple chat, structured output, agents, and retrieval chains and agents.
Most of them use Vercel's AI SDK to stream tokens to the client and display the incoming messages.
You can check out a hosted version of this repo here: https://langchain-nextjs-template.vercel.app/
First, clone this repo and download it locally.
Next, you'll need to set up environment variables in your repo's `.env.local` file. Copy the `.env.example` file to `.env.local`.
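In a terminal at the repo root, that copy step looks like this:

```shell
# create your local env file from the checked-in template
cp .env.example .env.local
```

You can then open `.env.local` in any editor to fill in your keys.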
To start with the basic examples, you'll just need to add your OpenAI API key.
Next, install the required packages using your preferred package manager (e.g. `yarn`).
Now you're ready to run the development server:
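Assuming `yarn` as the package manager (`npm` or `pnpm` work the same way), the install-and-run steps are:

```shell
# install dependencies
yarn
# start the Next.js dev server on http://localhost:3000
yarn dev
```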
Open http://localhost:3000 with your browser to see the result! Ask the bot something and you'll see a streamed response:
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
Backend logic lives in `app/api/chat/route.ts`. From here, you can change the prompt and model, or add other modules and logic.
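As a rough sketch of what such a route can look like (import paths and helper names vary across LangChain.js and Vercel AI SDK versions, so treat this as an illustration rather than the template's exact code):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { BytesOutputParser } from "@langchain/core/output_parsers";
import { StreamingTextResponse } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Change the prompt here to alter the bot's persona.
  const prompt = PromptTemplate.fromTemplate(
    `You are a helpful assistant. Answer the user:\n\n{input}`
  );
  // Swap the model or tweak parameters here.
  const model = new ChatOpenAI({ temperature: 0.8 });

  // Compose prompt -> model -> parser, then stream the result to the client.
  const chain = prompt.pipe(model).pipe(new BytesOutputParser());
  const stream = await chain.stream({
    input: messages[messages.length - 1].content,
  });

  return new StreamingTextResponse(stream);
}
```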
The second example shows how to have a model return output according to a specific schema using OpenAI Functions.
Click the `Structured Output` link in the navbar to try it out:
The chain in this example uses a popular library called Zod to construct a schema, then formats it in the way OpenAI expects.
It then passes that schema as a function into OpenAI, along with a `function_call` parameter to force OpenAI to return arguments in the specified format.
For more details, check out this documentation page.
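For illustration, the schema construction and forced function call can be sketched like this (the helper package `zod-to-json-schema` and the exact `bind` signature depend on your LangChain.js version):

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "@langchain/openai";

// Describe the desired output shape with Zod.
const schema = z.object({
  tone: z.enum(["positive", "negative", "neutral"]).describe("The tone of the input"),
  word_count: z.number().describe("The number of words in the input"),
});

const model = new ChatOpenAI({ temperature: 0 }).bind({
  // Convert the Zod schema into the JSON Schema format OpenAI expects.
  functions: [
    {
      name: "output_formatter",
      description: "Always use to format your response to the user.",
      parameters: zodToJsonSchema(schema),
    },
  ],
  // Forcing this function call guarantees arguments matching the schema.
  function_call: { name: "output_formatter" },
});
```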
To try out the agent example, you'll need to give the agent access to the internet by adding your SERP API key to your `.env.local` file.
Head over to the SERP API website and get an API key if you don't already have one.
You can then click the `Agent` example and try asking it more complex questions:
This example uses the OpenAI Functions agent, but there are a few other options you can try as well. See this documentation page for more details.
The retrieval examples both use Supabase as a vector store. However, you can swap in another supported vector store if preferred by changing the code under `app/api/chat/retrieval/route.ts`.
For Supabase, follow these instructions to set up your database, then get your database URL and private key and paste them into `.env.local`.
You can then switch to the `Retrieval` and `Retrieval Agent` examples. The default document text is pulled from the LangChain.js retrieval use case docs, but you can change it to whatever text you'd like.
For a given text, you'll only need to press `Upload` once. Pressing it again will re-ingest the docs, resulting in duplicates.
You can clear your Supabase vector store by navigating to the console and running `DELETE FROM documents;`.
After splitting, embedding, and uploading some text, you're ready to ask questions!
For more info on retrieval chains, see this page. The specific variant of the conversational retrieval chain used here is composed using LangChain Expression Language, which you can read more about here.
For more info on retrieval agents, see this page.
The example chains in the `app/api/chat/route.ts` and `app/api/chat/retrieval/route.ts` files use LangChain Expression Language to compose different LangChain modules together. You can integrate other retrievers, agents, preconfigured chains, and more too, though keep in mind that `BytesOutputParser` is meant to be used directly with model output.
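A hypothetical composition illustrating that ordering constraint (the parser consumes raw model output, so it sits directly after the model; import paths vary by LangChain.js version):

```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { BytesOutputParser } from "@langchain/core/output_parsers";

// .pipe() chains runnables left to right: prompt -> model -> parser.
// BytesOutputParser goes last, turning model output into a byte stream
// suitable for streaming HTTP responses.
const chain = PromptTemplate.fromTemplate("Answer briefly: {question}")
  .pipe(new ChatOpenAI({ temperature: 0 }))
  .pipe(new BytesOutputParser());
```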
To learn more about what you can do with LangChain.js, check out the docs here:
When ready, you can deploy your app on the Vercel Platform.
Check out the Next.js deployment documentation for more details.