
Dec 22, 2025
Next.js doesn’t provide AI models out of the box — and that’s perfectly fine.
What it does provide is a strong, secure environment to run AI-powered features safely and efficiently:
- Server-only execution using Route Handlers
- Streaming responses for real-time output
- Support for both Edge and Node runtimes
- Secure environment variables for API keys
- A React-first rendering experience
With this foundation in place, integrating AI into a Next.js application becomes straightforward. The actual intelligence comes from external providers such as OpenAI, Anthropic, or Gemini, while SDKs handle the connection between your app and these models.
This is where AI starts to change the product experience.
Modern users no longer see AI as a bonus feature — they expect it as part of the application itself:
- Chat-like interfaces for instant interaction
- Smart assistants that improve productivity
- AI-powered forms that reduce manual effort
- Real-time suggestions that feel contextual and helpful
If you’re building with Next.js, adding these capabilities is easier than ever — as long as you choose the right tools and follow the right patterns.
In this article, we’ll look at how to implement AI in Next.js using:
- Vercel AI SDK for smooth, streaming UI experiences
- TanStack AI for scalable, type-safe AI logic
No hype. Just practical implementation ideas you can use in real projects.
Before integrating AI features, you’ll need access to an AI provider.
Start by creating an account with OpenAI, then generate an API key from the dashboard. This key allows your application to securely communicate with the AI model.
Once you have the key, add it to your project's `.env.local` file:

```
OPENAI_API_KEY=your_openai_api_key_here
```
This API key is used only on the server, ensuring it is never exposed to the client. Next.js automatically makes environment variables available to server-side code such as Route Handlers.
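Because the key is only ever read on the server, it is worth failing fast when it is missing instead of letting the first request blow up. A minimal sketch of that guard (`requireApiKey` is a hypothetical helper, not a Next.js API):

```typescript
// Hypothetical helper (not part of Next.js): read the key from the
// environment and fail fast if it is missing, so a misconfigured
// deployment surfaces a clear error instead of a cryptic provider one.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}

// In a Route Handler you would call requireApiKey(process.env).
```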
Implementing AI with the Vercel AI SDK
If your goal is to add AI features quickly — especially chat-style interfaces — Vercel AI SDK is one of the easiest ways to get started in a Next.js app.
It’s designed to work seamlessly with the Next.js App Router and abstracts away most of the hard parts of building AI-powered UIs.
Instead of worrying about low-level streaming logic, you focus on UI and product behavior.
With Vercel AI SDK, AI logic lives on the server using Route Handlers.
```tsx
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, convertToModelMessages, type UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // streamText starts the model call and returns a streaming result
  const result = streamText({
    model: openai("gpt-4o-mini"),
    messages: convertToModelMessages(messages),
  });

  // Stream the response back in the format useChat expects
  return result.toUIMessageStreamResponse();
}
```
On the client, Vercel AI SDK provides a high-level hook that handles everything for you.
```tsx
// app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              default:
                return null;
            }
          })}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
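The part-based message shape is worth internalizing: each message is a list of typed parts, and the component above renders only the text ones. The same idea as a small pure function (`renderParts` and the `Part` union here are illustrative, not SDK exports):

```typescript
// Illustrative helper, not part of the AI SDK: collect the text parts of
// a message into one display string, ignoring any other part types.
type Part = { type: "text"; text: string } | { type: "reasoning" };

function renderParts(parts: Part[]): string {
  return parts
    .filter((p): p is { type: "text"; text: string } => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```

Keeping this logic pure makes it trivial to extend the switch in the component with new part types later.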
Implementing AI with TanStack AI
TanStack AI takes a very different approach compared to high-level AI SDKs. Instead of giving you a ready-made chat and hiding complexity behind magic hooks, it gives you explicit, type-safe building blocks to design AI features exactly the way your application needs.
Think of it as:
“TanStack Query, but for AI.”
TanStack AI focuses on:
- End-to-end TypeScript safety across prompts, responses, and tools
- Streaming as a first-class concept, not an afterthought
- Tool and function calling for real application logic
- Provider-agnostic design, so you can switch models freely
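“Streaming as a first-class concept” usually means the model's output arrives as a sequence of typed chunks rather than one finished string. A minimal sketch of that idea (the `Chunk` union and `foldChunks` below are illustrative, not TanStack AI's actual wire format):

```typescript
// Illustrative types, not TanStack AI's real ones: a stream is a sequence
// of typed events, and a reducer folds them into the text shown on screen.
type Chunk =
  | { type: "text-delta"; delta: string }
  | { type: "done" };

function foldChunks(chunks: Chunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    // Only text deltas contribute to the visible output
    if (chunk.type === "text-delta") text += chunk.delta;
  }
  return text;
}
```

The payoff of the discriminated union is that TypeScript forces you to handle each chunk type explicitly, which is what "end-to-end type safety" buys you in practice.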
With TanStack AI, AI logic lives cleanly on the server, where it belongs.
```tsx
// app/api/chat/route.ts
import { chat, toStreamResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

export async function POST(request: Request) {
  // Check for API key
  if (!process.env.OPENAI_API_KEY) {
    return new Response(
      JSON.stringify({ error: "OPENAI_API_KEY not configured" }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }

  const { messages, conversationId } = await request.json();

  try {
    // Create a streaming chat response
    const stream = chat({
      adapter: openaiText("gpt-4o"),
      messages,
      conversationId,
    });

    // Convert the stream to an HTTP response
    return toStreamResponse(stream);
  } catch (error) {
    return new Response(
      JSON.stringify({
        error: error instanceof Error ? error.message : "An error occurred",
      }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }
}
```
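The route above builds the same JSON error `Response` twice. A small helper keeps that pattern in one place (`jsonError` is a hypothetical utility, not part of `@tanstack/ai`):

```typescript
// Hypothetical helper, not a library API: build a JSON error Response
// with a given status, matching the shape the route above returns.
function jsonError(message: string, status = 500): Response {
  return new Response(JSON.stringify({ error: message }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

// Usage inside the route:
//   if (!process.env.OPENAI_API_KEY) return jsonError("OPENAI_API_KEY not configured");
```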
On the client side, TanStack AI gives you headless hooks — no UI assumptions.
```tsx
// components/Chat.tsx
"use client";

import { useState } from "react";
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";

export function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim() || isLoading) return;
    sendMessage(input);
    setInput("");
  };

  return (
    <div className="flex flex-col h-screen">
      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((message) => (
          <div key={message.id} className="mb-4">
            <div className="font-semibold mb-1">
              {message.role === "assistant" ? "Assistant" : "You"}
            </div>
            {message.parts.map((part, idx) => {
              if (part.type === "thinking") {
                return (
                  <div key={idx} className="text-sm text-gray-500 italic mb-2">
                    💭 Thinking: {part.content}
                  </div>
                );
              }
              if (part.type === "text") {
                return <div key={idx}>{part.content}</div>;
              }
              return null;
            })}
          </div>
        ))}
      </div>
      {/* Input */}
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Type a message..."
            className="flex-1 px-4 py-2 border rounded-lg"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={!input.trim() || isLoading}
            className="px-6 py-2 bg-blue-600 text-white rounded-lg disabled:opacity-50"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  );
}
```
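Notice that the component gates submission in two places: `handleSubmit` and the button's `disabled` prop. Pulling that rule into a pure function keeps the two in sync and makes it testable (`canSend` is an illustrative helper, not a library export):

```typescript
// Illustrative helper, not part of @tanstack/ai-react: a message may be
// sent only when the input has non-whitespace content and no request is
// already in flight.
function canSend(input: string, isLoading: boolean): boolean {
  return input.trim().length > 0 && !isLoading;
}
```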
Implementing AI in Next.js is no longer complex. With the right tools, you can build secure, streaming, and scalable AI features while keeping your application clean and maintainable. Whether you start simple or build advanced workflows, Next.js provides a solid foundation for modern AI-powered experiences.
