Build a Next.js AI Chat App with the OpenAI API (Step-by-Step Guide)

Learn how to build a Next.js OpenAI chat application using the OpenAI API with secure server-side calls, a simple chat UI, and production-ready best practices.

Building a Next.js OpenAI chat application is one of the most practical ways to learn modern full-stack patterns: server actions or API routes, secure secret handling, streaming UI updates, and production-ready deployment. In this guide, you’ll create a simple AI chat app with Next.js (App Router) that sends user messages to the OpenAI API and renders assistant replies—optionally streamed for a smoother chat experience.

You’ll also learn the key security and architecture decisions that keep your OpenAI API key safe, reduce latency, and make your chat UI feel responsive.

What You’ll Build

  • A Next.js App Router project with a chat UI
  • A server-side endpoint that calls the OpenAI API (no API key in the browser)
  • Conversation state stored in the client (with a clean message format)
  • Optional streaming responses for real-time output
  • Basic production considerations (rate limiting, validation, logging)

Prerequisites

  • Node.js (current LTS recommended)
  • Basic React and Next.js familiarity
  • An OpenAI API key from the OpenAI platform
  • A code editor (VS Code or similar)

1) Create the Next.js Project

Create a new Next.js project using the App Router. When prompted, you can choose TypeScript (recommended) and Tailwind CSS if you want quick styling.

npx create-next-app@latest nextjs-openai-chat
cd nextjs-openai-chat
npm run dev

Open http://localhost:3000 to confirm everything runs.

2) Install the OpenAI SDK

Install the official OpenAI JavaScript SDK. This lets you call the OpenAI API from your server-side code.

npm install openai

3) Add Your OpenAI API Key (Securely)

Never expose your OpenAI API key in client-side code. Store it in an environment variable and only use it on the server.

Create a .env.local file in the project root:

OPENAI_API_KEY=your_api_key_here

Next.js automatically loads .env.local in development. In production (Vercel, etc.), set the same variable in your deployment environment settings.
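As a safeguard, you can fail fast at startup instead of letting the OpenAI client fail later with a confusing 401. A minimal sketch (the helper name `requireEnv` is our own, not part of Next.js or the SDK):

```typescript
// Read a required environment variable, throwing early with a clear
// message if it is missing or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage in server-only code (never in a client component):
// const apiKey = requireEnv('OPENAI_API_KEY');
```

Calling this once where you construct the OpenAI client turns a silent misconfiguration into an immediate, readable error.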

4) Define a Simple Message Type

A chat app is easier to maintain with a consistent message structure. For the OpenAI Chat Completions API, messages typically include a role and content.

// types/chat.ts
export type ChatMessage = {
  role: 'system' | 'user' | 'assistant';
  content: string;
};
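If you prefer a reusable runtime check over the inline validation shown later in the route handler, a type guard works well. This is a sketch, and the helper name `isChatMessage` is our own:

```typescript
type ChatMessage = {
  role: 'system' | 'user' | 'assistant';
  content: string;
};

const VALID_ROLES = ['system', 'user', 'assistant'] as const;

// Narrow an unknown value (e.g. parsed JSON) to ChatMessage at runtime.
function isChatMessage(value: unknown): value is ChatMessage {
  if (typeof value !== 'object' || value === null) return false;
  const m = value as Record<string, unknown>;
  return (
    typeof m.content === 'string' &&
    m.content.trim().length > 0 &&
    typeof m.role === 'string' &&
    (VALID_ROLES as readonly string[]).includes(m.role)
  );
}
```

A guard like this keeps the TypeScript type and the runtime validation in one place, so they cannot drift apart.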

5) Create a Server Route that Calls OpenAI

With the Next.js App Router, you can create a Route Handler under app/api. This endpoint receives chat messages from the browser and calls OpenAI from the server.

Create app/api/chat/route.ts:

// app/api/chat/route.ts
import { NextResponse } from 'next/server';
import OpenAI from 'openai';
import type { ChatMessage } from '@/types/chat';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  try {
    const body = (await req.json()) as { messages?: ChatMessage[] };

    if (!body.messages || !Array.isArray(body.messages)) {
      return NextResponse.json({ error: 'Invalid payload: messages[] is required.' }, { status: 400 });
    }

    // Basic server-side validation (keep it minimal but real)
    for (const m of body.messages) {
      if (!m || typeof m.content !== 'string' || !m.content.trim()) {
        return NextResponse.json({ error: 'Each message must have non-empty content.' }, { status: 400 });
      }
      if (!['system', 'user', 'assistant'].includes(m.role)) {
        return NextResponse.json({ error: 'Invalid role in messages.' }, { status: 400 });
      }
    }

    // NOTE: Model names evolve over time. Use a model available to your account.
    // Refer to OpenAI docs for current recommended chat models.
    const completion = await client.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: body.messages,
    });

    const text = completion.choices?.[0]?.message?.content ?? '';
    return NextResponse.json({ text });
  } catch (err) {
    return NextResponse.json(
      { error: 'Failed to generate response.' },
      { status: 500 }
    );
  }
}

Why this matters: your browser calls /api/chat, but only the server calls OpenAI. That keeps your key private and lets you add validation, rate limiting, and logging later.
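As one example of a guardrail you can add at this layer, here is a minimal in-memory per-key rate limiter. It is a sketch under stated assumptions: it only works on a single server instance (state resets on redeploy and is not shared), so a real deployment would use Redis or a hosted limiter instead. The names and limits are illustrative:

```typescript
// Sliding-window rate limiter keyed by caller identity (e.g. IP address).
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 20;  // max requests per key per window

const hits = new Map<string, number[]>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  // Keep only timestamps that still fall inside the window.
  const recent = (hits.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(key, recent);
    return false; // caller should respond with HTTP 429
  }
  recent.push(now);
  hits.set(key, recent);
  return true;
}
```

In the route handler you would derive `key` from the request (for example a forwarded IP header or an authenticated user ID) and return a 429 response when `allowRequest` is false.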

6) Build the Chat UI (Client Component)

Now create a simple chat page that stores conversation state in React and posts messages to your API route.

Create app/page.tsx:

// app/page.tsx
'use client';

import { useMemo, useState } from 'react';
import type { ChatMessage } from '@/types/chat';

export default function Home() {
  const [messages, setMessages] = useState<ChatMessage[]>([
    {
      role: 'system',
      content: 'You are a helpful assistant. Keep responses concise.',
    },
  ]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const visibleMessages = useMemo(
    () => messages.filter((m) => m.role !== 'system'),
    [messages]
  );

  async function sendMessage() {
    const text = input.trim();
    if (!text || loading) return;

    const nextMessages: ChatMessage[] = [...messages, { role: 'user', content: text }];
    setMessages(nextMessages);
    setInput('');
    setLoading(true);

    try {
      const res = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: nextMessages }),
      });

      if (!res.ok) {
        const err = await res.json().catch(() => ({}));
        throw new Error(err?.error || 'Request failed');
      }

      const data = (await res.json()) as { text: string };
      setMessages((prev) => [...prev, { role: 'assistant', content: data.text || '' }]);
    } catch (e: any) {
      setMessages((prev) => [
        ...prev,
        {
          role: 'assistant',
          content: `Error: ${e?.message ?? 'Something went wrong.'}`,
        },
      ]);
    } finally {
      setLoading(false);
    }
  }

  return (
    <main style={{ maxWidth: 800, margin: '40px auto', padding: 16 }}>
      <h1 style={{ fontSize: 28, fontWeight: 700, marginBottom: 12 }}>
        Next.js OpenAI Chat Application
      </h1>

      <div
        style={{
          border: '1px solid #e5e7eb',
          borderRadius: 12,
          padding: 16,
          minHeight: 320,
          marginBottom: 12,
        }}
      >
        {visibleMessages.length === 0 ? (
          <p style={{ color: '#6b7280' }}>Ask a question to start the conversation.</p>
        ) : (
          visibleMessages.map((m, i) => (
            <div key={i} style={{ marginBottom: 12 }}>
              <div style={{ fontSize: 12, color: '#6b7280', marginBottom: 4 }}>
                {m.role === 'user' ? 'You' : 'Assistant'}
              </div>
              <div style={{ whiteSpace: 'pre-wrap', lineHeight: 1.5 }}>{m.content}</div>
            </div>
          ))
        )}
      </div>

      <div style={{ display: 'flex', gap: 8 }}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => {
            if (e.key === 'Enter') sendMessage();
          }}
          placeholder="Type your message…"
          style={{
            flex: 1,
            border: '1px solid #e5e7eb',
            borderRadius: 10,
            padding: '10px 12px',
          }}
        />
        <button
          onClick={sendMessage}
          disabled={loading}
          style={{
            borderRadius: 10,
            padding: '10px 14px',
            border: '1px solid #111827',
            background: loading ? '#9ca3af' : '#111827',
            color: 'white',
            cursor: loading ? 'not-allowed' : 'pointer',
          }}
        >
          {loading ? 'Sending…' : 'Send'}
        </button>
      </div>

      <p style={{ marginTop: 12, color: '#6b7280', fontSize: 13 }}>
        Tip: Keep the system message for behavior control, but render only user/assistant messages in the UI.
      </p>
    </main>
  );
}

7) Optional: Stream Responses for a Real Chat Feel

Streaming makes the assistant text appear progressively, which feels faster and more “chat-like.” In Next.js, streaming can be implemented by returning a streamed response from the route handler and reading it in the client as chunks. The OpenAI API supports streaming for chat completions, but the exact streaming implementation depends on the SDK version and the response format you choose.

Because streaming APIs can change over time, follow the current OpenAI SDK documentation for “streaming chat completions” and then adapt your Next.js route to return a ReadableStream to the client. Once you have a stream, update the last assistant message incrementally as chunks arrive.
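On the client side, whatever the server sends, the reading loop boils down to consuming a ReadableStream of text chunks and appending each one to the last assistant message. A framework-agnostic sketch of that loop (the `onChunk` callback is our own abstraction, not a Next.js or OpenAI API):

```typescript
// Consume a ReadableStream of UTF-8 bytes, invoking onChunk for each
// decoded piece of text and returning the full accumulated string.
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```

In the chat UI, `onChunk` would call `setMessages` to update the content of the last assistant message, so text appears progressively as chunks arrive.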

8) Common Issues (and How to Fix Them)

  • 401/Unauthorized: Confirm OPENAI_API_KEY is set in .env.local and that you restarted the dev server after adding it.
  • API key exposed: If you see the key in the browser bundle, you accidentally used it in a client component. Move OpenAI calls to a server route or server action.
  • CORS confusion: When calling OpenAI, do it server-side. Your browser should call your Next.js endpoint instead.
  • Model not found: Use a model that’s available to your account and supported by the endpoint you’re calling. Check OpenAI docs for current model names and recommendations.
  • Slow responses: Reduce message history, shorten the system prompt, or add UI streaming.

9) Production Hardening Checklist

A demo is easy; a reliable Next.js OpenAI chat application needs guardrails:

  • Input validation: Enforce max message length and max history size server-side.
  • Rate limiting: Add per-IP or per-user limits to prevent abuse.
  • Authentication: Tie usage to signed-in users if the app is public.
  • Logging and monitoring: Record request IDs, latency, and errors (avoid logging sensitive user content unless you have a clear policy).
  • Timeouts and retries: Handle slow upstream responses gracefully.
  • Cost controls: Limit history size and consider summarizing older context.
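The last item on the checklist can be a one-line helper that keeps the system prompt plus only the most recent turns before sending history to OpenAI. A sketch (the cutoff of 10 is arbitrary; tune it to your prompt budget):

```typescript
type ChatMessage = {
  role: 'system' | 'user' | 'assistant';
  content: string;
};

// Keep all system messages plus the last `maxTurns` user/assistant
// messages, bounding both token cost and latency.
function trimHistory(messages: ChatMessage[], maxTurns = 10): ChatMessage[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-maxTurns)];
}
```

Calling `trimHistory` server-side, right before the OpenAI request, keeps clients honest even if the browser sends the full conversation. For long conversations you can go further and summarize the dropped turns into a single system message.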

10) Next Steps: Make It Your Own

  • Add conversation persistence with a database (store messages per user/session).
  • Use file uploads or retrieval to answer questions from your own documents (RAG).
  • Add “regenerate response” and “stop generating” controls.
  • Improve UI with Markdown rendering and code formatting.
  • Deploy to Vercel and set OPENAI_API_KEY in project environment variables.

With this foundation, you have a working Next.js OpenAI chat application that follows the most important rule: keep the OpenAI API key on the server. From here, you can add streaming, user accounts, and persistence to turn a simple demo into a real product feature.

Last Updated 1/14/2026