Md. Maruf Rahman

Posted on • Edited on • Originally published at practicaldev.online

AI Integration Tutorials 2026 | TanStack AI & Claude

Instead of clicking through a CRUD admin panel, imagine asking a chatbot to "add a new post, mark my first todo as done, and show me the updated tables". In this tutorial, we'll build that exact experience with TanStack Start, TanStack AI, Anthropic Claude, and a simple json-server backend.

Traditional admin dashboards do the job, but they rarely feel delightful. For simple operations—like checking a couple of posts or updating a todo status—you often have to dig through several screens and forms. With an AI-powered assistant, you flip that around: users describe what they want in natural language, and the assistant decides which tools to call behind the scenes.

What the Assistant Can Do

The demo uses a json-server backend backed by a simple db.json file. The assistant can:

  • List, search, create, update, and delete posts (title + views)
  • Manage comments for each post
  • Manage todos with completion status
  • Read and update a small profile document
  • Set a browser-side counter stored in localStorage

All of that is accessible from a single chat box. The user doesn't need to know anything about endpoints or payload shapes—the AI agent takes care of calling the right tools with the right arguments.
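For reference, a minimal db.json that supports these tools might look like the following. The posts shape (id, title, views) comes from the tool schema later in this post; the field names for comments, todos, and the profile are assumptions, so adjust them to your own schema:

```json
{
  "posts": [{ "id": "1", "title": "Hello TanStack AI", "views": 120 }],
  "comments": [{ "id": "1", "postId": "1", "text": "Great post!" }],
  "todos": [{ "id": "1", "text": "Try the chat assistant", "completed": false }],
  "profile": { "name": "Demo User", "bio": "Testing the AI assistant" }
}
```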

Architecture Overview

The architecture is surprisingly simple:

  1. Backend: A single /api/chat route powered by @tanstack/ai and the anthropicText adapter
  2. Tools: Tool definitions that wrap json-server endpoints using Zod schemas
  3. Frontend: A React chat UI using @tanstack/ai-react with Server-Sent Events
  4. LLM: Claude Haiku handles natural language understanding and tool selection
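The streamed reply in step 3 travels as Server-Sent Events. You never parse these by hand — @tanstack/ai-react handles it — but a toy parser helps demystify the framing (this is illustrative only, not the library's implementation):

```typescript
// Each SSE event is one or more "data: ..." lines followed by a blank line.
// This simplified parser extracts the data payloads from a raw chunk of the stream.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}

// Example: two token-delta events as they might appear on the wire.
const raw = 'data: {"delta":"Hel"}\n\ndata: {"delta":"lo"}\n\n';
console.log(parseSseData(raw)); // logs both JSON payload strings
```

Each payload is a JSON fragment the client merges into the in-progress assistant message, which is what makes the reply appear token by token.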

Backend: Tools Over a json-server API

On the backend side we don't need a giant framework. A single /api/chat route, powered by @tanstack/ai and the anthropicText adapter, is enough. All the real work happens in tool definitions that wrap the json-server endpoints.

Installation

npm install @tanstack/ai @tanstack/ai-anthropic @tanstack/react-router zod
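You'll also need json-server running on port 4000 (to match the API_BASE_URL used below) and your Anthropic key in the environment. A typical setup, assuming a db.json in the project root (exact flags may differ between json-server versions):

```shell
npm install -D json-server
npx json-server db.json --port 4000
export ANTHROPIC_API_KEY="sk-ant-..."
```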

Chat API Route

Here's the complete chat API route:

import { chat, toServerSentEventsResponse, toolDefinition } from "@tanstack/ai";
import { anthropicText } from "@tanstack/ai-anthropic";
import { createFileRoute } from "@tanstack/react-router";
import z from "zod";

const API_BASE_URL = "http://localhost:4000";

export const Route = createFileRoute("/api/chat")({
  server: { handlers: { POST } },
});

export async function POST({ request }: { request: Request }) {
  if (!process.env.ANTHROPIC_API_KEY) {
    return new Response(
      JSON.stringify({ error: "ANTHROPIC_API_KEY not configured" }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }

  const body = await request.json();
  const rawMessages = Array.isArray(body.messages) ? body.messages : [];
  const messages = rawMessages.map(cleanMessage).filter(Boolean);

  const systemMessage = {
    role: "system" as const,
    content:
      "You are a precise, professional assistant embedded in a demo dashboard. " +
      "Use the available tools to read and update posts, comments, todos, the profile, and the counter. " +
      "After using tools, always send a clear natural-language summary for the user. " +
      "Format tabular data as Markdown tables and keep answers concise and copy-friendly.",
  };

  const stream = chat({
    adapter: anthropicText("claude-haiku-4-5"),
    messages: [systemMessage, ...messages],
    tools: [
      listPostsTool,
      addPostTool,
      editPostTool,
      deletePostTool,
      listTodosTool,
      addTodoTool,
      editTodoTool,
      deleteTodoTool,
      getProfileTool,
      updateProfileTool,
      updateCounterToolDef,
    ],
  });

  return toServerSentEventsResponse(stream);
}

Tool Definitions

Tools for posts, todos, and profile follow the same pattern: describe input and output with Zod, then implement a small server function that talks to json-server. Here's an example for posts:

const listPostsToolDef = toolDefinition({
  name: "list_posts",
  description: "Fetch all posts from json-server. Can optionally filter by search query.",
  inputSchema: z.object({
    query: z.string().optional(),
  }),
  outputSchema: z.array(
    z.object({
      id: z.string(),
      title: z.string(),
      views: z.number(),
    })
  ),
});

const listPostsTool = listPostsToolDef.server(async ({ query }: { query?: string }) => {
  const url = new URL(API_BASE_URL + "/posts");
  if (query) url.searchParams.set("q", query);
  const response = await fetch(url.toString());
  if (!response.ok) {
    throw new Error("Failed to fetch posts: " + response.statusText);
  }
  return response.json();
});
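The other tools vary only in HTTP method and path. For instance, an edit_todo tool would PATCH /todos/:id with just the fields the user mentioned. The request it ends up issuing can be sketched as a plain function (a hypothetical helper for illustration — in the real app this logic lives inside the tool's .server handler):

```typescript
// Describes the json-server request an edit_todo tool would make.
// Only fields actually provided are included in the PATCH body, so
// json-server leaves the other fields of the todo untouched.
function buildEditTodoRequest(
  id: string,
  changes: { text?: string; completed?: boolean }
): { method: string; url: string; body: string } {
  const body: Record<string, unknown> = {};
  if (changes.text !== undefined) body.text = changes.text;
  if (changes.completed !== undefined) body.completed = changes.completed;
  return {
    method: "PATCH",
    url: "http://localhost:4000/todos/" + id,
    body: JSON.stringify(body),
  };
}

// "mark my first todo as done" ends up as:
const req = buildEditTodoRequest("1", { completed: true });
console.log(req.method, req.url, req.body);
```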

Frontend: Chat UI with TanStack AI React

On the client we use @tanstack/ai-react to manage the streaming chat state and connect to /api/chat over Server-Sent Events.

"use client";

import { useEffect, useRef, useState } from "react";
import { fetchServerSentEvents, useChat } from "@tanstack/ai-react";
import { clientTools } from "@tanstack/ai-client";

export function Chat() {
  const [input, setInput] = useState("");
  const bottomRef = useRef<HTMLDivElement | null>(null);

  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
    tools: clientTools(updateCounterTool),
    initialMessages,
  });

  const handleSubmit = (event: React.FormEvent) => {
    event.preventDefault();
    if (!input.trim() || isLoading) return;
    sendMessage(input.trim());
    setInput("");
  };

  return (
    <div className="flex flex-col h-screen">
      {/* Chat UI */}
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={input}
          onChange={(event) => setInput(event.target.value)}
          placeholder="Ask the assistant to manage your posts, todos, or profile…"
        />
        <button type="submit" disabled={!input.trim() || isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}

Best Practices

  1. Clear tool descriptions - Write descriptive tool descriptions so the LLM understands when to use them
  2. Zod schemas - Use Zod for input/output validation and type safety
  3. Error handling - Handle errors gracefully in tool implementations
  4. System prompts - Craft clear system prompts that guide the assistant's behavior
  5. Message persistence - Store chat history for better user experience
  6. Streaming - Use Server-Sent Events for real-time responses
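Point 3 deserves a concrete sketch. A tool failure should reach the model as a readable message rather than crashing the stream; one simple approach (a hypothetical wrapper, not part of @tanstack/ai) is to catch and report:

```typescript
// Wraps a tool handler so failures come back as a structured result
// the assistant can relay to the user, instead of an unhandled throw.
async function safeToolCall<T>(
  run: () => Promise<T>
): Promise<{ ok: true; data: T } | { ok: false; error: string }> {
  try {
    return { ok: true, data: await run() };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return { ok: false, error: "Tool failed: " + message };
  }
}

// Usage: a handler that throws becomes a reportable error.
safeToolCall(async () => {
  throw new Error("posts endpoint unreachable");
}).then((result) => {
  console.log(result); // a { ok: false, error: ... } object, not a crash
});
```

With this shape, the LLM can see the error string in the tool result and apologize or retry, instead of the whole SSE stream dying mid-response.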

📖 Read the Complete Guide

This is just a brief overview! The complete guide on my blog includes:

  • Complete Tool Definitions - All tools for posts, todos, and profile
  • Full Chat Component - Complete React chat UI implementation
  • json-server Setup - Database setup and configuration
  • Error Handling - Comprehensive error management
  • Advanced Patterns - Extending the assistant
  • Real-world examples from production applications

👉 Read the full article with all code examples here


What's your experience with AI-powered interfaces? Share your tips in the comments! 🚀

For more React guides, check out my blog covering TanStack Table, React Router, TypeScript, and more.
