A typed and extensible TypeScript library for creating and running Iterative AI Agents that guarantee structured JSON output.
Structured JSON Agent provides a safe and unified interface for working with various Large Language Models (LLMs), ensuring structured JSON responses every time. It abstracts the complexity of different providers, making it incredibly easy to test, swap, and compare models like OpenAI, Google Gemini, Anthropic Claude, and DeepSeek without changing your core logic.
Under the hood, it leverages native structured output capabilities where available—such as OpenAI's Structured Outputs, Google GenAI's JSON mode, and Anthropic's experimental features. For models that don't natively support strict JSON schemas, the library offers a robust fallback: simply configure a Reviewer. The agent will automatically detect validation errors and engage the reviewer model to correct the output, guaranteeing type safety and reliability across any provider.
- Enforces strict adherence to Zod schemas using a Generator ↔ Reviewer cycle.
- Built-in adapters for OpenAI, Google GenAI (Gemini), Anthropic (Claude), and DeepSeek.
- Leverages native structured output capabilities (e.g., OpenAI Structured Outputs, Anthropic Beta).
- Built with TypeScript and Zod for full type inference and safety from input to output.
- Automatically detects validation errors and feeds them back to the model to fix the output.
- Mix and match different providers for the generation and review phases.
To install the library, run:
npm install structured-json-agent
You need a Node.js version that supports ES modules. The library is tested with Node.js 18 and above.
To create a schema, you need to install zod:
npm install zod
You need to inject an LLM service instance into the agent. You can use openai, @anthropic-ai/sdk, or @google/genai; each package has its own installation instructions.
Note: the OpenAI client also works with OpenAI-compatible providers such as DeepSeek; just pass the provider's base URL when initializing the instance.
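For example, here is a configuration sketch pointing the OpenAI client at DeepSeek's OpenAI-compatible endpoint (DEEPSEEK_API_KEY is a hypothetical environment variable name; check DeepSeek's documentation for the current base URL and model IDs):

```typescript
import OpenAI from "openai";

// The standard OpenAI client, pointed at DeepSeek's OpenAI-compatible API.
const deepseekInstance = new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY,
});
```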
The library provides an adapter for each supported provider (OpenAI, Anthropic, Google Gemini). You can also implement your own LLM service by following the library's ILLMService interface.
First, define your input and output schemas using zod. These schemas will guide the structure of the JSON output.
import { z } from "zod";
// Input Schema
const inputSchema = z.object({
topic: z.string(),
depth: z.enum(["basic", "advanced"])
});
// Output Schema
const outputSchema = z.object({
title: z.string(),
keyPoints: z.array(z.string()),
summary: z.string()
});
Next, initialize instances of the LLM services you plan to use. For example, if you're using OpenAI and Anthropic:
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
const openAiInstance = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const anthropicInstance = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
import { StructuredAgent } from "structured-json-agent";
// Initialize the Agent
const myAgent = new StructuredAgent({
// Generator Configuration
generator: {
llmService: openAiInstance,
model: "gpt-4o-mini",
},
// Reviewer Configuration (Optional)
reviewer: {
llmService: anthropicInstance,
model: "claude-3-5-sonnet-20240620",
},
// Schemas & Prompt
inputSchema,
outputSchema,
systemPrompt: `You are an expert summarizer.
Create a structured summary based on the topic.`,
});
To run the agent, call the run() method with your input data. It returns a promise that resolves to a result object containing the parsed, schema-validated output.
async function main() {
try {
const result = await myAgent.run({
topic: "Clean Architecture",
depth: "advanced"
});
console.log("Result:", result.output); // typed as inferred
// from outputSchema
console.log("Metadata:", result.metadata); // array of objects, each with the
// provider, model, input tokens, output tokens,
// step, and validation
} catch (error) {
console.error("Agent failed:", error);
}
}
main();
You can also pass a reference to the agent run, which will be included in the metadata.
async function main() {
try {
const result = await myAgent.run({
topic: "Clean Architecture",
depth: "advanced"
}, "my-ref-123"); // Pass a reference to the agent run
console.log("Result:", result.output);
console.log("Metadata:", result.metadata);
console.log("Ref:", result.ref); // Ref is "my-ref-123"
} catch (error) {
console.error("Agent failed:", error);
}
}
main();
In TypeScript, you can call run<T>(), where T is the type of the output schema.
The AgentResult type is defined as:
export interface AgentResult<T> {
output: T;
metadata: MetadataAgentResult[];
ref?: string | number;
}
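To illustrate how the generic ties the pieces together, here is a self-contained sketch. The interfaces are abbreviated copies of the ones documented in this README, and the Summary type and all values are hypothetical:

```typescript
// Abbreviated copies of the library's result interfaces, for illustration only.
interface MetadataAgentResult {
  provider: string;
  model: string;
  step: string;
}

interface AgentResult<T> {
  output: T;
  metadata: MetadataAgentResult[];
  ref?: string | number;
}

// Hypothetical output type matching the outputSchema from the earlier example.
interface Summary {
  title: string;
  keyPoints: string[];
  summary: string;
}

const result: AgentResult<Summary> = {
  output: { title: "Clean Architecture", keyPoints: ["Dependency rule"], summary: "..." },
  metadata: [{ provider: "openai", model: "gpt-4o-mini", step: "generation" }],
  ref: "my-ref-123",
};

console.log(result.output.title); // typed access: result.output is a Summary
```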
Configuration object passed to new StructuredAgent(config).
| Property | Type | Description |
|---|---|---|
| `generator` | `LLMConfig` | Configuration for the generation model. |
| `reviewer` | `LLMConfig?` | Configuration for the reviewer model (optional). |
| `inputSchema` | `ZodSchema` | Zod schema for validating the input. |
| `outputSchema` | `ZodSchema` | Zod schema for the expected output. |
| `systemPrompt` | `string` | Core instructions for the agent. |
| `maxIterations` | `number?` | Max retries for correction. Default: 5. |
Configuration object for LLM models.
| Property | Type | Description |
|---|---|---|
| `llmService` | `OpenAI \| GoogleGenAI \| Anthropic \| ILLMService` | The provider instance or custom service. |
| `model` | `string` | Model ID (e.g., `gpt-4o`, `claude-3-5-sonnet`). |
| `config` | `ModelConfig?` | Optional parameters (temperature, max_tokens, etc.). |
Configuration options for fine-tuning the model's behavior.
| Property | Type | Description |
|---|---|---|
| `temperature` | `number?` | Controls randomness (0-2). |
| `top_p` | `number?` | Nucleus sampling (0-1). |
| `max_tokens` | `number?` | Max tokens to generate. |
| `presence_penalty` | `number?` | Penalizes new tokens based on presence (-2 to 2). |
| `frequency_penalty` | `number?` | Penalizes new tokens based on frequency (-2 to 2). |
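For example, a ModelConfig object combining these options might look like this (all values are illustrative; tune them for your use case):

```typescript
// Illustrative ModelConfig values.
const modelConfig = {
  temperature: 0.2,        // low randomness for consistent structure (range 0 to 2)
  top_p: 0.9,              // nucleus sampling (range 0 to 1)
  max_tokens: 1024,        // cap on generated tokens
  frequency_penalty: 0.5,  // discourage repetition (range -2 to 2)
};

console.log(modelConfig.temperature);
```

It would be passed as the `config` field of an `LLMConfig` entry, for example inside `generator`.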
Interface for the result of an agent run.
| Property | Type | Description |
|---|---|---|
| `output` | `T` | The parsed output of the agent run. |
| `metadata` | `MetadataAgentResult[]` | Array of metadata records. |
| `ref` | `string \| number?` | Optional reference ID for tracking. |
Interface for the metadata records included in the agent result.
| Property | Type | Description |
|---|---|---|
| `provider` | `string` | The name of the provider (e.g., `openai`). |
| `model` | `string` | The name of the model used (e.g., `gpt-5-nano`). |
| `inputTokens` | `number?` | Number of tokens in the input prompt. |
| `outputTokens` | `number?` | Number of tokens in the output. |
| `step` | `string` | The name of the step (e.g., `generation`, `review-1`, `review-2`). |
| `validation` | `ValidationResult?` | Validation result for the output. |
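As a concrete illustration, a single metadata record could look like this (all values are hypothetical):

```typescript
// Hypothetical metadata record matching the fields described above.
const record = {
  provider: "openai",
  model: "gpt-4o-mini",
  inputTokens: 120,   // tokens in the prompt
  outputTokens: 85,   // tokens generated
  step: "generation", // first pass; reviewer passes would be review-1, review-2, ...
};

console.log(record.step);
```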
Interface for implementing custom LLM adapters, if you want to use a different provider.
interface ILLMService {
complete(params: {
messages: ChatMessage[];
model: string;
config?: ModelConfig;
outputFormat?: ZodSchema;
}): Promise<ResponseComplete>;
}
interface ResponseComplete {
data: string;
meta?: {
provider: string;
model: string;
inputTokens?: number;
outputTokens?: number;
};
}
interface ChatMessage {
role: "system" | "user" | "assistant";
content: string;
}
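As a sketch of what an adapter can look like, here is a minimal in-memory provider implementing a simplified version of ILLMService. The EchoProvider class and its canned JSON response are hypothetical; a real adapter would call your provider's API inside complete():

```typescript
// Simplified copies of the interfaces above (config and outputFormat omitted).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ResponseComplete {
  data: string;
  meta?: { provider: string; model: string; inputTokens?: number; outputTokens?: number };
}

interface ILLMService {
  complete(params: { messages: ChatMessage[]; model: string }): Promise<ResponseComplete>;
}

// Hypothetical adapter: echoes the last message back as a JSON string.
// A real implementation would send params.messages to the provider's API here.
class EchoProvider implements ILLMService {
  async complete(params: { messages: ChatMessage[]; model: string }): Promise<ResponseComplete> {
    const last = params.messages[params.messages.length - 1];
    return {
      data: JSON.stringify({ echoed: last.content }),
      meta: { provider: "custom", model: params.model },
    };
  }
}

const provider = new EchoProvider();
provider
  .complete({ messages: [{ role: "user", content: "hello" }], model: "demo-model" })
  .then((res) => console.log(res.data));
```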
Then use it in the agent configuration:
import { StructuredAgent } from "structured-json-agent";
import MyCustomProvider from "./my-custom-provider";
const myAgent = new StructuredAgent({
generator: {
llmService: MyCustomProvider, // Instance of your custom provider
model: "gpt-4o-mini",
},
inputSchema,
outputSchema,
systemPrompt: "...",
});
Structured JSON Agent is an open-source project. We welcome contributions, bug reports, and feature requests to make AI agents more reliable and structured.