AI-UI Renderer

@yashiel/mihcm-ai-ui lets a language model return a UI as a JSON descriptor. The renderer validates the descriptor with Zod, looks up each component in an explicit allowlist, caps recursion depth, and renders via the platform primitives. The same renderer works across React, Next.js, and React Native.
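For illustration, a descriptor might look like the sketch below. The field names (`type`, `props`, `children`, `actionId`) are assumptions for this example, not the package's documented schema:

```typescript
// A hypothetical descriptor the model might return.
// Field names here are illustrative assumptions, not the real schema.
const descriptor = {
  type: 'Stack',
  props: { gap: 8 },
  children: [
    { type: 'Text', props: { value: 'Save your changes?' } },
    { type: 'Button', props: { label: 'Save', actionId: 'save' } },
  ],
};
```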

Server (Vercel AI SDK 6)

import { streamText } from 'ai';
import { renderUITool } from '@yashiel/mihcm-ai-ui/tools';
 
const result = streamText({
  model: 'anthropic/claude-sonnet-4-6',
  tools: { renderUI: renderUITool },
  prompt: userMessage,
});

Client

import { renderDescriptor } from '@yashiel/mihcm-ai-ui';
 
return renderDescriptor(descriptor, {
  actions: { save, cancel },
});
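The `actions` map is the only bridge between the model and your code: the descriptor carries a string id, and the caller supplies the actual function. A dependency-free sketch of that lookup (names like `resolveAction` and `actionId` are illustrative, not the package API):

```typescript
// Caller-supplied handlers, keyed by id. The model never sees the bodies.
const actions: Record<string, () => void> = {
  save: () => { /* persist changes */ },
  cancel: () => { /* discard changes */ },
};

// A descriptor node can carry only the id, never a function or URL.
const buttonNode = { type: 'Button', props: { label: 'Save', actionId: 'save' } };

// Look up a handler by id; unknown ids are rejected, not executed.
function resolveAction(id: string): () => void {
  const handler = actions[id];
  if (!handler) throw new Error(`Unknown action id: ${id}`);
  return handler;
}
```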

Security model

The renderer enforces five guarantees:

  1. Zod parse before render. Malformed descriptors throw and never paint.
  2. Allowlist — component names not in ALLOWED_COMPONENTS are rejected.
  3. Depth cap of 5 prevents prompt-injected explosions.
  4. Action lookup by id — handlers are passed in by the caller; the model can only reference them by string. No arbitrary URLs or function bodies.
  5. No raw HTML — only validated descriptors render.

See docs/security-playbook.md §6 for the full AI-UI threat model.

Allowlist (today)

  • Button
  • Text
  • Stack (recursive — children can be Button, Text, or another Stack)

Add a primitive → define its Zod schema → register it in both the allowlist and the render switch.
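A hypothetical walk-through of those three steps for a new "Badge" primitive. Everything here (`isBadge`, `renderNode`, the string output) is an illustrative stand-in, not the package's internals:

```typescript
// Hypothetical: registering a new "Badge" primitive.
type BadgeNode = { type: 'Badge'; label: string };

// 1. Schema check (stands in for the real Zod schema).
function isBadge(node: { type: string; label?: unknown }): node is BadgeNode {
  return node.type === 'Badge' && typeof node.label === 'string';
}

// 2. Register the name in the allowlist.
const ALLOWED_COMPONENTS = new Set(['Button', 'Text', 'Stack', 'Badge']);

// 3. Handle it in the render switch.
function renderNode(node: { type: string; label?: unknown }): string {
  if (!ALLOWED_COMPONENTS.has(node.type)) {
    throw new Error(`Component "${node.type}" is not allowlisted`);
  }
  switch (node.type) {
    case 'Badge':
      if (!isBadge(node)) throw new Error('Invalid Badge descriptor');
      return `[badge:${node.label}]`;
    default:
      return `[${node.type}]`;
  }
}
```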