# AI-UI Renderer
@yashiel/mihcm-ai-ui lets a language model return a UI as JSON. The renderer validates the descriptor, looks up the component in an explicit allowlist, caps recursion depth, and renders via the primitives. The same renderer works on React, Next.js, and React Native.
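Concretely, a descriptor is a plain JSON tree of allowlisted primitives. The exact field names below (`type`, `props`, `actionId`, `children`) are an illustrative assumption, not the package's published schema:

```ts
// Hypothetical descriptor shape — the field names are assumptions for illustration.
interface UIDescriptor {
  type: 'Stack' | 'Text' | 'Button';
  props?: Record<string, unknown>;
  actionId?: string;          // handlers are referenced by id, never inlined
  children?: UIDescriptor[];  // only Stack nests children
}

// A descriptor a model might emit for a simple confirmation card.
const descriptor: UIDescriptor = {
  type: 'Stack',
  children: [
    { type: 'Text', props: { value: 'Save your changes?' } },
    { type: 'Button', props: { label: 'Save' }, actionId: 'save' },
    { type: 'Button', props: { label: 'Cancel' }, actionId: 'cancel' },
  ],
};

console.log(descriptor.children?.length); // 3
```

Note that the model never ships code: `actionId: 'save'` is just a string the caller later maps to a real handler.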
## Server (Vercel AI SDK 6)
```ts
import { streamText } from 'ai';
import { renderUITool } from '@yashiel/mihcm-ai-ui/tools';

const result = streamText({
  model: 'anthropic/claude-sonnet-4-6',
  tools: { renderUI: renderUITool },
  prompt: userMessage,
});
```

## Client
```ts
import { renderDescriptor } from '@yashiel/mihcm-ai-ui';

return renderDescriptor(descriptor, {
  actions: { save, cancel },
});
```

## Security model
The renderer enforces five guarantees:
- Zod parse before render: malformed descriptors throw and never paint.
- Allowlist: component names not in `ALLOWED_COMPONENTS` are rejected.
- Depth cap of 5: prevents prompt-injected nesting explosions.
- Action lookup by id: handlers are passed in by the caller; the model can only reference them by string. No arbitrary URLs or function bodies.
- No raw HTML: only validated descriptors render.
See `docs/security-playbook.md` §6 for the full AI-UI threat model.
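The guarantees above can be sketched end to end. This is an illustrative re-implementation, not the package's source: a hand-rolled validator stands in for the Zod schema so the sketch stays dependency-free, and the function names and signatures are assumptions.

```ts
const ALLOWED_COMPONENTS = new Set(['Button', 'Text', 'Stack']);
const MAX_DEPTH = 5;

interface Descriptor {
  type: string;
  props?: Record<string, unknown>;
  actionId?: string;
  children?: Descriptor[];
}

// Throws on malformed input, unknown components, or excessive nesting,
// so a bad descriptor never reaches the render phase.
function validateDescriptor(node: unknown, depth = 0): Descriptor {
  if (depth >= MAX_DEPTH) throw new Error('Depth cap exceeded');
  if (typeof node !== 'object' || node === null) throw new Error('Descriptor must be an object');
  const d = node as Descriptor;
  if (!ALLOWED_COMPONENTS.has(d.type)) throw new Error(`Unknown component: ${d.type}`);
  if (d.children) {
    if (d.type !== 'Stack') throw new Error('Only Stack may have children');
    d.children.forEach((child) => validateDescriptor(child, depth + 1));
  }
  return d;
}

// Action lookup by id: the model supplies a string, the caller supplies the code.
function resolveAction(d: Descriptor, actions: Record<string, () => void>) {
  return d.actionId ? actions[d.actionId] : undefined;
}
```

A descriptor like `{ type: 'Script' }`, or a chain of six nested `Stack`s, throws before anything paints.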
## Allowlist (today)
- `Button`
- `Text`
- `Stack` (recursive: children can be `Button`, `Text`, or another `Stack`)
Add a primitive → define its Zod schema → register it in both the allowlist and the render switch.
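As a sketch of that extension path, here is what registering a hypothetical `Badge` primitive might look like. The registry shape and function names are assumptions for illustration, and a plain validator function stands in for the primitive's Zod schema to keep the sketch dependency-free.

```ts
interface BadgeProps { label: string; tone?: 'info' | 'warn' }

// Step 1: validate props (stands in for the primitive's Zod schema).
function parseBadgeProps(props: unknown): BadgeProps {
  const p = props as BadgeProps;
  if (typeof p?.label !== 'string') throw new Error('Badge requires a string label');
  return p;
}

// Step 2: register the new name in the allowlist.
const ALLOWED_COMPONENTS = new Set(['Button', 'Text', 'Stack', 'Badge']);

// Step 3: extend the render switch (rendering to a string here for brevity).
function render(type: string, props: unknown): string {
  if (!ALLOWED_COMPONENTS.has(type)) throw new Error(`Unknown component: ${type}`);
  switch (type) {
    case 'Badge': {
      const { label, tone = 'info' } = parseBadgeProps(props);
      return `[${tone}] ${label}`;
    }
    default:
      return `<${type}>`;
  }
}

console.log(render('Badge', { label: 'Beta' })); // "[info] Beta"
```

The key property is that all three steps are required: a component missing from any one of them either fails validation or never renders.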