
How Chat UIs Render Widgets

I was building a chat interface that could render rich UI widgets. For my project, I wanted to render weekly plans for a training app. But a question occurred to me: how does the LLM know when and how to render each widget?

User: Plan my week

Rendered widget: Proposed weekly plan
Mon: Climbing | Wed: Strength | Fri: Climbing | Sat: Conditioning

The misconception

My first instinct was that I’d need to teach the model about each widget. Describe the component structure, its expected inputs, maybe even its visual layout. This felt immediately wrong. Coupling an LLM to a specific UI framework seemed fragile and needlessly complex.

What actually happens

The answer is simpler than I expected. The LLM doesn’t need to know anything about the widget. It just needs to return structured JSON that conforms to a schema you define. Something like:

{ "type": "plan", "data": [{ "day": "Mon", "workout": "Climbing" }, ...] }
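Since the LLM's output is just untrusted text, the application should parse and validate it against the schema before using it. Here is a minimal sketch in TypeScript; the names (`PlanWidget`, `isPlanWidget`) are illustrative, not part of any real framework:

```typescript
// The schema as a TypeScript type, plus a runtime type guard.
interface PlanEntry {
  day: string;
  workout: string;
}

interface PlanWidget {
  type: "plan";
  data: PlanEntry[];
}

// Runtime check: LLM output is untrusted, so validate before rendering.
function isPlanWidget(value: unknown): value is PlanWidget {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.type === "plan" &&
    Array.isArray(v.data) &&
    v.data.every(
      (e) =>
        typeof e === "object" &&
        e !== null &&
        typeof (e as PlanEntry).day === "string" &&
        typeof (e as PlanEntry).workout === "string"
    )
  );
}

const raw = '{ "type": "plan", "data": [{ "day": "Mon", "workout": "Climbing" }] }';
const parsed: unknown = JSON.parse(raw);
console.log(isPlanWidget(parsed)); // true
```

In practice a schema-validation library would replace the hand-written guard, but the idea is the same: the contract is checked at the boundary, not assumed.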

A separate piece of your application reads that response, looks at the type field, and mounts the corresponding component with the right props. The LLM and the UI never touch each other directly. The schema is the contract between them.

LLM to UI Widget rendering flow

A horizontal flowchart showing three stages: an LLM box on the left outputs JSON to a hatched Application box in the center, which then mounts the correct UI Widget shown on the right. Arrows connect each stage left to right.

LLM → JSON → Application → render → UI Widget

Once I saw it this way, the pattern felt obvious. It’s the same separation of concerns that shows up everywhere in software: data and presentation, decoupled by a contract.