I was building a chat interface that could render rich UI widgets. For my project, I wanted to render weekly plans for a training app. But a question occurred to me: how does the LLM know when and how to render each widget?
[Figure: a proposed weekly plan rendered as a widget]
The misconception
My first instinct was that I’d need to teach the model about each widget. Describe the component structure, its expected inputs, maybe even its visual layout. This felt immediately wrong. Coupling an LLM to a specific UI framework seemed fragile and needlessly complex.
What actually happens
The answer is simpler than I expected. The LLM doesn’t need to know anything about the widget. It just needs to return structured JSON that conforms to a schema you define. Something like:
{ "type": "plan", "data": [{ "day": "Mon", "workout": "Climbing" }, ...] }
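The contract side of this can be made concrete with a type and a runtime guard. This is a minimal sketch, not the original project's code; the names `PlanResponse` and `isPlanResponse` are illustrative. The key point is that the LLM's output is untrusted text, so the application validates the parsed JSON against the schema before the UI ever sees it:

```typescript
// Hypothetical shape of the schema contract for the plan widget.
interface PlanEntry {
  day: string;
  workout: string;
}

interface PlanResponse {
  type: "plan";
  data: PlanEntry[];
}

// Runtime guard: checks that an unknown parsed value actually
// conforms to the contract before it reaches the UI layer.
function isPlanResponse(value: unknown): value is PlanResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.type === "plan" &&
    Array.isArray(v.data) &&
    v.data.every(
      (e) =>
        typeof e === "object" &&
        e !== null &&
        typeof e.day === "string" &&
        typeof e.workout === "string"
    )
  );
}

// The model's reply arrives as a string; parse, then validate.
const raw = '{ "type": "plan", "data": [{ "day": "Mon", "workout": "Climbing" }] }';
const parsed: unknown = JSON.parse(raw);
```

A schema-validation library would replace the hand-written guard in practice, but the principle is the same: the type system enforces the contract at the boundary.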
A separate piece of your application reads that response, looks at the type field, and mounts the corresponding component with the right props. The LLM and the UI never touch each other directly. The schema is the contract between them.
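That dispatch step can be sketched in a few lines. This is an assumed shape, not the project's actual code: a real app would mount framework components, while here each "component" just returns a string so the sketch stays self-contained. The `WidgetPayload` union and `render` function are illustrative names:

```typescript
// Each variant of the union is one widget type the LLM can request.
type WidgetPayload =
  | { type: "plan"; data: { day: string; workout: string }[] }
  | { type: "text"; data: string };

// The dispatcher: the type field selects the component.
// The LLM never sees this mapping; it only emits JSON.
function render(payload: WidgetPayload): string {
  switch (payload.type) {
    case "plan":
      return payload.data.map((e) => `${e.day}: ${e.workout}`).join("\n");
    case "text":
      return payload.data;
  }
}
```

Because the switch is exhaustive over the union, adding a new widget type to `WidgetPayload` forces the compiler to flag every dispatcher that doesn't yet handle it, which keeps the contract and the UI in sync.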
Once I saw it this way, the pattern felt obvious. It’s the same separation of concerns that shows up everywhere in software: data and presentation, decoupled by a contract.