UX for AI

What Is UX for AI?
AI for UX? Nope. Let’s talk about UX for AI.
There are thousands of articles out there about using AI tools to make your job easier. This isn’t one of them. Written by Brian Edgin, Director of User Experience at Click Here Labs, this piece focuses on UX for AI and how UX methods and deliverables are evolving as interfaces themselves become AI-driven.
As generative systems move beyond chat and begin shaping layouts, flows, and interactions in real time, the role of UX changes fundamentally. Designing static screens is no longer enough. We’re now designing systems, rules, and outcomes that guide how AI presents information and adapts to users.
This shift introduces new challenges and new opportunities: AI-driven interfaces that must remain usable, trustworthy, and intuitive while operating in highly dynamic ways. Strong UX foundations become more important (not less) as personalization increases and predictability decreases.
If you haven’t yet explored how AI is reshaping our discipline, this is a starting point for rethinking how UX principles apply when the interface itself is generated.
What Is GenUI?
Generative UI, or GenUI for short, is a bespoke UI dynamically generated by artificial intelligence, specific to the task at hand and, in the best implementations, hyper-personalized to the individual user. With GenUI, AI systems are no longer limited to a chat-style interface. They can leverage their training and current context window (the information available to the AI in the current session) to dynamically build the best UI for the job.
This isn’t the same as a human designer using AI tools to rapidly design and build UI from prompts. Rather, this means enabling an AI solution to generate a UI, just like it generates text, in order to better communicate with the user.
That UI could be an image gallery, a diagram, a layout of copy and/or images, a card-based layout of options for a user to choose from, a form requesting user input, or something else.
With the possible exception of Gemini 3’s “Visual Layout” mode, where Gemini creates a custom visual experience similar to a mini-website or app tailored to the query, pure GenUI is still largely theoretical for both technical and practical reasons. I say “possible exception” because Gemini is likely using, at least in part, one of the other methods discussed here: AI-Generated, AI-Directed, or AI-Blind.
Below, we’ll break down the basic architecture for each of these approaches as well as how UX practice needs to evolve to support them.

AI-Generated
Of the three approaches discussed here, AI-Generated is the most powerful, and the most challenging to implement. With this approach, the AI literally generates the UI code dynamically, leveraging its full context window to create custom interfaces tailored to both the task and the user.
The Personalization Advantage
Because it is dynamic, AI can use its full context window, including everything it knows about the user, to prioritize and personalize in a way that traditional code can’t.
Consider a car shopping scenario:
- If a user highly values Apple CarPlay, the AI can include that information as an element right in the results cards
- If the user is a mileage buyer, the AI can include mileage directly in the cards instead of, or in addition to, other features
- If the AI knows the user is colorblind, it can create the UI using high contrast and avoid ambiguous colors
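To ground the car shopping scenario above, here’s a minimal TypeScript sketch of how a profile drawn from the context window might shape a UI-generation prompt. All names and fields are hypothetical, and the actual model call is out of scope:

```typescript
// Hypothetical shopper profile assembled from the session's context window.
interface ShopperProfile {
  priorities: string[];                    // e.g. ["Apple CarPlay", "mileage"]
  accessibility: { colorblind?: boolean }; // known accessibility needs
}

// Build a generation prompt that constrains the model while letting it
// personalize: the profile decides which facts surface on each result card.
function buildUiPrompt(profile: ShopperProfile, task: string): string {
  return [
    `Task: ${task}`,
    "Render result cards as accessible HTML.",
    `Surface these attributes on every card: ${profile.priorities.join(", ")}.`,
    profile.accessibility.colorblind
      ? "Use high-contrast, colorblind-safe styling; never encode meaning in color alone."
      : "",
    "Stay within the brand design tokens supplied in the system prompt.",
  ].filter(Boolean).join("\n");
}

// Usage: the returned string would be sent to a code-capable model, and the
// HTML it generates would be sandboxed before rendering client-side.
const prompt = buildUiPrompt(
  { priorities: ["Apple CarPlay", "mileage"], accessibility: { colorblind: true } },
  "Show hybrid SUVs under $30,000"
);
```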
Three Core Challenges
What makes this approach largely theoretical (as of December 2025) comes down to three primary challenges.
1. Performance Overhead
The processing overhead required to add UI generation is significant. Current models can’t generate UI with the speed users are accustomed to, certainly not at a reasonable operating cost.
2. Consistency vs. Flexibility
Constraining models to generate UI that adheres to brand standards, conforms to UX best practices, and provides relatively consistent output is tricky. Users are most comfortable with consistent interfaces. If someone returns to a site and the layout is different every time, that’s disorienting. AI-Generated UI needs to remain consistent for the same user unless new information clearly justifies the disruption. Defining that threshold becomes a core part of the specification work.
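One way to make that threshold explicit is a consistency policy the generation layer must honor. A hypothetical sketch:

```typescript
// Hypothetical consistency policy: when may a regenerated layout diverge
// from what this user saw last time?
interface ConsistencyPolicy {
  stableRegions: string[];   // components that must keep position across sessions
  reorderThreshold: number;  // minimum relevance shift (0-1) before cards may reorder
  disruptionTriggers: Array< // new user facts that justify a visible layout change
    "new_priority" | "accessibility_change" | "task_change"
  >;
}

const carShoppingPolicy: ConsistencyPolicy = {
  stableRegions: ["navigation", "filter_bar"],
  reorderThreshold: 0.25,
  disruptionTriggers: ["new_priority", "accessibility_change"],
};
```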
3. The Relationship Between Privacy and Personalization
This challenge isn’t directly about AI, but about what’s required to empower it: information. Hyper-personalization requires access to user data, which can easily cross privacy boundaries. In today’s environments, this level of personalization would likely require login and onboarding flows.
That said, it’s easy to imagine a future where a personal AI assistant generates UI based on preferences it already has access to, rather than relying on third parties. Designing systems to support that shift would significantly change trust boundaries, and presents an exciting opportunity.
What This Means for UX Professionals
With an AI-Generated solution, the UX professional’s role changes dramatically:
- Instead of wireframes for every screen, we write Intent-Based Outcome Specifications.
- Designing specific micro-interactions becomes much less relevant.
- Where a handful of personas would normally suffice, exponentially more will be required to understand the range of personalization.
The work becomes both broader and more detailed. Instead of designing interfaces directly, we define the rules the AI follows when generating UI. Those rules must account for:
- Predictable requests, tasks, and personalization scenarios
- Guardrails for edge cases
- What the AI is allowed, and not allowed, to do
Defining this is far more complex than designing a traditional interface.
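What an Intent-Based Outcome Specification looks like will vary by team, but as a hypothetical sketch, it might declare outcomes, constraints, and guardrails rather than screens:

```typescript
// Hypothetical Intent-Based Outcome Specification: instead of a wireframe
// per screen, we declare the outcome, the constraints, and the guardrails.
interface OutcomeSpec {
  intent: string;                      // the user goal being served
  mustInclude: string[];               // information the UI must surface
  allowed: string[];                   // UI moves the AI may make
  forbidden: string[];                 // hard guardrails
  fallback: "plain_text" | "escalate"; // behavior for unhandled edge cases
}

const compareVehicles: OutcomeSpec = {
  intent: "Help the user compare shortlisted vehicles",
  mustInclude: ["price", "user-prioritized features"],
  allowed: ["comparison table", "card grid", "summary paragraph"],
  forbidden: ["autoplaying media", "urgency cues the user didn't ask for"],
  fallback: "plain_text",
};
```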

AI-Directed
Given the technical limitations and complexity of fully AI-Generated UI, the AI-Directed approach represents the most practical path to deeply flexible GenUI today.
How It Works
Rather than generating UI code, the AI-Directed approach relies on a predefined component library (buttons, cards, forms, galleries, filters, comparison tables) and teaches the AI how to assemble them. The AI becomes a thoughtful assembler rather than a code generator.
Depending on the flexibility of the component library, AI can still create bespoke combinations while maintaining control over:
- Visual consistency and brand adherence
- Micro-interaction quality (validation, hints, error states)
- UX best practices
- Common patterns (addresses, credit card forms, etc.)
With proper planning, element-level customization is still achievable without sacrificing control.
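In practice, the contract between model and renderer often looks like constrained tool calling: the AI emits a typed component choice with props, and the front end renders it from the design system. A minimal sketch, with all component names and props hypothetical:

```typescript
// The model never writes UI code; it picks from this registry and fills props.
type ComponentChoice =
  | { component: "card_grid"; props: { items: Array<{ id: string; highlight: string[] }> } }
  | { component: "comparison_table"; props: { ids: string[]; columns: string[] } }
  | { component: "input_form"; props: { fields: Array<{ name: string; type: "text" | "select" }> } };

// The renderer owns brand, accessibility, and micro-interactions; the AI
// only decides which component fits the task and what to put in it.
function render(choice: ComponentChoice): string {
  switch (choice.component) {
    case "card_grid":
      return `CardGrid with ${choice.props.items.length} items`;
    case "comparison_table":
      return `ComparisonTable covering ${choice.props.ids.join(", ")}`;
    case "input_form":
      return `Form with fields: ${choice.props.fields.map((f) => f.name).join(", ")}`;
  }
}
```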
What This Means for UX Professionals
This approach blends traditional UX work with newer GenUI considerations.
Specifications no longer need to define how the AI generates code. Instead, they define which components best support specific tasks.
- Traditional UX work continues: Micro-interaction design and component wireframes remain essential
- Content and component rules: AI needs guidance on which components to use for different content types, often based on context—not just format
- User research: Intent-Based Outcome Specifications and deep research are still required, but with a different emphasis
The Trade-Off
While less flexible than AI-Generated for unknown tasks, AI-Directed offers strong personalization with significantly more control, better performance, and far less implementation complexity.

AI-Blind
AI-Blind is the most straightforward implementation. While still powerful, it’s also the simplest to build. In this model, the AI has no awareness of UI components; it only supplies data.
How It Works
A controller layer handles all UI decisions. The AI provides data (such as product IDs or search results), and the controller determines which components to display.
Example flow:
- AI interprets the query
- AI sends related data to the controller
- The controller applies hard-coded logic to select components
- The UI is rendered on the front end
This cleanly decouples information flow from presentation.
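A minimal controller sketch (all intents and component names are hypothetical) shows the decoupling: the AI never sees components, and the mapping is ordinary hard-coded logic:

```typescript
// The AI's entire output: an interpreted intent plus raw data, no UI awareness.
interface AiResult {
  intent: "product_search" | "product_detail" | "unknown";
  data: { productIds?: string[]; query?: string };
}

// Controller layer: hard-coded logic maps AI-supplied data to components.
function selectComponent(result: AiResult): { component: string; props: unknown } {
  switch (result.intent) {
    case "product_search":
      return { component: "ResultsGrid", props: { ids: result.data.productIds ?? [] } };
    case "product_detail":
      return { component: "ProductCard", props: { id: result.data.productIds?.[0] } };
    default:
      // Unknown scenarios fall back to plain text output.
      return { component: "TextBlock", props: { text: result.data.query ?? "" } };
  }
}
```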
When It Works Best
AI-Blind works best when all potential display scenarios are known, or when falling back to text output for unknown cases is acceptable. It’s particularly effective when AI interprets user intent and works with structured systems like a digital asset management (DAM) platform.
UX Requirements
Supporting AI-Blind requires a deep understanding of all content types and contexts. User journey mapping and entity/actor documentation are especially helpful for planning outcomes.
Traditional Deliverables Still Apply
Wireframes and micro-interaction design remain relevant, though they’re used within an AI-led outcome framework. As with all approaches, usability testing is critical to ensure atypical UI and workflows feel intuitive.
Ongoing Evolution
Every article written about AI is chasing a moving target. The technology evolves rapidly, and best practices shift just as fast. Content only months old can already be outdated.
By focusing on principles rather than specific tools or technical implementations, the goal is to keep this discussion relevant long enough to be useful.
The Human Constant
Human behavior changes slowly. Perception, understanding, and long-established habits evolve at a very different pace than technology. That gap is where UX professionals create the most value.
Balancing deep human understanding with fast-moving technical advancement isn’t easy, but it’s essential.
Our Unchanging Mission
Regardless of how AI evolves, our role as UX professionals remains the same: advocate for the user, deliver the best experience possible, and support the goals of the project.
The conversation around UX for AI is just beginning, and it’s an exciting, and challenging, time to be working in this field.