
Finding Focus Assistant

My team and I designed an LLM-powered AI assistant to provide our teachers with on-demand, personalized support directly from within their interface.

Finding Focus AI Assistant

Visit Our Help Center

Hi! I'm your AI Assistant,
trained on all things Finding Focus

Here are some examples of questions you can ask me:


How can I introduce Finding Focus so my students take it seriously?


My students aren't getting account confirmation emails. Help!


What is the connection between the 10-Day Course and the Focus Coach?

Finding Focus AI Assistant can make mistakes. Verify key information.

My Role

UX Lead

Team

Mike Mrazek, Co-founder
Thomas Kennedy, SWE

Timeline

Aug – Nov 2024

Overview

Context

Turning a grant requirement into an opportunity for personalized support.

Our most successful teachers shared one common thread — direct support from our team during implementation. With grant requirements pushing us toward AI integration, we recognized that an LLM-powered assistant could provide that same hands-on support to every teacher at scale.

Teacher reach diagram
Fewer than 5% of Finding Focus sign-ups received one-on-one support

User Insight

Teachers don't like chatbots.

The teachers we talked to were burned out from previous experiences with other chatbots — and for good reason. Traditional chatbots run on rigid decision trees. So when a teacher's question didn't match a pre-defined path, the conversation simply stalled — leaving them frustrated.

Flowchart: User question "How do I log in?" → Is it in the decision tree? → YES: scripted reply ✓ / NO: input not recognized → stuck

Limited Responses

Reliance on decision trees creates a rigid conversational flow. If a user's input doesn't fit the pre-defined options, the chatbot gets stuck or provides unhelpful responses.

Lack of Context

Chatbots struggle to grasp the overall meaning or intent behind a message, especially when the language is complex or not straightforward.

Inefficiency

Users end up resorting to other options — like messaging the support team directly — which is time-consuming for everyone involved.
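The rigid lookup behavior described above can be sketched in a few lines. The questions and canned replies here are illustrative only, not from any real product:

```python
# Toy decision-tree chatbot: an exact-match lookup table.
# Questions and replies are illustrative, not a real product's script.
DECISION_TREE = {
    "how do i log in?": "Open the login page and enter your school email.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

def scripted_reply(user_question: str) -> str:
    """Return the scripted answer, or stall on anything off-script."""
    key = user_question.strip().lower()
    # Anything outside the pre-defined paths hits the dead-end fallback.
    return DECISION_TREE.get(key, "Sorry, I didn't understand that.")
```

A slightly rephrased question ("I can't log in") already falls off the tree, which is exactly the stall that left teachers frustrated.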


NORTH STAR

Create a genuinely helpful assistant that provides relevant answers to any teacher question.

Research

Evaluation

Two options. One clear winner.

Before anything else, Finding Focus had to decide on the 'brain' of our chat interface — the core technology that would understand and respond to user requests. I evaluated two main approaches: Rule-Based NLU systems and Large Language Model (LLM) APIs.

Rule-Based NLU APIs

Dialogflow, Amazon Lex, Rasa

Pros

Fast · Accurate · Predictable · Cost-Effective

Cons

Robotic · Less Flexible · Knowledge Gaps · Context-Blind

Excels with well-defined interactions and predictable inputs — fast, accurate, and cost-effective. But rigid.

Winner

LLM APIs

OpenAI (GPT), Anthropic (Claude), Gemini

Pros

Versatile · Generative · Contextually Aware · Natural

Cons

Cost · Less Control · Hallucinations · High Maintenance

Provides dynamic, contextually aware responses that adapt to any query — at the cost of predictability.

The Winning Choice

LLM Powered API

OpenAI's Assistants API was the clear choice — its ability to truly understand queries, respond naturally, and connect directly to our external knowledge base made it the right fit.
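As a rough sketch of what that setup looks like in practice, an assistant is defined once with a system prompt and a file_search tool pointing at the knowledge base. The model name, prompt wording, and settings below are illustrative assumptions, not Finding Focus's production configuration:

```python
# Illustrative Assistants API configuration; model, prompt, and names
# are assumptions for this sketch, not real production values.
ASSISTANT_CONFIG = {
    "name": "Finding Focus AI Assistant",
    "model": "gpt-4o",
    # The system prompt scopes answers to the product and tells the
    # model to defer to retrieved documents rather than guess.
    "instructions": (
        "You are the Finding Focus AI Assistant. Answer teacher questions "
        "using the attached knowledge base. If the answer is not covered, "
        "say so and point the teacher to the support team."
    ),
    # file_search retrieves from a vector store attached to the
    # assistant, which is how the external knowledge base connects.
    "tools": [{"type": "file_search"}],
}

def create_assistant(client):
    """Register the assistant; `client` is an openai.OpenAI() instance."""
    return client.beta.assistants.create(**ASSISTANT_CONFIG)
```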

Comparative Analysis

Before designing anything, we did our homework.

I conducted a comprehensive comparative analysis of leading LLM chat interfaces — Gemini, Claude, Meta AI, and ChatGPT — focusing on three key areas that would shape our design direction.

Text Output Behavior

How text is displayed in responses — letter-by-letter, word-by-word, or all at once. Pacing and visual feedback directly impact how responsive and fast the AI feels.

ChatGPT text output
Gemini text output

ChatGPT

Streams text letter-by-letter, with a cursor dot as a visual reference

Message Structure and Layout

The visual organization and differentiation of user and AI messages. Clear hierarchy helps users easily follow the conversation and distinguish between their messages and the AI's responses.

Gemini layout
Meta AI layout

Gemini

User messages and LLM responses both appear on the left, differentiated by icons

Dynamic Page Behavior

How the interface adapts to new messages — scrolling, anchoring, and focus management. Smooth, stable behavior ensures users can follow the conversation without losing their place.

Claude page behavior
Gemini page behavior

Claude

Responses push content upward as text streams in, disrupting mid-read

Key Insights

Three ingredients for a great LLM chat experience.

The comparative analysis of leading AI chat products revealed consistent patterns that separate frustrating experiences from genuinely effective ones — three design decisions that every LLM chat interface should get right.

Implement letter-by-letter text streaming

Streaming text as it generates provides immediate visual feedback, making the assistant feel faster and more responsive than waiting for a complete response.

Use distinctive styling for user vs. AI messages

Left/right message layout with user bubbles follows conventions teachers already know, making it effortless to follow the conversation without learning new patterns.

Anchor each message in a fixed section

Keeping each exchange in its own stable container prevents the layout from shifting as text streams in — so teachers can read without losing their place.
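The streaming insight above boils down to a simple pattern, independent of any particular API: append each chunk to the visible message as it arrives instead of waiting for the full response. A minimal sketch:

```python
def stream_chunks(text: str, chunk_size: int = 3):
    """Simulate an LLM yielding a response in small chunks."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

def render_streamed(chunks) -> str:
    """Append each chunk to the displayed message as it arrives."""
    displayed = ""
    for chunk in chunks:
        displayed += chunk
        # In the real UI this updates the message's fixed container
        # in place, so the layout never jumps mid-read.
    return displayed
```

Pairing this with a stable per-message container gives the immediate feedback of streaming without the scroll disruption seen in the comparative analysis.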

Design

UX Considerations

Where should teachers access the assistant from?

Entry point placement shapes everything — it determines how often teachers reach for the tool, and whether it feels like a core part of the platform or an afterthought. Getting this wrong means the assistant goes unused, no matter how good the experience inside it is.

Dedicated tab in the nav drawer wireframe · Nav drawer pros and cons
Floating action button wireframe · FAB pros and cons

Option 1 — Dedicated Tab in the Nav Drawer

The Winning Choice

Floating Action Button

Always reachable without pulling teachers away from what they're doing.

How should the assistant appear when launched?

How the assistant appears on launch had real stakes — would it feel like an interruption, could teachers easily dismiss it without losing progress, and would it give the experience enough room to work?

Full screen modal wireframe · Full screen modal pros and cons
Anchored modal overlay wireframe · Anchored modal pros and cons
Split view wireframe · Split view pros and cons

Option 1 — Full Screen Modal

The Winning Choice

Anchored Modal Overlay

Stays present without taking over — enough screen space to have a real conversation, without losing sight of where you are.

What does a teacher see before the conversation starts?

The empty state is the assistant's first impression. Get it wrong and teachers either don't know where to start, or worse — don't trust the tool enough to try. The goal was to give just enough guidance without making the experience feel scripted.

Blank input wireframe · Blank input pros and cons
Suggested question tiles wireframe · Suggested question tiles pros and cons
Proactive greeting wireframe · Proactive greeting pros and cons

Option 1 — Blank Input · No Suggested Questions

The Winning Choice

Suggested Question Tiles

Question tiles give teachers a clear starting point — and signal what the assistant is actually capable of from the moment it opens.

Final Designs

Putting it all together.

The three big decisions shown above — access point, display format, empty state — shaped the core design direction; however, this project also included dozens of smaller decisions that don't each merit their own section, but collectively helped shape the final experience.

Opening the chat interface
Final design in use
Mobile view of the AI assistant
Desktop view of the AI assistant

Outcomes

What happened after launch.

We didn't approach this project with explicit success metrics — the original driver was grant competitiveness. That said, the results were still meaningful: since implementing the assistant, support ticket volume has decreased by 12% compared to previous semesters. The assistant has helped teachers get answers without needing to directly reach out to our team — which was the core promise of the tool.

12%

decrease in support tickets

Reflection

Design Landscape

LLM chat interfaces are still early — design around your use case, not conventions

There's no settled playbook for LLM chat UI yet. Patterns that work for ChatGPT don't automatically translate to a tool teachers use mid-workflow.

What I Learned

The depth of what goes into making an LLM actually useful surprised me

Working hands-on with the Assistants API — vector storage, context windows, system prompt design — gave me a much more grounded picture of what's actually happening under the hood.

Honest Takeaway

The assistant helps — but it doesn't replace a person

Teachers who onboard with a team member still see higher implementation success than those who don't. The assistant is a support layer, not a replacement for human connection.

If I Could Do It Again

I would have invested more in user testing — but it wasn't in the cards

Early-stage startup work rarely has runway for structured usability testing before shipping. It made the competitive research more load-bearing — when you can't test with users, understanding what established products got right becomes your best available signal.