Roger Branon Rodriguez

CASE STUDY · Ongoing project

Emote

A behavioral framework for AI and trust-sensitive systems.

Founder · Research · Product Design · Vibe coding + custom coding · Vercel · GitHub

ROLE

Founder · Product Designer · Researcher · Engineer

FOCUS

Trust moments: ambiguity, consent, repair

STATUS

Ongoing

Hero: Emote homepage (current).

Emote focuses less on what systems do and more on how they respond when uncertainty is high.

OVERVIEW

What Emote is

Emote is a reusable language for trust-sensitive moments: the points where people hesitate, second-guess, or need reassurance.

It turns emotional research signals into portable design guidance (patterns + behavior tokens) that can be applied across UI, AI prompts, and support workflows.

MOTIVATION

Why I created it

Most systems are optimized for success paths. AI demos assume confidence.

My work in complex, regulated products taught me the opposite: the moments that matter most happen when users are confused, anxious, or something breaks.

Emote is my attempt to make those human moments first-class design inputs, not an afterthought.

THE PROBLEM

What breaks in AI systems

  • Explaining uncertainty without sounding evasive
  • Confirming consent before action
  • Handling errors without blame
  • Repairing trust after something goes wrong

These failures usually aren’t technical. They’re behavioral and communicative; the sketch below makes one of them concrete.
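For instance, “confirming consent before action” amounts to a gate that names the risk and refuses to proceed until the user explicitly agrees. A minimal TypeScript sketch; the function shape and wording are illustrative assumptions, not part of Emote:

    // Hypothetical consent gate: never run an irreversible action
    // until the user has explicitly agreed, and name the risk first.
    async function withConsent(
      risk: string,                               // plain-language statement of what could go wrong
      confirm: (msg: string) => Promise<boolean>, // however the UI asks (dialog, chat turn, CLI)
      action: () => Promise<void>,
    ): Promise<void> {
      const agreed = await confirm(`This will ${risk}. Proceed?`);
      if (!agreed) return;                        // declining is a normal outcome, not an error
      await action();
    }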

APPROACH

How Emote works

Emote is organized into a simple stack:

  • Trust moments: situations like ambiguity, consent, repair.
  • Patterns: named playbooks for what “good” behavior looks like in that moment.
  • Behavior tokens: small, reusable commitments (e.g., “confirm before action,” “name risk transparently”).

The goal is alignment: same language, same intent, across the stack. A rough sketch of that stack follows.
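The shapes and names in this TypeScript sketch are assumptions made for illustration, not Emote’s published API:

    // Trust moment: the situation the system is in.
    type TrustMoment = "ambiguity" | "consent" | "repair";

    // Behavior token: a small, reusable commitment the system makes.
    interface BehaviorToken {
      id: string;          // e.g. "confirm-before-action"
      commitment: string;  // plain-language statement of the behavior
    }

    // Pattern: a named playbook binding tokens to a trust moment.
    interface Pattern {
      name: string;
      moment: TrustMoment;
      tokens: BehaviorToken[];
    }

    // Example: a consent pattern assembled from two tokens.
    const confirmDestructiveAction: Pattern = {
      name: "Confirm destructive action",
      moment: "consent",
      tokens: [
        { id: "confirm-before-action", commitment: "Ask before doing anything irreversible." },
        { id: "name-risk-transparently", commitment: "State what could go wrong, plainly." },
      ],
    };

Because tokens are plain data, the same commitment can drive UI copy, an AI prompt, or a support macro without rewording.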

BUILD

How I built it

  • Research + synthesis: drew from years of qualitative research in regulated systems; abstracted recurring emotional failure points.
  • Vibe coding: rapid exploration to pressure-test language and structure.
  • Custom coding: refined into a clean Next.js site deployed on Vercel, versioned on GitHub.

WHAT’S DIFFERENT

Why this framework matters

  • It treats behavior as a design output, not just UI.
  • It’s tool-agnostic and stack-friendly.
  • It prioritizes trust repair over novelty.
  • It’s designed for real systems, not ideal ones.

IMPACT + LEARNING

What I learned

Clarity is not only a writing skill; it’s a product behavior. Designing for trust means designing for what happens after the “happy path” ends.

Good AI design isn’t about sounding confident. It’s about knowing when not to.

NEXT

What I’d do with a team

  • Validate patterns inside real AI workflows.
  • Measure trust recovery, not just task success.
  • Integrate behavior tokens into design systems and prompt libraries (sketched below).
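Because behavior tokens are plain data, a prompt library could compile them into system-prompt instructions. The sketch below reuses the hypothetical Pattern shape from earlier; it describes a possible integration, not an existing Emote feature:

    // Compile a pattern's behavior tokens into system-prompt rules.
    // Assumes the Pattern/BehaviorToken shapes sketched earlier.
    function toSystemPrompt(pattern: Pattern): string {
      const rules = pattern.tokens
        .map((t, i) => `${i + 1}. ${t.commitment}`)
        .join("\n");
      return `You are handling a "${pattern.moment}" trust moment.\nFollow these commitments:\n${rules}`;
    }

    // Usage: pass the result as the system message of any LLM call.
    console.log(toSystemPrompt(confirmDestructiveAction));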