Micro Case Study:

Living Research Reference (LRR)

I design and maintain personas as time-bound, evidence-backed decision tools—not static artifacts.

To do this, I created a Living Research Reference: a governed synthesis layer that sits between raw qualitative research and the personas shared with product and business stakeholders. It allows insights to evolve with new research, technology shifts, and regulatory change without losing historical context or flattening edge cases. AI is used only for recall and querying; interpretation and persona authorship remain human-led.

What It Is

Living Research Reference
A role-specific, continuously updated research synthesis that preserves raw evidence, variance, and edge cases, and can be queried during design. It informs but does not replace static persona artifacts.

Think of it as:

  • Research memory, not a persona

  • A synthesis layer, not a repository

  • Time-aware, not a one-time snapshot
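
To make the structure concrete, the sketch below shows what a single entry in such a reference might look like. It is a minimal illustration in Python; the field names and the example record are assumptions made for this sketch, not a prescribed schema or actual study data.

```python
# One evidence record in a Living Research Reference (illustrative shape only).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceRecord:
    role: str          # user group, e.g. "Billing", "Enrollment", "Underwriting"
    source: str        # interview or session identifier, kept for traceability
    captured_on: date  # when the evidence was gathered, keeping the reference time-aware
    verbatim: str      # the raw quote or observation, preserved unedited
    tags: list[str] = field(default_factory=list)  # themes already present in the data

# An edge case is stored alongside mainstream patterns rather than flattened into them.
edge_case = EvidenceRecord(
    role="Billing",
    source="interview-07",          # hypothetical identifier
    captured_on=date(2024, 3, 12),  # hypothetical date
    verbatim="I re-type adjustment notes because the exported text loses its formatting.",
    tags=["writing effort", "rework"],
)
```

Because the verbatim text and its source travel with every record, any claim that later appears in a persona can be traced back to the evidence behind it.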

What Problem It Solves

In enterprise environments, personas may fail because they:

  • Become outdated as workflows, tools, or policies change

  • Flatten important edge cases in the name of alignment

  • Lose traceability back to actual user evidence

  • Require repeated re-synthesis when new designers or projects begin

The LRR addresses these risks by separating research continuity from persona presentation.

How It Works

  1. Structured interviews are conducted per role (e.g., Billing, Enrollment, Underwriting)

  2. A role-based Living Research Reference is created for each user group

  3. Raw transcripts and notes are preserved; patterns are synthesized but not over-generalized

  4. AI is used only to:

    • Retrieve relevant evidence

    • Surface themes already present in the data (a simplified retrieval sketch follows this list)

  5. The designer:

    • Authors and updates personas manually

    • Decides when insights are stable enough to present

    • Tracks changes over time (quarterly reviews, regulatory changes, or tooling shifts)
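
The sketch below illustrates the retrieval step from point 4, assuming the same record shape as the earlier sketch; the function name and filters are illustrative. Nothing here summarizes or recommends; it only returns evidence records that already exist.

```python
# Retrieval only: return existing evidence for a role that matches requested themes.
# The record shape and function name are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceRecord:  # same shape as the record sketched earlier
    role: str
    source: str
    captured_on: date
    verbatim: str
    tags: list[str]

def retrieve_evidence(reference: list[EvidenceRecord],
                      role: str,
                      themes: set[str],
                      since: date) -> list[EvidenceRecord]:
    """Recall verbatim records by role, theme, and recency.

    No summarization or recommendation happens here; deciding what the returned
    evidence means stays with the designer.
    """
    return [
        record for record in reference
        if record.role == role
        and record.captured_on >= since
        and themes & set(record.tags)  # keep only records tagged with a requested theme
    ]
```

Keeping recall this mechanical is deliberate: the query layer can only surface what is already tagged in the data, which is what keeps interpretation and persona authorship with the designer.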

Decision Framing in Practice

When evaluating design ideas, I use the Living Research Reference to check whether an idea is supported by existing research before investing in deeper design exploration.

In this example, I used the Living Research Reference to evaluate several AI-related ideas (e.g., voice-to-text, writing augmentation, Copilot use) against the qualitative research I had already conducted. Rather than assuming value, I asked questions such as:
“Across recent billing interviews, what evidence suggests that voice-based input would reduce effort or error in this role?”

AI supported recall and comparison across interviews, while interpretation and scoping remained human-led. This resulted in a constrained, evidence-backed understanding of where these tools may add value and where they should not be applied.
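
As a rough illustration of that recall-and-compare step, the sketch below tallies how often a theme appears in evidence from each role. The records and counts are placeholders rather than findings; a tally like this shows where signals concentrate, but the call on voice-to-text is still made by the designer.

```python
# Compare where a theme concentrates across roles; the tally informs the designer,
# it does not make the decision. Records and tags below are placeholder examples.
from collections import Counter

evidence = [  # (role, tags) pairs standing in for full evidence records
    ("Billing", {"writing effort", "rework"}),
    ("Billing", {"auditability"}),
    ("Enrollment", {"writing effort"}),
    ("Underwriting", {"auditability", "precision"}),
]

theme = "writing effort"
signals_by_role = Counter(role for role, tags in evidence if theme in tags)
print(signals_by_role)  # -> Counter({'Billing': 1, 'Enrollment': 1})
```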

Below is the basic process I followed with the LRR for this example.

  • I started with a neutral question:

    • Could voice-to-text meaningfully reduce effort or error for these roles?

    No assumptions were made about user preference or desirability.

  • I queried the Living Research Reference for signals across roles:

    • Where writing effort is concentrated

    • Where accuracy, auditability, or precision are critical

    • Where existing tools already compensate for effort

    AI supported recall and comparison across prior research. It did not generate recommendations.

  • The evidence suggested limited support for voice-to-text in core workflows, but stronger signals for writing augmentation (e.g., clarifying, standardizing, or refining written communication) in specific roles.

    This led to a follow-up question:

    • Would a writing-assist tool (e.g., Highlight-style AI) better align with observed needs?

  • Rather than a binary decision, the outcome was scoped:

    • Writing augmentation may be valuable for certain communication-heavy tasks

    • It is not appropriate for core decision-making or adjudication work

    • Embedded tools (e.g., Copilot) can support execution, but do not replace this evidence-based evaluation process

  • The Living Research Reference enabled me to:

    • Reject or narrow ideas early

    • Avoid solution-first design

    • Document not just decisions, but the reasoning behind them (sketched after this example)

    This example illustrates how I use AI to support analysis and sensemaking, while keeping interpretation and accountability human-led.
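
To show what that documentation can look like, the sketch below pairs a scoped outcome with its reasoning and the interview sources behind it. The field names, date, and interview identifiers are hypothetical; the outcome and reasoning paraphrase the scoping described above.

```python
# A decision record that keeps the outcome, the reasoning, and the supporting
# evidence together. All identifiers and the date below are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    decided_on: date
    idea: str                    # the idea that was evaluated
    outcome: str                 # scoped outcome rather than a binary yes/no
    reasoning: str               # the human interpretation of the evidence
    evidence_sources: list[str]  # interview or session identifiers behind the reasoning

voice_to_text_decision = DecisionRecord(
    decided_on=date(2024, 4, 2),
    idea="Voice-to-text input for billing workflows",
    outcome="Not pursued for core workflows; writing augmentation scoped to "
            "communication-heavy tasks only",
    reasoning="Accuracy and auditability requirements outweigh the writing effort "
              "observed in core adjudication work",
    evidence_sources=["interview-03", "interview-07", "interview-11"],
)
```

Keeping the reasoning and its sources in the same record is what makes the decision auditable later, rather than a conclusion whose origin has been lost.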

Why This Matters to the Business

  • Reduces rework by preventing teams from designing against outdated assumptions

  • Increases confidence in design rationale by making research evidence explicit and traceable

  • Improves onboarding for new designers and PMs by preserving institutional knowledge beyond individual team members

  • Supports auditability in regulated environments by documenting how user understanding and design decisions evolve over time

Ethical & Responsible AI Use

  • AI does not generate insights independently

  • AI does not author personas

  • All interpretations and decisions are human-reviewed

  • The system is designed to reduce cognitive load, not replace judgment