Case Study
FURLATE – Translating paws into plans
Brought structure to an AI experiment so pet parents can record sounds, tag context, and learn what their cats or dogs might be asking for.
Overview
FURLATE pairs on-device audio capture with a lightweight classifier hosted on Firebase. I designed the data tagging flow, reward moments, and iconography so the experience feels playful yet trustworthy.
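To make the capture-to-classifier loop concrete, here is a minimal Swift sketch, assuming AVAudioEngine for the on-device tap and a Firebase callable function. The callable name `classifySound` and the label/confidence response fields are placeholders, not the shipped API.

```swift
import AVFoundation
import FirebaseFunctions

// Minimal sketch of the capture-then-classify loop.
// Assumes FirebaseApp.configure() has already run at launch.
final class SoundCapture {
    private let engine = AVAudioEngine()
    private lazy var classify = Functions.functions().httpsCallable("classifySound")

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        // Tap the mic and collect short buffers; a real build would window
        // and downsample before upload to keep inference cost near zero.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let samples = UnsafeBufferPointer(start: channel,
                                              count: Int(buffer.frameLength))
                .map(Double.init)
            self.classify.call(["samples": samples]) { result, _ in
                // Hypothetical response: a label plus a confidence score
                // that drives the confidence badge in the UI.
                if let data = result?.data as? [String: Any] {
                    print(data["label"] ?? "?", data["confidence"] ?? 0)
                }
            }
        }
        engine.prepare()
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```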
Problem / Goal
- Raw AI outputs confused users because there was no context or confidence indicator.
- Parents wanted to save a history of sounds tied to routines (feeding, play, nap); one possible record shape is sketched after this list.
- Recording UX had to work one-handed without requiring on-screen focus.
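Read together, the first two needs suggest a single record that pairs the classifier's label and confidence with an optional routine tag. A hypothetical Swift model, with illustrative names rather than the production schema:

```swift
import Foundation

// Hypothetical data model for the tagging flow; names and fields are
// illustrative, not the shipped schema.
enum RoutineContext: String, Codable, CaseIterable {
    case feeding, play, nap
}

struct SoundEntry: Codable, Identifiable {
    let id: UUID
    let recordedAt: Date
    let label: String             // classifier output, e.g. "hungry"
    let confidence: Double        // 0...1, surfaced as the confidence badge
    var context: RoutineContext?  // stays nil until the parent tags it
}
```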
My Role
Conducted user interviews, mapped flows, built the UI kit, and defined the tone of voice for insights.
Constraints
Needed to keep ML inference costs near zero and respect Apple's microphone-permission guidelines.
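On the permissions side, one way to honor the guideline is to defer the prompt until the first record tap. A minimal sketch using AVAudioSession (an NSMicrophoneUsageDescription string in Info.plist is still required):

```swift
import AVFoundation

// Sketch of the permission gate: ask only when the parent taps record,
// and degrade gracefully if access is denied.
func requestMicAccess(onGranted: @escaping () -> Void,
                      onDenied: @escaping () -> Void) {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        DispatchQueue.main.async {
            granted ? onGranted() : onDenied()
        }
    }
}
```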
Process
- Wireframe → storyboarded the capture flow and data review moments.
- UI → used neon gradients, sticker-style badges, and haptics to signal success (a rough badge-plus-haptic sketch follows this list).
- Shipping → delivered SwiftUI-ready specs plus a motion pack for the listening avatar.
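To show how the badge and the success haptic could meet in SwiftUI, here is a rough sketch; the gradient colors and names are stand-ins, and sensoryFeedback needs iOS 17 or later.

```swift
import SwiftUI

// Illustrative SwiftUI take on a sticker-style confidence badge; the spec
// delivered to engineering may differ in naming and exact values.
struct ConfidenceBadge: View {
    let label: String        // e.g. "Hungry"
    let confidence: Double   // 0...1 from the classifier

    var body: some View {
        HStack(spacing: 6) {
            Text(label)
            Text("\(Int(confidence * 100))%")
                .fontWeight(.bold)
        }
        .font(.caption)
        .padding(.horizontal, 12)
        .padding(.vertical, 6)
        .background(
            Capsule().fill(
                LinearGradient(colors: [.pink, .purple],
                               startPoint: .leading, endPoint: .trailing)
            )
        )
        .foregroundStyle(.white)
        // Haptic "success" moment whenever a new classification lands.
        .sensoryFeedback(.success, trigger: confidence)
    }
}
```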
Outcomes
- 83% of testers now tag context, creating better ML training data.
- Confidence badges cut support tickets around “is this real?” by half.
- TODO: add metrics once we launch the pro tier.
Next Steps
Rolling out shared pet profiles and improving accuracy with community-labeled clips. Download the build on the App Store or reach out for a teardown.