Bubble - AI Wearable IoT System

Today's IoT, predominantly used at home and commanded through voice assistants, needs a more seamless, effortless, and efficient control mechanism. To improve how users interact with their home devices, we explored non-speech sound recognition and gesture control. Our solution, Bubble, identifies user activities through environmental sounds and enables swift, customized IoT control through the Apple Watch. To reach our final design, we conducted research with 17 target users and iterated through 5 interface designs. Special thanks to the Garmin Design Team for their advice and feedback.

Type
Product Lead | Wearable AI | B2C
Date
Feb 2023
Duration
Feb - May 2023
Role
UX/UI Designer, Team Lead

Define Our User and Problem

Within the Home - IoT Experience Improvement
Persona - Sierra
User Pain Points

1. Voice interaction is sometimes exhausting or disruptive.

2. Alexa does not understand commands that deviate from the exact phrasing, or speech with an accent.

3. Setting up each individual device takes too much effort.

User Needs

1. More Effortless

“I would like more agile control at home, instead of going the extra mile to control each device”

2. More Intelligent

“I want devices to work together to create a cohesive experience”

3. More Seamless

“I want more than voice commands. Sometimes I don’t want to speak”

4. Better Lifestyle

“I want a more intelligent and cozy home experience”

Empathize, Define, Ideate
Define Our Scope

To make Bubble intelligent enough to understand what the user needs, we need sensors that provide live data input to our machine learning model.

Because users commonly expressed privacy concerns, we chose the microphone as our sensor: it is less intrusive than a camera, yet more versatile than an infrared sensor. It is also familiar to users who already own smart home assistants such as Alexa.
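For context, here is a minimal sketch (not the actual Bubble code; the class and handler names are our own) of how live microphone audio could be captured on watchOS with AVAudioEngine and handed off, buffer by buffer, to a sound-classification model:

```swift
import AVFoundation

// Minimal sketch: capture live microphone audio as the sensor feed
// for a sound-classification model. Names are illustrative.
final class MicrophoneListener {
    private let engine = AVAudioEngine()

    /// Starts the mic and delivers raw audio buffers to `handler`,
    /// which would forward them to the classifier downstream.
    func start(handler: @escaping (AVAudioPCMBuffer, AVAudioTime) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            handler(buffer, time)
        }
        engine.prepare()
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```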

First Prototype
Initial Testing

Our first prototype was a cough-triggered humidifier. We collected coughing audio samples from people around us and used them to train an ML model. When the microcontroller hears someone coughing continuously nearby, it turns on the humidifier for them.
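For illustration, the trigger logic might look like the Swift sketch below (the actual prototype ran on a microcontroller; the label name, window length, and cough count are assumptions): a single classification is ignored, and only sustained coughing within a short window fires the humidifier.

```swift
import Foundation

// Illustrative reconstruction of the prototype's trigger logic.
// A lone "cough" label does nothing; several inside a short window
// switch the humidifier on, so one false positive can't trigger it.
struct CoughTrigger {
    private var recentCoughs: [Date] = []
    let requiredCount = 3          // coughs needed inside the window (assumption)
    let window: TimeInterval = 10  // seconds (assumption)

    /// Feed each classifier label in; returns true when the
    /// humidifier should be switched on.
    mutating func observe(_ label: String, at now: Date = Date()) -> Bool {
        if label == "cough" { recentCoughs.append(now) }
        recentCoughs.removeAll { now.timeIntervalSince($0) > window }
        return recentCoughs.count >= requiredCount
    }
}
```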

Key feedback from user testing:

  • The sensor should be close to the user
  • Users expressed concerns about autonomy
  • Users don't want too many different sensors installed at home

From this feedback, we realized:

1. We need a wearable that integrates the sensors and can hear the user at any time.

2. Users need to be in the driver's seat: our device must ask for permission before controlling their home.

Therefore, we chose the Apple Watch as both our data collector (microphone) and our user interface (control device). Our goal: an Apple Watch application that recognizes environmental sounds, infers the user's activity, anticipates their needs, and suggests appropriate IoT controls.
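As a toy illustration of the last step in that pipeline, the sketch below maps a recognized sound label to a suggested (not automatic) device action; the labels and device names are hypothetical, not the shipped taxonomy:

```swift
// Hypothetical sketch: a recognized sound implies an activity, which
// yields a *suggestion* the user confirms, keeping them in control.
enum Suggestion {
    case turnOn(String)
    case turnOff(String)
}

func suggestion(forSound label: String) -> Suggestion? {
    switch label {
    case "cooking":         return .turnOn("Range Hood")
    case "keyboard_typing": return .turnOn("Desk Lamp")
    case "door_opening":    return .turnOn("Entry Lights")
    default:                return nil  // unrecognized sound: no suggestion
    }
}
```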

We would iterate on three aspects:
  • EFFORTLESS - Easy and integrated control system
  • SEAMLESS - Minimal distraction, multimodal interactions
  • INTELLIGENT - ML model, personalization

UX/UI Iterations

for an effortless and seamless control experience
Version 1: Highlights
  • Probability Visualization - Turns on the most relevant home device; each button's size corresponds to its probability (tap a circle to turn the device on/off)
  • Self-Iteration - Learns from the user's responses over time, growing the size (probability) of the desired device until it becomes the only suggested one (see the sketch after this list)
  • Single-Swipe Interaction - After 2 weeks, the user can simply swipe left/right to turn off/keep on the dominant device (similar to Tinder)
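The sketch below shows one way the self-iteration could work; the multiplicative update and the 1.2/0.8 rates are our assumptions, not the measured behavior of the app:

```swift
// Sketch of the self-iteration mechanic: every device carries a weight
// that scales its on-screen circle; "keep on" grows it, "turn off"
// shrinks it, until one device dominates the screen.
struct DeviceWeights {
    private var weights: [String: Double]

    init(devices: [String]) {
        weights = Dictionary(uniqueKeysWithValues: devices.map { ($0, 1.0) })
    }

    /// Record whether the user kept a suggested device on.
    mutating func record(device: String, keptOn: Bool) {
        weights[device, default: 1.0] *= keptOn ? 1.2 : 0.8  // rates are assumptions
    }

    /// Normalized probabilities that drive each circle's size.
    var probabilities: [String: Double] {
        let total = weights.values.reduce(0, +)
        return weights.mapValues { $0 / total }
    }
}
```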
User Feedback
  • Visualizing probability helps me understand the ML mechanism
  • However, I would not sacrifice a clean look for it
Version 2: Improvements
  • Single Device - display the dominant device only
  • Gesture Control - operate with just one hand to increase accessibility
Design Questions
  • How to control lights in different locations?
  • Level of autonomy - who turns on the devices, the user or Bubble?
  • Visual style
Version 3: Improvements
  • Replaced devices with modes - more human-centered, less mechanical
  • User stays in the driver's seat
  • Simplified design
User Feedback
  • Want to quickly distinguish between modes
Version 4: Improvements
  • Further reduced cognitive load by displaying only necessary information
  • Color-coded modes for rapid recognition
Prototype Testing
  • Conducted with the Garmin Design Team, an IoT owner, a graphic design professor, an Apple Watch super user, and a UX designer
  • Liked the clean and minimal design
  • Information is extremely digestible
  • Colors are too dark and hard to distinguish

Accessibility/ML Iterations

for a seamless and intelligent control experience
Gesture Control

During usability testing, we found that in certain scenarios, such as cooking, it is difficult for the user to tap the watch screen. In our research we discovered that the Apple Watch supports gesture interaction and found it to be a very convenient way to interact.
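Apple's built-in AssistiveTouch gestures (pinch, clench) work at the system level and require no app code; purely as an illustration, here is one way a custom wrist gesture could be prototyped with CoreMotion. The detector class, the gesture choice, and the 4.0 rad/s threshold are all our assumptions:

```swift
import CoreMotion

// Illustrative prototype of a custom wrist gesture: a sharp roll
// around the wrist axis is treated as a "confirm" flick.
final class WristGestureDetector {
    private let motion = CMMotionManager()

    func start(onFlick: @escaping () -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 50.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let rate = data?.rotationRate else { return }
            // Threshold chosen by trial; tune on real wrists.
            if abs(rate.x) > 4.0 { onFlick() }
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```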

Machine Learning Iterations

We collected sound data across many different categories and trained a sound classifier with Create ML, Apple's model-training tool that ships with Xcode, exporting the result as a Core ML model.

We picked four sounds from the video and converted them into real-time spectrograms; the model recognizes each sound by its distinct visual pattern. For example: a door opening, keyboard typing, cooking, and the sound of a cat :)
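To show how such a model could run against live audio, here is a hedged sketch using Apple's SoundAnalysis framework; `BubbleSoundClassifier` stands in for the exported model's auto-generated class name, and `onLabel` is our own callback:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Sketch: stream mic audio through the trained Core ML sound
// classifier and surface the top label of each analysis window.
final class LiveSoundClassifier: NSObject, SNResultsObserving {
    private var analyzer: SNAudioStreamAnalyzer?
    var onLabel: ((String, Double) -> Void)?

    /// Attach the classifier to a running AVAudioEngine's mic input.
    func attach(to engine: AVAudioEngine) throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        // `BubbleSoundClassifier` is a placeholder for the real model class.
        let model = try BubbleSoundClassifier(configuration: MLModelConfiguration()).model
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: self)
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        self.analyzer = analyzer
    }

    // Called by SoundAnalysis with each classification result.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        onLabel?(top.identifier, top.confidence)  // e.g. ("cooking", 0.91)
    }
}
```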

Final UX Design


Reflection

Moving forward, we would like to work on:

  • App Usability Testing
  • Prototype of user onboarding function
  • Prototype of device discovery/API connection
  • Refine the machine learning model and audio-monitoring protocol