ShowerSense

Project 37 in Collaboration with Procter & Gamble

Tracking Hair Care in Motion

Introducing ShowerSense

ShowerSense introduces a privacy-preserving alternative that captures shower routines through motion sensing, enabling realistic insights without video recording.

Team members

Douglas Chun-Hao Tan (ISTD), Chow Liang Zhi (DAI), Celest Ng Song Wei (DAI), Nicole Cheah Ching Suan (ISTD), Julianna Sherine Galvez Teodoro (ESD), Long En Qi Rayne (ASD), Manoranjan Roshan Emmanuel (ISTD)

Instructors:

  • Qin Yanxia

Writing Instructors:

  • Belinda Seet

Project Roadmap

Why Shower Behaviour Matters

Understanding how consumers wash their hair is important for improving hair care products. Studying real shower routines allows researchers to analyse how people interact with shampoo, conditioner, and other products.

Limitations of Traditional Observation

Traditional shower studies rely on direct observation or video recording, where participants know they are being monitored. While effective, these methods are time-consuming and limit the depth and scalability of behavioural insights.

Solution

ShowerSense captures motion data instead of video. Wearable sensors record upper-limb movements during shower routines, which are reconstructed through a digital avatar. This allows researchers to analyse behaviour while protecting participant privacy.

Understanding Shower Behaviour Without Intrusion

Improving hair care products requires understanding how people wash their hair in real-world routines.

Traditional methods rely on direct observation or video recording. However, when participants know they are being observed, their behaviour may change, leading to less accurate insights.

Towards a More Natural and Privacy-Preserving Alternative

ShowerSense addresses this challenge by using motion sensing instead of video recording. By capturing movements through wearable sensors and reconstructing them as a digital avatar, the system enables more natural behaviour analysis with minimal intrusion.

Systems Demonstration

The interface guides users to input data, adjust settings, and preview outputs, demonstrating how the system translates raw inputs into a fully realised avatar for behavioural analysis.

System Components

The system consists of several key components that transform motion data into an animated digital avatar for behavioural analysis.

Avatar Generator
Motion Capture Integration
Unreal Engine Visualisation

The Avatar Generator creates a personalised 3D avatar using calibration images and measurement extraction. Computer vision techniques detect body features and estimate body parameters, which are then used to generate a rigged avatar with configurable hair assets. This ensures the digital avatar reflects the participant’s body proportions while maintaining anonymity.
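The report does not specify the exact measurement-extraction method, but the idea of scaling detected body features into metric proportions can be sketched as follows. The keypoint coordinates, joint names, and the use of the participant's known height as a scale reference are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

# Hypothetical 2D keypoints (pixel coordinates) from a calibration
# image, e.g. as output by an off-the-shelf pose detector.
KEYPOINTS = {
    "head_top": (250.0, 40.0),
    "ankle": (250.0, 840.0),
    "left_shoulder": (190.0, 240.0),
    "right_shoulder": (310.0, 240.0),
    "right_elbow": (340.0, 380.0),
    "right_wrist": (350.0, 510.0),
}

def estimate_body_parameters(keypoints, participant_height_cm):
    """Convert pixel-space keypoints into metric body measurements
    by scaling against the participant's known height."""
    kp = {name: np.asarray(xy) for name, xy in keypoints.items()}
    # Pixel distance from head to ankle gives the scale factor.
    pixel_height = np.linalg.norm(kp["head_top"] - kp["ankle"])
    cm_per_pixel = participant_height_cm / pixel_height
    shoulder_width = np.linalg.norm(kp["left_shoulder"] - kp["right_shoulder"])
    # Arm length as the sum of upper-arm and forearm segments.
    arm_length = (np.linalg.norm(kp["right_shoulder"] - kp["right_elbow"])
                  + np.linalg.norm(kp["right_elbow"] - kp["right_wrist"]))
    return {
        "shoulder_width_cm": shoulder_width * cm_per_pixel,
        "arm_length_cm": arm_length * cm_per_pixel,
    }

params = estimate_body_parameters(KEYPOINTS, participant_height_cm=170.0)
```

Parameters like these can then drive a rigged avatar template, so the avatar matches the participant's proportions without storing any identifying imagery.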

Wearable IMU sensors record upper-limb movements during shower routines. The captured motion data is processed and converted into animation files that can be applied to the avatar skeleton. This allows the digital avatar to replicate the participant’s movements without requiring video recording.
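One common way to turn raw IMU readings into joint angles for an animation skeleton is a complementary filter, which fuses the gyroscope's fast but drifting integral with the accelerometer's noisy but drift-free gravity reference. The sketch below estimates a single pitch angle; the actual fusion method, sample rate, and blending constant used by the project are not stated here and are assumptions:

```python
import numpy as np

def complementary_filter(gyro_deg_s, accel, dt=0.01, alpha=0.98):
    """Fuse gyroscope (deg/s) and accelerometer (x, y, z) samples into
    a pitch-angle trace: the gyro integral tracks fast motion, while
    the accelerometer's gravity direction corrects long-term drift."""
    pitch = 0.0
    angles = []
    for g, a in zip(gyro_deg_s, accel):
        # Tilt implied by the gravity vector (noisy but unbiased).
        accel_pitch = np.degrees(np.arctan2(a[0], a[2]))
        # Blend the integrated angular rate with the gravity estimate.
        pitch = alpha * (pitch + g * dt) + (1 - alpha) * accel_pitch
        angles.append(pitch)
    return angles
```

A per-joint angle trace like this can be resampled onto animation keyframes and applied to the corresponding bone of the avatar skeleton.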

The Unreal Client visualises the animated avatar in an interactive environment. Users can preview motion playback, simulate hair physics, and record animations for analysis. This allows researchers to observe realistic shower behaviour while preserving participant privacy.

Pose Accuracy Evaluation

To evaluate the accuracy of our motion reconstruction, we compare the detected joint positions against ground truth keypoints obtained from visual data. The model achieves an overall accuracy of 87.6%, demonstrating its ability to reliably capture upper-limb movements using IMU sensors.

The comparison shows that the reconstructed avatar closely matches the participant’s arm and hand pose, indicating that motion-based sensing can effectively approximate real-world shower behaviour without relying on video recording.
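The report does not state the exact accuracy metric; a PCK-style (Percentage of Correct Keypoints) check is one standard way such a joint-position comparison is scored. In the sketch below, the threshold and normalising scale are illustrative assumptions:

```python
import numpy as np

def pck_accuracy(predicted, ground_truth, threshold=0.1, scale=1.0):
    """Percentage of Correct Keypoints: a joint counts as correct when
    its distance to the ground-truth position is within
    threshold * scale (scale is e.g. torso length)."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    # Euclidean distance per joint, then fraction within tolerance.
    distances = np.linalg.norm(predicted - ground_truth, axis=-1)
    return float(np.mean(distances <= threshold * scale))

# Toy example: three joints, two within tolerance.
acc = pck_accuracy([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]],
                   [[0.0, 0.05], [1.0, 1.0], [0.0, 0.0]])
```

Averaging such a score over all frames and joints of a session yields a single accuracy figure comparable to the 87.6% reported above.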

Acknowledgements

The ShowerSense team would like to express our sincere gratitude to our capstone instructors, Dr Qin Yanxia and Dr Kan Ee May, for their guidance, feedback, and continuous support throughout the development of this project. Their insights were invaluable in shaping both the technical direction and research approach of our work.

We would also like to thank Ms Belinda Seet for her guidance on the communication and documentation of our project.

Special thanks to our industry mentor, Ms Valerie Yong from Procter & Gamble, for providing valuable industry perspectives, project guidance, and support throughout the collaboration.
