Project Description
Quiet Interaction in Mixed Reality is a smart home system that aims to create an accessible home environment for Deaf and Hard of Hearing (DHH) individuals through a combination of AR, AI, and IoT technologies.
As technology rapidly evolves, voice-command-based smart assistants are becoming integral to daily life. However, this advancement overlooks the needs of the DHH community, leaving a gap in current systems. To address this oversight, this study develops a Mixed Reality (MR) application that integrates Augmented Reality (AR), Artificial Intelligence (AI), and the Internet of Things (IoT) to close gaps in safety, communication, and accessibility for DHH individuals at home.
Final Prototype Overview
• User Menu Mechanism
• 3D Representative User Home Model and Room Selection
• Lamp Control (ON/OFF)
• Blinds Control (ON/OFF/Shade Angle)
• Appliances Status (Microwave Status Alert)
• Urgent Events (Water Leaking)
• Urgent Events (Fridge Door Opened)
• Events (Someone Enters the Room)
• Speech-to-Text and Text-to-Speech (Mutual communication with individuals unfamiliar with sign language)
Research Methodologies
Research Through Design (RtD) & User-Centered Design (UCD)
• Literature Review: Gain insights into the daily challenges that DHH individuals encounter at home, as well as the advantages and limitations of current solutions.
• Online Survey with Target Users: Gain a more detailed understanding of their specific challenges and needs.
Persona
System Architecture and Workflow
The prototype integrates two main systems into an AR application: home device management and speech services.
- Home Device Management: The system uses an MQTT broker for publish-subscribe communication. The AR app subscribes to device status updates and publishes control commands. It also embeds a web browser that streams live video and displays transcriptions of environmental sounds. A minimal sketch of this publish-subscribe flow is shown after this list.
- Speech Services: Microsoft Azure's Speech API handles real-time audio input and returns transcribed or translated text for live captioning and speaker identification within the AR interface. A minimal speech-to-text sketch is also shown after this list.
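The sketch below illustrates the publish-subscribe pattern used for home device management. It uses the Python paho-mqtt client purely for illustration (the prototype itself runs inside an AR application), and the broker address, topic names, and payload format are hypothetical placeholders rather than the project's actual configuration.

```python
# Minimal publish-subscribe sketch with paho-mqtt (assumed library, hypothetical topics).
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.local"                 # hypothetical MQTT broker address
STATUS_TOPIC = "home/+/status"               # hypothetical topic: device status updates
COMMAND_TOPIC = "home/livingroom/lamp/set"   # hypothetical topic: control commands


def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe to all device status updates once connected.
    client.subscribe(STATUS_TOPIC)
    # Example control command: turn a lamp on.
    client.publish(COMMAND_TOPIC, json.dumps({"state": "ON"}))


def on_message(client, userdata, msg):
    # The AR app would route each update to the matching room/device in the 3D home model.
    update = json.loads(msg.payload)
    print(f"{msg.topic}: {update}")


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```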
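Likewise, the following is a minimal sketch of continuous speech-to-text using the Azure Speech SDK for Python (azure-cognitiveservices-speech); the subscription key and region are placeholders, and where this sketch prints the recognized text, the prototype would render it as live captions in the AR interface.

```python
# Minimal continuous speech-to-text sketch with the Azure Speech SDK (placeholder credentials).
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)


def show_caption(evt):
    # In the prototype, this text would appear as a live caption in the AR view.
    print("Caption:", evt.result.text)


recognizer.recognized.connect(show_caption)
recognizer.start_continuous_recognition()
input("Listening... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```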
AR and IoT: Data Transmission Framework
Functions Overview
Watch Prototype Video