The Need to Mix Voice and Touch Inputs
Voice user interfaces are now common; however, they are not always the best way to accomplish certain tasks. Users sometimes want to take the initiative and explore information themselves through their preferred modalities.
In this project, we explore mixed-initiative user experiences in which a user collaborates with a conversational AI agent on a large touchscreen. We design a conversational user interface that leverages multimodal signals, including visual, audio, touchscreen gestures, and even users' positions.