Dynamic Speaker Alignment for Interactive Dialog Systems
Natural language interfaces are becoming increasingly widespread, not only because they allow hands-free operation, but because every human can readily and intuitively speak their own language: dialog systems are easy to use, i.e. users require no additional training. Spoken dialog systems are also said to exploit many other properties of natural language: it is flexible, responsive, fast, and robust, and speaking is often perceived as an enjoyable activity.
It is well known that humans automatically align their language during dialog, particularly in task-oriented dialog, where mutual understanding is crucial for completing the task at hand. Alignment between user and dialog system not only makes the conversation itself more natural, but presumably also makes it cognitively less demanding for the user.
The goal of this project is to develop and implement a cognitive model of linguistic alignment for interactive dialog. Building on computational models of priming, we will make use of Fluid Construction Grammar's bidirectionality, which allows using the same representations for both parsing and production. The alignment model will be tested and evaluated in a dialog system case study with FCG as its natural language processing frontend.
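To make the priming-based approach concrete, the following is a minimal sketch of an activation-based priming model, not the actual FCG API: all class, method, and construction names here are hypothetical. Each construction carries an activation score that is boosted whenever the construction is used and decays over time, so the system comes to prefer constructions the interlocutor has recently used. Because the same update applies after both parsing and production, it mirrors the bidirectionality described above.

```python
class PrimingModel:
    """Minimal activation-based priming sketch (hypothetical names,
    not the Fluid Construction Grammar API)."""

    def __init__(self, boost=1.0, decay=0.8):
        self.boost = boost        # activation added on each use
        self.decay = decay        # multiplicative decay per time step
        self.activation = {}      # construction name -> activation level

    def tick(self):
        # Advance one time step: all activations decay toward zero.
        for cxn in self.activation:
            self.activation[cxn] *= self.decay

    def observe(self, cxn):
        # Called after a construction is used in parsing OR production;
        # the identical update in both directions reflects bidirectionality.
        self.activation[cxn] = self.activation.get(cxn, 0.0) + self.boost

    def choose(self, candidates):
        # Among competing constructions, prefer the most activated one.
        return max(candidates, key=lambda c: self.activation.get(c, 0.0))


model = PrimingModel()
model.observe("passive-cxn")   # the user just produced a passive
model.tick()                   # some time passes; activation decays
preferred = model.choose(["active-cxn", "passive-cxn"])
print(preferred)               # → passive-cxn
```

In this sketch the system, having just parsed a passive from the user, would itself prefer the passive construction when producing its next utterance, which is the alignment effect the project aims to model.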