%A Huang, Chien-Ming
%A Andrist, Sean
%A Sauppé, Allison
%A Mutlu, Bilge
%D 2015
%J Frontiers in Psychology
%G English
%K Human intentions, eye gaze, support vector machine, intention predictor, task intent, gaze patterns
%R 10.3389/fpsyg.2015.01049
%8 2015-July-24
%9 Original Research
%+ Mr Chien-Ming Huang, University of Wisconsin-Madison, Computer Sciences, Madison, Wisconsin, United States, cmhuang@cs.jhu.edu
%! Using Gaze Patterns to Predict Task Intent in Collaboration
%T Using gaze patterns to predict task intent in collaboration
%U https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01049
%V 6
%0 JOURNAL ARTICLE
%@ 1664-1078
%X In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a “worker” prepared a sandwich by adding ingredients requested by a “customer.” In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
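
The abstract describes an SVM trained on gaze-derived features to predict a customer's intended request. The sketch below is only a minimal illustration of that general setup, not the authors' implementation: the feature set, labels, and hyperparameters are assumptions made for the example, using scikit-learn's SVC on synthetic data.

# Illustrative sketch (not the paper's implementation): an SVM classifier
# over hypothetical gaze-pattern features for intent prediction.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors per candidate ingredient, e.g. fixation
# count, total dwell time, and time since the most recent fixation within
# a sliding window; these stand in for the paper's gaze features.
n_samples, n_features = 200, 3
X = rng.random((n_samples, n_features))
# Hypothetical binary labels: 1 if the gazed-at ingredient is the one
# the customer goes on to request, 0 otherwise.
y = rng.integers(0, 2, size=n_samples)

# RBF-kernel SVM with feature standardization; these are common defaults,
# not hyperparameters reported in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")

With real gaze recordings in place of the synthetic arrays, a classifier of this form could be evaluated on how early its correct predictions precede the spoken request, in the spirit of the roughly 1.8 s lead time reported in the abstract.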