Human communication is complex by nature. It involves modalities such as speech, facial expressions, hand gestures and body posture; it is, by definition, multimodal. Moreover, each participant brings different points of view, perspectives, objectives and interpretations. Despite this complexity, we are able to successfully understand each other largely through the non-verbal elements of the conversation. In fact, much of the meaning we derive from a conversation comes from these non-verbal elements rather than from the words spoken. Communicating in the absence of this information can therefore be difficult, as happens in Virtual Environments (VEs). The applications of VEs are quite diverse, ranging from simple multi-user chats to Virtual Learning Environments or Massively Multiplayer Online Role-Playing Games. Nevertheless, all share the goal of providing a framework for interaction among their users.
Work addressing this subject concludes that the lack of non-verbal elements from both speaker and listener limits successful communication in VEs.
Analyzing the current literature, two main obstacles to communication in VEs can be identified: the lack of feedback from the environment and the lack of meaningful content about the users' context. Several approaches tackle this problem, all revolving around the interpretation of the user's context. One line of research investigates the influence of five characteristics - anxiety, spatial intelligence, verbal intelligence, personality and computer experience - on the sense of presence. Another analyzes the user's behavior and interactions, using accelerometers to uncover the user's cultural background from patterns of gestural expressivity. The affective aspect of communication is also addressed by researchers, as emotions appear in virtually all complete models of human communication. Another particularly preponderant factor in human communication is stress; consequently, the lack of stress models in VEs constitutes an obstacle to effective communication between participants. The possibility of detecting cognitive and physical stress has been explored by monitoring keyboard interactions, with the eventual goal of detecting acute or gradual stress.
Unfortunately, despite its recognized importance, research in stress models for VEs is still scarce. Motivated by these factors, we now aim to develop a framework to model the user's context, focusing on stress, and to provide this information to a VE so that richer communication processes can be developed that allow its users to communicate in ways that are closer to face-to-face. One of our guiding lines is that such a framework must be non-intrusive and non-invasive, since less intrusive techniques facilitate more accurate and frequent monitoring. Accordingly, the estimation of stress will be based on the transparent analysis of the user's behavior and interaction patterns in real time.
We are also motivated by previous work in which we successfully measured changes in stress in a non-intrusive way using motion detection and handheld devices equipped with basic sensors. From this hardware we were able to extract features such as touch patterns, touch duration, touch intensity, touch accuracy, acceleration of the device, amount of movement and a measure of cognitive performance. In our preliminary tests, nearly 20 volunteer participants (students and teachers from the university) were asked to play a game that included performing mental calculations, once in a calm state and once in a stressed state. On average, each participant showed statistically significant differences in half of the parameters studied when comparing the calm and stressed measurements.
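The per-feature comparison described above can be sketched as follows. This is a minimal illustration, not our actual analysis pipeline: the feature names and sample values are hypothetical, and a real analysis would use exact degrees of freedom and p-values rather than a rough critical value.

```python
import statistics

# Hypothetical per-feature samples (synthetic values for illustration):
# each list holds one feature measured over several interactions.
calm = {
    "touch_duration_ms": [180, 175, 190, 185, 178],
    "touch_intensity":   [0.42, 0.40, 0.45, 0.41, 0.43],
}
stressed = {
    "touch_duration_ms": [120, 115, 130, 118, 125],
    "touch_intensity":   [0.44, 0.41, 0.43, 0.42, 0.45],
}

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Flag features whose |t| exceeds a rough small-sample critical value
# (~2.3 at the 5% level for these sample sizes).
for feature in calm:
    t = welch_t(calm[feature], stressed[feature])
    flag = "significant" if abs(t) > 2.3 else "not significant"
    print(f"{feature}: t = {t:.2f} ({flag})")
```

In this synthetic example, touch duration differs clearly between the two conditions while touch intensity does not, mirroring the observation that only a subset of the monitored parameters discriminates between calm and stressed states.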
Supported by these results, we now aim to acquire more appropriate and precise sensors that allow us to develop a more accurate framework for modeling stress. The main innovation of this approach will be to provide, in real time, meaningful context information to the users of a VE in an intuitive graphical way that complements what is being said with non-verbal information. This will result in more efficient communication processes that more accurately resemble the contextual richness of face-to-face communication.
A prototype will be developed with the collaboration of the consultants to support decision making in the financial and healthcare sectors. In fact, decisions in these environments are frequently taken when the participants reach a point of saturation in which they wish to end the process so much that they disregard part of the consequences of their decisions. With the implementation of this prototype, we expect to provide valuable information to the coordinator about the state of the participants so that they can better manage the process.