ArticuLab

HCII ~ School of Computer Science ~ Carnegie Mellon University

Connection Machines: Virtual peers who can build rapport to support students in learning

Our Connection Machines project investigates the nonverbal and verbal behaviors of two partners in a conversational setting in the context of rapport. We have collected data on teenagers conducting a peer tutoring task in linear algebra. We are currently investigating the participants' nonverbal behaviors in the dialogue, such as smiling, gaze, prosody, and pitch, as well as their verbal content, such as second-person pronoun use. We are also interested in designing and implementing automatic detection of features that arise from the interaction of the two interlocutors, such as entrainment and mimicry, from a multimodal perspective, which means looking at verbal and nonverbal behavior at the same time. Another interest of ours is using machine learning techniques to correlate these multimodal inputs with objective measures, such as the rapport between the two participants, in order to make predictions on new data. We are also interested in investigating the relationship between rapport and student learning gains.
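
As a rough illustration of this kind of modeling, the sketch below trains a regressor to predict an annotated rapport rating from multimodal features extracted per conversational slice. The feature names, rating scale, and data are hypothetical placeholders, not the project's actual pipeline.

```python
# Minimal sketch (hypothetical, not the project's pipeline): predict an
# annotated rapport rating for each slice of a tutoring session from
# multimodal features (smile, gaze, pitch, pronoun counts).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per conversational slice.
# Columns: [smile_ratio, mutual_gaze_ratio, mean_pitch_hz, second_person_pronouns]
X = np.array([
    [0.42, 0.31, 182.0, 3],
    [0.10, 0.05, 201.5, 0],
    [0.65, 0.48, 176.2, 5],
    [0.28, 0.22, 190.7, 2],
])
# Hypothetical rapport ratings (e.g., a 1-7 scale from third-party annotators).
y = np.array([5.5, 2.0, 6.5, 4.0])

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=2, scoring="neg_mean_absolute_error")
print("Mean absolute error per fold:", -scores)

# Fit on all slices and predict the rapport rating for a new, unseen slice.
model.fit(X, y)
print("Predicted rapport:", model.predict([[0.5, 0.4, 180.0, 4]]))
```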

Based on the human-human interaction data, we will elicit verbal and nonverbal patterns that can serve as guidelines for virtual human design. Ultimately, we will create a fully automatic embodied conversational agent (ECA) that can take the place of one of the participants in the peer tutoring setting and engage students in the learning task. More information.

Team
Justine Cassell
Alexandros Papangelis
Yoichi Matsuyama
Ran Zhao
Catharine Oertel
Samuel Mascarenhas
Tanmay Sinha
Torey Bocast
Anders Weinstein
Alumni
Zhou Yu
Evelyn Yarzebinski
David Gerritsen
Amy Ogan

Scaffolding Science Achievement in a Culturally Diverse Classroom: Bridging the Gap with Virtual Peers

In this project, we aim to address the systematically reduced standardized test scores of African American students compared to their Euro-American peers by using virtual peer technology to understand the role of dialect, and more broadly cultural congruence, on students' performance, and to help students achieve in the classroom. Our results show that African American children are more likely to switch between their vernacular dialect and the standard dialect (Mainstream American English) when working with a virtual peer partner who demonstrates this code-switching than when performing the same task with a human peer partner. Additionally, we have shown that African American children demonstrate greater improvement in science talk (verbal reasoning) after hearing an example of science talk from a virtual partner who spoke in the vernacular dialect than from one who provided the same content in the standard dialect. We have also demonstrated that children show greater fluency when speaking socially with a virtual partner that first introduces itself in the vernacular dialect than with one that introduces itself in the standard dialect. Regardless of partner, children also demonstrate increased fluency when they themselves are speaking in the vernacular dialect rather than the standard dialect.

Our current and future work builds on these results by incorporating language and code-switching models of African American Vernacular English into our system, creating content for additional educational domains in our virtual peer system, investigating the mechanisms behind improved performance with same-dialect virtual partners, and exploring the ethical considerations around developing culturally sensitive classroom systems. More information.

Team
Justine Cassell
Samantha Finkelstein
Anders Weinstein
Alumni
Callie Vaughn
Evelyn Yarzebinski
Amy Ogan
Brittany McLaughlin

Innovative Technologies For Autism

A special interest at the ArticuLab is to understand how children with Autism Spectrum Disorder (ASD) communicate with their peers. Successful peer interactions are vital to learning, future employment, and well-being. Moreover, although social interaction is a core deficit in autism, how to assess and treat social deficits is not well understood. To address these needs, we use a unique paradigm of human-computer interaction, called virtual peers, to promote a better understanding of the verbal and nonverbal communication skills of children with ASD, to support their assessment, and to inform the individualized design of interventions. Click here to learn more.

Team
Justine Cassell
Alumni
Andrea Tartaro
Miri Arie
Julia Merryman

Collaborative storytelling with a virtual peer

This project investigates the potential of a virtual peer to engage in collaborative storytelling by modeling the roles, speech acts, and turn-taking behaviors that children use during improvisational play. We are studying the engagement and educational potential of the collaborative system. Click here to learn more.

Team
Justine Cassell
Alumni
Andrea Tartaro
Katy Witmer
Austin Wang

JR Summit

The Junior Summit 1998 brought together 3,000 young people from 139 countries and was one of the first online communities of its kind. Click here for publications.

In our research on the Junior Summit, we are examining not only the more than 48,000 messages exchanged among these children and adolescents, but also the interviews and questionnaires concerning the effects of the Junior Summit that we collected five years later. We explore how language use relates to online community formation, leadership, and group dynamics. We also examine the design of this technology and its implications for educational technology, youth empowerment, civic engagement, and political participation. Click here to learn more.

Team
Justine Cassell
Dona Tversky
Brooke Foucault
Alastair Gill
Alex Markov

NUMACK

NUMACK (Northwestern University Multimodal Autonomous Conversational Kiosk) is an Embodied Conversational Agent (ECA) that gives directions around Northwestern's campus using a combination of speech, gestures, and facial expressions. Click here for publications.

The system interacts with human users by generating novel language and gestures in coordination, using a grammar-based computational model of language and a gesture planning system. These components work together to express information about the real world drawn from a domain knowledge base and an evolving model of context, or information state. Click here to learn more.
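
As a very rough sketch of how a direction-giving turn might draw on a knowledge base and an information state, the example below pairs a spoken instruction with a coordinated gesture. All names, data structures, and rules here are hypothetical illustrations, not NUMACK's actual grammar or gesture planner.

```python
# Minimal sketch (hypothetical): coordinating a spoken direction with a gesture
# chosen from the same piece of route knowledge, driven by a simple
# information state that tracks what has already been mentioned.
from dataclasses import dataclass, field

@dataclass
class InformationState:
    """Evolving model of the dialogue context."""
    landmarks_mentioned: set = field(default_factory=set)

# Hypothetical domain knowledge base: route segments around campus.
ROUTE_KB = [
    {"landmark": "the library", "direction": "left", "shape": "a tall building"},
    {"landmark": "the fountain", "direction": "straight", "shape": "a round basin"},
]

def plan_turn(segment, state):
    """Generate speech and a coordinated gesture for one route segment."""
    first_mention = segment["landmark"] not in state.landmarks_mentioned
    state.landmarks_mentioned.add(segment["landmark"])

    if segment["direction"] == "straight":
        speech = f"Go straight past {segment['landmark']}."
    else:
        speech = f"Turn {segment['direction']} at {segment['landmark']}."

    # On first mention, pair the utterance with an iconic gesture depicting the
    # landmark's shape; on later mentions, fall back to a pointing gesture.
    if first_mention:
        gesture = {"type": "iconic", "depicts": segment["shape"]}
    else:
        gesture = {"type": "deictic", "direction": segment["direction"]}
    return speech, gesture

state = InformationState()
for segment in ROUTE_KB:
    print(plan_turn(segment, state))
```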

Team
Justine Cassell
Paul Tepper
Rachel Baker
John Borland
Nathan Cantelmo