Affiliations: College of Sciences

Team Leader:

Faculty Mentor: Mustapha Mouloua, PhD

Team Size: 2

Open Spots: 0
Team Member Qualifications:

Technical & Research Readiness
- Basic familiarity with research methods in social/behavioral science.
- Comfort using common research tools (e.g., surveys, data entry platforms, scheduling tools).

Required Qualifications
- CITI Training: Human Subjects Research – Group 2: Social/Behavioral Research Investigators and Key Personnel; Research and HIPAA Privacy Protections.
- University Research Authorization: completion and approval of the URA (Undergraduate Research Authorization) form prior to participation.

Time Commitment
- Ability to commit to in-person lab hours as required by the project.
- Reliable availability throughout the agreed-upon research period.

Task Completion & Communication
- Commitment to completing assigned tasks by set deadlines.
- Openness to consistent communication with lab leadership and team members (email, shared platforms, meetings).

Professional & Ethical Conduct
- Adherence to ethical standards for human subjects research.
- Respect for participant confidentiality and data security protocols.

Organizational & Collaborative Skills
- Ability to follow protocols accurately and maintain organized records.
- Willingness to work collaboratively within a research team and accept feedback.
Description:
Conversational artificial intelligence systems are increasingly used in contexts that rely on emotional and social engagement, including education, mental health support, and companionship. Users frequently describe these systems as empathetic or understanding, suggesting that language-based interaction alone can elicit social responses typically reserved for human partners. However, the mechanisms underlying this perceived emotional connection remain poorly understood. Current approaches often attribute these effects to anthropomorphic interpretations or emergent “AI empathy” without isolating the specific features that drive such perceptions. This gap limits our ability to evaluate how humans cognitively interpret artificial agents and obscures the boundary between perceived understanding and genuine social cognition. Addressing this gap, this study examines how language-driven mirroring engages neural and autonomic systems typically involved in social cognition and affective processing, and how individual differences modulate susceptibility to HIAG-related dynamics. Specifically, this project asks whether syntactic linguistic mirroring in conversational AI is sufficient to elicit measurable neural and autonomic responses associated with social engagement, and whether individual differences shape susceptibility to this effect. By isolating linguistic structure from semantic content and linking it to neurophysiological responses, this work aims to clarify how social cognition can be evoked by artificial systems, with implications for neuroscience, human–AI interaction, and ethical AI design.