A new study has, for the first time, identified how the timing and sequence of eye contact, not merely its occurrence, shape how we understand and respond to others, including robots.
Led by cognitive neuroscientist Dr Nathan Caruana, researchers from the HAVIC Lab at Flinders University had 137 participants complete a block-building task with a virtual partner.
They found the most effective way to signal a request was a specific gaze sequence: look at an object, make eye contact, then look back at the same object. This sequence was the most likely to be interpreted as a call for help.
Dr Caruana says identifying these key eye-contact patterns offers new insight into how we read social cues in face-to-face interaction, and can inform the design of more intuitive, human-like technology.
“We found it's not just how often someone looks at you, or whether their gaze comes last in a sequence of eye movements, that matters; it's the context of those eye movements that makes the behaviour feel communicative and meaningful,” says Dr Caruana, from the College of Education, Psychology and Social Work.
“What's particularly fascinating is that people responded the same way whether the gaze behaviour came from a human or a robot.”
“These findings help decode one of our most instinctive behaviours and show how it can be used to build stronger connections, whether you're interacting with a colleague, a robot, or someone who communicates differently.”
“This builds on our earlier research showing that the human brain is wired to detect and respond to social cues, and that people naturally engage well with robots and virtual agents when these use the familiar non-verbal signals of everyday human interaction.”
The researchers say the findings have immediate applications for the design of social robots and virtual assistants, which are increasingly common in classrooms, workplaces and homes, and also extend well beyond technology.
“Understanding how eye contact works can also improve non-verbal communication training in high-pressure settings such as sport, defence and noisy workplaces,” Dr Caruana adds.
“It can also support people who rely heavily on visual cues, including those who are deaf or hard of hearing, and autistic people.”
The team is now extending the research to examine other factors that shape how gaze is perceived, including the duration of eye contact, repeated glances, and people's beliefs about their partner (human, AI or computer-controlled).
The HAVIC Lab is also running several applied studies of how people perceive and interact with social robots in settings such as classrooms and factories.
“These subtle cues are the building blocks of social connection,” Dr Caruana says.
“By understanding them better, we can create smarter technologies and training that help people connect with more clarity and confidence.”
The HAVIC Lab maintains close ties with the Flinders Institute for Mental Health and Wellbeing and serves as a key collaborator in the Flinders Autism Research Initiative.