Jonathan Vitale is a Postdoctoral Research Fellow at the Innovation and Enterprise Research Laboratory (The Magic Lab).
Jonathan holds a Bachelor's degree in Digital Communication and a Master's degree in Computer Science from the University of Milan.
In his PhD research he investigated human social and emotional intelligence and, specifically, how human faces may be processed and represented to extract identity and facial expression information.
Jonathan is a member of the Cognitive Science Society. He serves as a peer reviewer for conferences and journals in cognitive science and social robotics, such as the Annual Meeting of the Cognitive Science Society, the International Conference on Social Robotics, the IEEE International Symposium on Robot and Human Interactive Communication and the International Journal of Social Robotics.
He was part of the local organising committee of the 6th International Conference on Social Robotics, co-chair of the 1st Workshop on Attention for Social Intelligence, and co-organiser of the 1st Workshop on Human-Robot Engagement in the Home, Workplace and Public Spaces at IJCAI 2017.
Jonathan's research interests cover human cognition and how computational models of human cognition can advance artificial intelligence technologies and healthcare in society.
Topics of particular interest include: face perception and processing, interactions of emotional signals with decision-making processes, embodied cognition theories, attention and executive functioning, and psychological aspects of human-robot interaction.
Vitale, J., Williams, M.-A., Johnston, B. & Boccignone, G. 2014, 'Affective facial expression processing via simulation: A probabilistic model', Biologically Inspired Cognitive Architectures, vol. 10, pp. 30-41.
Understanding the mental state of other people is an important skill for intelligent agents and robots to operate within social environments. However, the mental processes involved in 'mind-reading' are complex. One explanation of such processes is Simulation Theory, which is supported by a large body of neuropsychological research. Yet, determining the best computational model or theory to use in simulation-style emotion detection is far from settled. In this work, we use Simulation Theory and neuroscience findings on Mirror-Neuron Systems as the basis for a novel computational model for handling affective facial expressions. The model is based on a probabilistic mapping of observations from multiple identities onto a single fixed identity ('internal transcoding of external stimuli'), and then onto a latent space ('phenomenological response'). Together with the proposed architecture, we present some promising preliminary results.
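As an illustration of the two-stage mapping described in this abstract, the sketch below implements a toy version in Python. It is not the paper's actual model: the linear-Gaussian transcoder, the PCA latent space and the synthetic feature vectors are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_transcoder(X_external, X_internal, alpha=1e-2):
    # Stage 1 ("internal transcoding of external stimuli"): map observations
    # from many identities onto a single fixed internal identity. Here a
    # linear-Gaussian mapping fitted by ridge regression (a MAP estimate
    # under a Gaussian prior on the weights).
    d = X_external.shape[1]
    return np.linalg.solve(X_external.T @ X_external + alpha * np.eye(d),
                           X_external.T @ X_internal)

def fit_latent_space(X_internal, n_latent=2):
    # Stage 2 ("phenomenological response"): low-dimensional latent axes,
    # here simply the top principal directions of the internal configurations.
    Xc = X_internal - X_internal.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_latent].T

# Toy data: 200 observations of 10-d facial features from many identities,
# paired with the same expressions rendered on the fixed internal identity.
X_ext = rng.normal(size=(200, 10))
X_int = X_ext @ (0.5 * rng.normal(size=(10, 10))) + 0.1 * rng.normal(size=(200, 10))

W = fit_transcoder(X_ext, X_int)     # stage 1: internal transcoding
P = fit_latent_space(X_int)          # stage 2: phenomenological response

new_face = rng.normal(size=(1, 10))  # expression on an unseen identity
latent = ((new_face @ W) - X_int.mean(axis=0)) @ P
print(latent)                        # its coordinates in the latent space
```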
Ojha, S., Vitale, J. & Williams, M.-A. 2017, 'A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning', Annual Meeting of the Cognitive Science Society, London, pp. 2833-2838.
According to cognitive appraisal theory, emotion in an individual results from how a situation or event is evaluated by that individual. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal, because it requires a knowledge base that is difficult to establish and manage, especially in systems designed for highly complex scenarios, such as social robots. In addition, according to appraisal theory, an event might elicit more than one emotion in an individual at a time. Hence, determining which emotional state should be attributed in relation to a specific event is another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, we propose that it can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating the extensibility and integrability of the emotional system; and (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. This contribution therefore offers a simpler and more generalisable account of cognitive appraisal theory and promotes the integration of theories of emotion with ethics studies, which is currently often neglected in the literature.
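To make the two-layer idea concrete, here is a minimal Python sketch. The candidate emotion sets, the valence threshold and the violates_norm flag are hypothetical stand-ins for whatever appraisal inputs and ethical-reasoning module an actual system would provide; the sketch only shows how ethical reasoning could select among emotions elicited by valence alone.

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    effect_on_self: float   # perceived positive/negative effect (assumed input)
    violates_norm: bool     # verdict of an ethical-reasoning module (assumed)

def appraise(event):
    # Layer 1: candidate emotions from perceived valence alone,
    # without any domain-specific rule base.
    if event.effect_on_self >= 0:
        return ["joy", "pride"]
    return ["distress", "anger", "guilt"]

def attribute_emotion(event):
    # Layer 2: ethical reasoning selects the final state among candidates.
    candidates = appraise(event)
    if event.violates_norm:
        # A norm violation shifts attribution toward self-blame, even when
        # the event's perceived effect on the agent is positive.
        return "guilt" if "guilt" in candidates else "shame"
    return candidates[0]

e = Event("took credit for a colleague's work",
          effect_on_self=0.4, violates_norm=True)
print(attribute_emotion(e))   # -> "shame"
```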
Tonkin, M., Vitale, J., Ojha, S., Clark, J., Pfeiffer, S., Judge, W., Wang, X. & Williams, M.-A. 2017, 'Embodiment, Privacy and Social Robots: May I Remember You?', Social Robotics: 9th International Conference, ICSR 2017, Springer International Publishing, Tsukuba, Japan, pp. 506-515.
As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. Social interaction between human and robot may involve anthropomorphising of the robot by the user, which may prompt the user to disclose private or sensitive information. To understand these possible impacts, we conducted an exploratory study with a novel privacy measure, examining changes in users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.
Tonkin, M., Vitale, J., Ojha, S., Williams, M.-A., Fuller, P., Judge, W. & Wang, X. 2017, 'Would You Like to Sample? Robot Engagement in a Shopping Centre', 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, pp. 42-49.
Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited proportion of the research literature explores robot applications in such spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. In this paper we therefore report an experiment testing a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot is able to engage customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. We investigate this phenomenon and offer possible explanations to be explored and tested in subsequent research.
Vitale, J., Johnston, B. & Williams, M.-A. 2017, 'Facial Motor Information is Sufficient for Identity Recognition', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 3447-3452.
The face is a central communication channel, providing information about the identities of our interaction partners and about their potential mental states as expressed by motor configurations. Although it is well known that infants' ability to recognise people follows a developmental process, it remains an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing interact during this development. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development is sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space that can assist facial identity and expression recognition. We validate our hypothesis through computational simulations and suggest potential implications of this account with respect to the available findings in face processing.
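A minimal Python sketch of this hypothesis: faces are represented as vectors of facial motor information (synthetic stand-ins here for, e.g., action-unit intensities), the face-space is a PCA embedding of those vectors, and identity recognition is nearest-neighbour lookup in that space. This illustrates the face-space idea under these assumptions; it is not the simulations reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_identities, samples_per_id, n_features = 5, 20, 12
# Each identity has a characteristic motor "signature"; individual stimuli
# vary around it with expression-driven noise.
centres = rng.normal(scale=2.0, size=(n_identities, n_features))
X = np.vstack([c + rng.normal(scale=0.5, size=(samples_per_id, n_features))
               for c in centres])
labels = np.repeat(np.arange(n_identities), samples_per_id)

# Build the face-space: centre the data and keep the top principal axes.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
face_space = (X - mean) @ Vt[:3].T

def recognise(stimulus):
    # Identity = label of the nearest stored face in face-space.
    point = (stimulus - mean) @ Vt[:3].T
    return labels[np.argmin(np.linalg.norm(face_space - point, axis=1))]

probe = centres[2] + rng.normal(scale=0.5, size=n_features)
print(recognise(probe))   # -> 2
```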
Vitale, J., Williams, M.-A. & Johnston, B. 2016, 'The face-space duality hypothesis: a computational model', Proceedings of the 38th Annual Conference of the Cognitive Science Society, Cognitive Science Society, Philadelphia, pp. 514-519.
Vitale, J., Williams, M.-A. & Johnston, B. 2014, 'Socially impaired robots: Human social disorders and robots' socio-emotional intelligence', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 350-359.
Social robots need intelligence in order to safely coexist and interact with humans. Robots that lack the ability to understand others and to empathise might pose a societal risk, and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with even a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.
Wang, X., Williams, M.-A., Gardenfors, P., Vitale, J., Abidi, S., Johnston, B., Kuipers, B. & Huang, A. 2014, 'Directing human attention with pointing', 2014 RO-MAN: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, IEEE, Edinburgh, Scotland, pp. 174-179.
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These results can guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
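For intuition about how an on-screen pointing target can be estimated geometrically, the sketch below intersects the shoulder-to-hand ray with the screen plane and returns the nearest marker. This is a generic construction, not the code used in the study; all coordinates and names are assumptions made for the example.

```python
import numpy as np

def pointing_target(shoulder, hand, plane_point, plane_normal, markers):
    # Intersect the shoulder->hand ray with the screen plane, then pick
    # the marker closest to the intersection point.
    direction = hand - shoulder
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None                  # ray is parallel to the screen
    t = ((plane_point - shoulder) @ plane_normal) / denom
    if t < 0:
        return None                  # screen lies behind the gesture
    hit = shoulder + t * direction
    return int(np.argmin(np.linalg.norm(markers - hit, axis=1)))

markers = np.array([[0.0, 1.0, 2.0], [0.5, 1.0, 2.0], [1.0, 1.0, 2.0]])
idx = pointing_target(shoulder=np.array([0.0, 1.2, 0.0]),
                      hand=np.array([0.2, 1.1, 0.5]),
                      plane_point=np.array([0.0, 0.0, 2.0]),   # screen at z = 2
                      plane_normal=np.array([0.0, 0.0, 1.0]),
                      markers=markers)
print(idx)   # -> 2
```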
The aim of this work is to investigate a system able to detect facial expressions and to use them in a model for affect recognition, in order to further investigate models of social interaction mediated by social signals.
Vitale, J. 2010, 'Analisi e implementazione di un sistema neuro-fuzzy in architettura domotica' ('Analysis and implementation of a neuro-fuzzy system in a home automation architecture').