Jonathan Vitale is a Postdoctoral Research Fellow at the Innovation and Enterprise Research Laboratory (The Magic Lab).
Jonathan has a Bachelor's degree in Digital Communication and a Master's degree in Computer Science from the University of Milan, and a PhD in Information Technology from the University of Technology Sydney.
In his PhD research he investigated, from a computational perspective, how human faces are processed and represented by means of bodily motor representations to extract identity and facial expression information, and how such facial features affect human social and emotional cognition.
Jonathan serves as a peer reviewer for conferences and journals in cognitive science and social robotics, such as the Annual Meeting of the Cognitive Science Society, the International Conference on Social Robotics, the IEEE International Symposium on Robot and Human Interactive Communication and the International Journal of Social Robotics.
He was part of the local organising committee at the 6th International Conference on Social Robotics, co-chair of the 1st workshop on Attention for Social Intelligence and co-organiser of the 1st workshop on Human-Robot Engagement in the Home, Workplace and Public Spaces at IJCAI 2017.
He actively engages with industry research partners to explore the design of social robotics applications in public spaces, such as shopping malls and airports.
Can supervise: YES
Jonathan's research interests cover human cognition, computational models of cognition applied to artificial intelligence, and the design and development of social robotics applications for social good.
Topics of particular interest are: face perception and processing, interactions of emotional signals with decision-making processes, embodied cognition theories, attention and executive functioning, ethically-driven technology design and psychological aspects of human-robot interactions.
Vitale, J, Williams, M-A, Johnston, B & Boccignone, G 2014, 'Affective facial expression processing via simulation: A probabilistic model', Biologically Inspired Cognitive Architectures, vol. 10, pp. 30-41.
Understanding the mental state of other people is an important skill for intelligent agents and robots operating within social environments. However, the mental processes involved in 'mind-reading' are complex. One explanation of such processes is Simulation Theory, which is supported by a large body of neuropsychological research. Yet, determining the best computational model or theory to use in simulation-style emotion detection is far from being straightforward. In this work, we use Simulation Theory and neuroscience findings on Mirror-Neuron Systems as the basis for a novel computational model for handling affective facial expressions. The model is based on a probabilistic mapping of observations from multiple identities onto a single fixed identity ('internal transcoding of external stimuli'), and then onto a latent space ('phenomenological response'). Together with the proposed architecture we present some promising preliminary results.
Gardenfors, P, Williams, M-A, Johnston, B, Billingsley, R, Vitale, J, Peppas, P & Clark, J 2018, 'Event boards as tools for holistic AI', International Workshop on Artificial Intelligence and Cognition 2018, Palermo, Italy.
Ojha, S, Gudi, SLKC, Vitale, J, Williams, MA & Johnston, B 2019, 'I remember what you did: A behavioural guide-robot', Advances in Intelligent Systems and Computing, pp. 273-282.
© Springer International Publishing AG, part of Springer Nature 2019. Robots are coming closer to human society following the birth of the emerging field of Social Robotics. Social Robotics is a branch of robotics that specifically pertains to the design and development of robots that can be employed in human society for the welfare of mankind. Applications of social robots range from household domains such as elderly and child care to educational domains like personal psychological training and tutoring. If such robots are intended to work closely with young children, it is extremely important to ensure that they teach not only facts but also important social lessons, such as knowing what is right and what is wrong, because we do not want to produce a generation of children that knows only facts but not morality. In this paper, we present a mechanism used in our computational model (i.e., EEGS) for social robots, in which the emotions and behavioural responses of the robot depend on how a person has previously treated the robot. For example, if a person has previously treated the robot well, it will respond accordingly, while if the person has previously mistreated the robot, it will make the person realise the issue. A robot with such a quality can be very useful in teaching good manners to future generations of children.
Herse, S, Vitale, J, Ebrahimian, D, Tonkin, M, Ojha, S, Sidra, S, Johnston, B, Phillips, S, Gudi, SLKC, Clark, J, Judge, W & Williams, MA 2018, 'Bon Appetit! Robot Persuasion for Food Recommendation', ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 125-126.
© 2018 Authors. The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, a robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when using a specific recommendation sentence. These preliminary results suggest that human-like features of an agent may serve to boost persuasion in recommendation systems. However, the extent of the effect is determined by the nature of the given recommendation.
Vitale, J, Tonkin, M, Herse, S, Ojha, S, Clark, J, Williams, M, Wang, X & Judge, W 2018, 'Be More Transparent and Users Will Like You: A Robot Privacy and User Experience Design Experiment', Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, IL, USA, pp. 379-387.
Herse, S, Vitale, J, Tonkin, M, Ebrahimian, D, Ojha, S, Johnston, B, Judge, W & Williams, MA 2018, 'Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System', RO-MAN 2018: The 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, China, pp. 7-14.
© 2018 IEEE. When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a non-optimal choice to the user. While no effect of embodiment was found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.
Ojha, S, Vitale, J, Raza, SA, Billingsley, R & Williams, MA 2018, 'Implementing the Dynamic Role of Mood and Personality in Emotion Processing of Cognitive Agents', Sixth Annual Conference on Advances in Cognitive Systems, Stanford, California.
Tonkin, M, Vitale, J, Herse, S, Williams, MA, Judge, W & Wang, X 2018, 'Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport', ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 407-415.
© 2018 ACM. Research in robotics and human-robot interaction is becoming increasingly mature, and more affordable social robots are being released commercially. Thus, industry is currently demanding ideas for viable commercial applications that situate social robots in public spaces and enhance customers' experience. However, the present literature in human-robot interaction does not provide a clear set of guidelines and a methodology to (i) identify commercial applications for robotic platforms that place users' needs at the centre of the discussion and (ii) ensure the creation of a positive user experience. With this paper we propose to fill this gap by providing a methodology for the design of robotic applications with these desired features, suitable for adoption by researchers, industry, business and government organisations. As we show in this paper, we successfully employed this methodology in an exploratory field study involving the trial implementation of a commercially available social humanoid robot at an airport.
Tonkin, M, Vitale, J, Ojha, S, Clark, J, Pfeiffer, S, Judge, W, Wang, X & Williams, M 2017, 'Embodiment, Privacy and Social Robots: May I Remember You?', Social Robotics: 9th International Conference, ICSR 2017, Springer International Publishing, Tsukuba, Japan, pp. 506-515.
As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. The social interaction between human and robot may involve anthropomorphising of the robot by the user, and this may prompt the user to disclose private or sensitive information. To understand these possible impacts we conducted an exploratory study with a novel privacy measure, examining changes in users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.
Ojha, S, Vitale, J & Williams, M-A 2017, 'A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 2833-2838.
According to cognitive appraisal theory, emotion in an individual is the result of how a situation or event is evaluated by the individual. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal: it requires providing a knowledge base that is particularly difficult to establish and manage, especially in systems designed for highly complex scenarios, such as social robots. In addition, according to appraisal theory, an individual might elicit more than one emotion at a time in reaction to an event. Hence, determining which emotional state should be attributed in relation to a specific event is another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, we propose that this process can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating the extensibility and integrability of the emotional system; and (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. This contribution is therefore particularly significant in providing a simpler and more generalisable explanation of cognitive appraisal theory and in promoting the integration between theories of emotion and ethics studies, currently often neglected by the available literature.
Tonkin, M, Vitale, J, Ojha, S, Williams, M-A, Fuller, P, Judge, W & Wang, X 2017, 'Would You Like to Sample? Robot Engagement in a Shopping Centre', 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, Lisbon, Portugal, pp. 42-49.
Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited portion of the research literature explores robot applications in public spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. Therefore, in this paper we conduct an experiment to test a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot is able to engage customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. In this paper we investigate this phenomenon, with possible explanations ready to be explored and tested in subsequent research.
Vitale, J, Johnston, B & Williams, MA 2017, 'Facial Motor Information is Sufficient for Identity Recognition', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 3447-3452.
The face is a central communication channel providing information about the identities of our interaction partners and their potential mental states, expressed through motor configurations. Although it is well known that infants' ability to recognise people follows a developmental process, it is still an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing interact during this developmental process. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development is sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space able to assist facial identity and expression recognition. We validate our hypothesis through computational simulations and suggest potential implications of this understanding with respect to the available findings in face processing.
Vitale, J, Williams, M-A & Johnston, B 2016, 'The face-space duality hypothesis: a computational model', Proceedings of the 38th Annual Conference of the Cognitive Science Society, Cognitive Science Society, Philadelphia, pp. 514-519.
Vitale, J, Williams, M-A & Johnston, B 2014, 'Socially impaired robots: Human social disorders and robots' socio-emotional intelligence', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 350-359.
Social robots need intelligence in order to safely coexist and interact with humans. Robots that lack the ability to understand others and to empathise might pose a societal risk, and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.
Wang, X, Williams, M-A, Gardenfors, P, Vitale, J, Abidi, S, Johnston, B, Kuipers, B & Huang, A 2014, 'Directing human attention with pointing', Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, IEEE, Edinburgh, Scotland, pp. 174-179.
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
The aim of this work is to investigate a system able to detect facial expressions and to use them in a model for affect recognition, in order to further explore models of social interactions mediated by social signals.
Vitale, J 2010, 'Analisi e implementazione di un sistema neuro-fuzzy in architettura domotica' [Analysis and implementation of a neuro-fuzzy system in a home automation architecture].
Previous industry research partners included: Commonwealth Bank of Australia Innovation Lab, Stockland Property Group and Air New Zealand.