Dr Jonathan Vitale is a leading early-career researcher in human-machine interaction and social robotics, using experimental data from human studies to develop, test and validate computational models of human cognition. Jonathan is a senior member of UTS's World Champion team at RoboCup@Home 2019. He has also contributed significantly to the design, deployment and testing of cutting-edge commercial social robot applications in public spaces. These applications include the first robotic flight attendant at Sydney International Airport, in partnership with the Commonwealth Bank of Australia and Air New Zealand; a robotic retail assistant at a Stockland shopping centre; a robotic facilitator of the NSW startup ecosystem at the Sydney Startup Hub; and a multilingual social robot receptionist at Fairfield Hospital, in partnership with the South Western Sydney Local Health District.
Despite being at an early stage of his research career, Dr Vitale has already published many high-quality peer-reviewed papers, including publications at top-tier international conferences such as the Annual Meeting of the Cognitive Science Society and the International Conference on Human-Robot Interaction.
He was a member of the Local Organising Committee for the International Conference on Social Robotics 2014, where he also co-organised and chaired a workshop. In 2017 he co-organised another workshop at the prestigious International Joint Conference on Artificial Intelligence (IJCAI). Dr Vitale actively serves as a peer reviewer for many respected conferences and journals in human-robot interaction and cognitive science, such as the IEEE International Conference on Robot and Human Interactive Communication, the AAAI Conference on Artificial Intelligence, and the International Journal of Social Robotics.
Jonathan's national leadership in human-robot interaction research is demonstrated by his establishment and maintenance of external research partnerships with industry and government institutions to conduct innovative human-robot interaction experiments in public spaces. Dr Vitale is listed as a Chief Investigator on the first Australian social robotics industry project, in partnership with the Commonwealth Bank of Australia, and he provided significant technical and research leadership in developing and coordinating partnerships with other external institutions, such as Jobs for NSW and the South Western Sydney Local Health District.
Dr Vitale conducted his PhD and postdoctoral research at the UTS Innovation and Enterprise Research Lab, also known as the Magic Lab, Australia's leading lab in social robotics research. The lab offers a global network of mentors and close collaborators, including Steve Wozniak, co-founder of Apple Computer, and Prof. Peter Gardenfors (Lund University, Sweden), a leading authority in cognitive science research with extensive experience in models of human cognition.
At UTS, Jonathan conducted research studies using cutting-edge technologies, including humanoid robots (PR2, Nao and Pepper), smart devices (Google Home, Amazon Echo, smart wearables, motion-capture suits and IoT devices), and high-performance computers for running deep learning algorithms (Dell Alienware machines). Further technologies were employed through external industry partnerships, such as the PAL Robotics REEM humanoid robot. The facilities, network and reputation of the laboratory and its research team attract UTS's most talented students, who take the opportunity to work on research projects delivering innovative real-world AI applications.
One significant contribution of Dr Vitale to the field of human-robot interaction is the first design methodology for commercial social robot applications, published at the International Conference on Human-Robot Interaction 2018. This methodology is used to design commercial social robot applications that can be deployed and tested in public spaces. The evaluation of these applications includes the design and execution of human-robot interaction experiments, advancing human-robot interaction research with results from 'in-the-wild' real-world field studies. Since 2018 the paper has been cited more than 15 times, contributing to the design of many real-world social robot applications and related experiments. Other significant contributions to human-machine interaction and cognitive science include works providing a better understanding of people's privacy and its effects on their trust towards technologies and their decision-making processes.
The Magic Lab is part of the UTS Centre for Artificial Intelligence (CAI) in the Faculty of Engineering and IT. CAI officially opened in March 2017 and has achieved great success in exploring theoretical research issues in AI and driving AI industry impact. It has become Australia's largest AI centre, with 30 staff (including one Laureate Fellow, one ATSE researcher, four distinguished professors, two IEEE Fellows and one highly cited researcher), ten postdocs and 180 PhD students. CAI staff and students are currently working on around 20 ARC projects and 30 industry projects. In 2019, CAI published over 250 papers in leading journals and conferences. As an internationally leading centre for AI research, CAI provides an excellent research environment, including a research collaboration framework, a PhD student training program and a mentoring system. UTS Computer Science is ranked in the top 30 worldwide (ARWU), and CAI is a pivotal contributor to this outstanding achievement. UTS was rated 5 out of 5 ("well above world standard") in FoR Code 0801 (Artificial Intelligence) in the recent ERA 2018.
Can supervise: YES
Jonathan's research interests cover human cognition, computational models of cognition applied to artificial intelligence, and the design and development of social robotics applications for social good.
Topics of particular interest include face perception and processing, the interaction of emotional signals and other psychological processes with human decision-making, embodied cognition theories, attention and executive functioning, ethically driven technology design, and the psychological aspects of human-robot interaction.
Dr Vitale is also active in teaching. He is an experienced supervisor and mentor of HDR, undergraduate and postgraduate students. In 2019, Jonathan assisted in the design of a new subject in robotics and data science at UTS, and he has tutored and lectured in university subjects since 2016.
Tonkin, M, Vitale, J, Herse, S, Raza, SA, Madhisetty, S, Kang, L, Vu, TD, Johnston, B & Williams, MA 2019, 'Privacy First: Designing Responsible and Inclusive Social Robot Applications for in the Wild Studies', 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE International Conference on Robot and Human Interactive Communication, IEEE, New Delhi, India.
Deploying social robot applications in public spaces for conducting in-the-wild studies is a significant challenge, but critical to the advancement of social robotics. Real-world environments are complex, dynamic and uncertain. Human-robot interactions can be unstructured and unanticipated. In addition, when the robot is intended to be a shared public resource, management issues such as user access and user privacy arise, leading to design choices that can impact users' trust and the adoption of the designed system. In this paper we propose a user registration and login system for a social robot and report on people's preferences when registering their personal details with the robot to access services. This study is the first iteration of a larger body of work investigating potential use cases for the Pepper social robot at a government-managed centre for startups and innovation. We prototyped and deployed a system for user registration with the robot, which gives users control over registering for and accessing services with either face recognition technology or a QR code. The QR code played a critical role in increasing the number of users adopting the technology. We discuss the need to develop social robot applications that responsibly adhere to privacy principles, are inclusive, and cater for a broad spectrum of people.
Ojha, S, Vitale, J, Raza, SA, Billingsley, R & Williams, M-A 2019, 'Integrating Personality and Mood with Agent Emotions', AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), ACM, Montreal, Canada, pp. 2147-2149.
Ojha, S, Gudi, SLKC, Vitale, J, Williams, MA & Johnston, B 2018, 'I remember what you did: A behavioural guide-robot', Advances in Intelligent Systems and Computing, International Conference on Robot Intelligence Technology and Applications, Kuala Lumpur, Malaysia, pp. 273-282.
Robots are coming closer to human society following the emergence of the field of social robotics, a branch of robotics concerned with the design and development of robots that can be employed in human society for the welfare of humankind. The applications of social robots range from household domains, such as elderly and child care, to educational domains, such as personal psychological training and tutoring. If such robots are intended to work closely with young children, it is extremely important to ensure that they teach not only facts but also important social lessons, such as knowing what is right and what is wrong, because we do not want to raise a generation of children who know only facts but not morality. In this paper, we present a mechanism used in our computational model for social robots (EEGS) in which the emotions and behavioural responses of the robot depend on how a person has previously treated it. For example, if a person has previously treated the robot well, it will respond accordingly, whereas if a person has previously mistreated the robot, it will make the person aware of this. A robot with such a quality can be very useful in teaching good manners to future generations of children.
Pfeiffer, S, Ebrahimian, D, Herse, S, Le, TN, Leong, S, Lu, B, Powell, K, Raza, SA, Sang, T, Sawant, I, Tonkin, M, Vinaviles, C, Vu, TD, Yang, Q, Billingsley, R, Clark, J, Johnston, B, Madhisetty, S, McLaren, N, Peppas, P, Vitale, J & Williams, MA 2019, 'UTS Unleashed! RoboCup@Home SSPL Champions 2019', RoboCup 2019: Robot World Cup XXIII, Robot World Cup, Springer International Publishing, Sydney, NSW, Australia, pp. 603-615.
This paper summarizes the approaches employed by Team UTS Unleashed! to take first place in the 2019 RoboCup@Home Social Standard Platform League. First, our system architecture is introduced. Next, we present our approach to the basic skills needed for a strong performance in the competition, and describe several implementations for the competition tests. Finally, our software development methodology is discussed.
Ojha, S, Vitale, J, Raza, SA, Billingsley, R & Williams, MA 2018, 'Implementing the Dynamic Role of Mood and Personality in Emotion Processing of Cognitive Agents', Sixth Annual Conference on Advances in Cognitive Systems, Annual Conference on Advances in Cognitive Systems, Stanford, California.
Tonkin, M, Vitale, J, Herse, S, Williams, MA, Judge, W & Wang, X 2018, 'Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport', ACM/IEEE International Conference on Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 407-415.
Research in robotics and human-robot interaction is becoming increasingly mature, and more affordable social robots are being released commercially. Industry is therefore demanding ideas for viable commercial applications that situate social robots in public spaces and enhance the customer experience. However, the present literature in human-robot interaction does not provide a clear set of guidelines and a methodology to (i) identify commercial applications for robotic platforms that position users' needs at the centre of the discussion and (ii) ensure the creation of a positive user experience. With this paper we propose to fill this gap by providing a methodology for the design of robotic applications with these desired features, suitable for adoption by researchers, industry, business and government organisations. As we show in this paper, we successfully employed this methodology in an exploratory field study involving the trial implementation of a commercially available social humanoid robot at an airport.
Vitale, J, Tonkin, M, Herse, S, Ojha, S, Clark, J, Williams, M, Wang, X & Judge, W 2018, 'Be More Transparent and Users Will Like You: A Robot Privacy and User Experience Design Experiment', Proceedings of 2018 ACM/IEEE International Conference on Human-Robot Interaction, International Conference on Human-Robot Interaction, ACM, Chicago, IL, USA, pp. 379-387.
Herse, S, Vitale, J, Ebrahimian, D, Tonkin, M, Ojha, S, Sidra, S, Johnston, B, Phillips, S, Gudi, SLKC, Clark, J, Judge, W & Williams, MA 2018, 'Bon Appetit! Robot Persuasion for Food Recommendation', ACM/IEEE International Conference on Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 125-126.
The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, a robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when a specific recommendation sentence is used. These preliminary results suggest that the human-like features of an agent may serve to boost persuasion in recommendation systems; however, the extent of the effect is determined by the nature of the given recommendation.
Herse, S, Vitale, J, Tonkin, M, Ebrahimian, D, Ojha, S, Johnston, B, Judge, W & Williams, MA 2018, 'Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System', RO-MAN 2018: The 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE International Symposium on Robot and Human Interactive Communication, IEEE, China, pp. 7-14.
When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a non-optimal choice to the user. While no effect of embodiment was found for trust, the inclusion of preference elicitation features significantly increased user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to human-robot interaction, and call for further investigation into the development and maintenance of trust between robot and user.
Ojha, S, Vitale, J & Williams, M-A 2017, 'A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 2833-2838.
According to cognitive appraisal theory, emotion in an individual is the result of how a situation or event is evaluated by that individual. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal: it requires providing a knowledge base that is particularly difficult to establish and manage, especially in systems designed for highly complex scenarios, such as social robots. In addition, according to appraisal theory, an individual might experience more than one emotion at a time in reaction to an event. Hence, determining which emotional state should be attributed in relation to a specific event is another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, we propose that this process can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating the extensibility and integrability of the emotional system; and (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. This contribution is therefore particularly significant in providing a simpler and more generalisable explanation of cognitive appraisal theory, and in promoting the integration between theories of emotion and ethics studies, currently often neglected by the available literature.
Tonkin, M, Vitale, J, Ojha, S, Clark, J, Pfeiffer, S, Judge, W, Wang, X & Williams, M 2017, 'Embodiment, Privacy and Social Robots: May I Remember You?', Social Robotics: 9th International Conference, ICSR 2017, International Conference on Social Robotics, Springer International Publishing, Tsukuba, Japan, pp. 506-515.
As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. The social interaction between human and robot may involve anthropomorphising of the robot by the user, and this may prompt the user to disclose private or sensitive information. To understand these possible impacts, we conducted an exploratory study with a novel privacy measure to examine changes in users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.
Tonkin, M, Vitale, J, Ojha, S, Williams, M-A, Fuller, P, Judge, W & Wang, X 2017, 'Would You Like to Sample? Robot Engagement in a Shopping Centre', 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), International Symposium on Robot and Human Interactive Communication, IEEE, Lisbon, Portugal, pp. 42-49.
Robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited portion of the research literature explores robot applications in public spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. In this paper we therefore conduct an experiment to test a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot is able to engage customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. In this paper we investigate this phenomenon, with possible explanations ready to be explored and tested in subsequent research.
Vitale, J, Williams, M-A & Johnston, B 2016, 'The face-space duality hypothesis: a computational model', Proceedings of the 38th Annual Conference of the Cognitive Science Society, Annual Conference of the Cognitive Science Society, Cognitive Science Society, Philadelphia, pp. 514-519.
Wang, X, Williams, M-A, Gardenfors, P, Vitale, J, Abidi, S, Johnston, B, Kuipers, B & Huang, A 2014, 'Directing human attention with pointing', Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, IEEE International Symposium on Robot and Human Interactive Communication, IEEE, Edinburgh, Scotland, pp. 174-179.
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
Vitale, J 2018, 'Face Perception and Cognition Using Motor Representations: A Computational Approach'.
Face perception and cognition skills are critically needed by humans to be proficient in social cognition. Social cognition is defined as the ability to make sense of others' actions and react appropriately to them. For example, determining the identity of an interaction partner is an essential precondition to engaging socially with people. In addition, recognising facial expressions contributes to regulating human social exchanges. In fact, it assists in determining the mental state of the interaction partner and selecting the best subsequent behavioural response.
Humans show a preference for faces from a very early stage. This preference is maintained throughout their lives and contributes to the acquisition of face recognition skills, which develop with time and experience. Nevertheless, newborns are able to process face stimuli and imitate observed facial expressions from birth. This early imitation behaviour is a plausible way to collect sensory-motor information about the configuration of observed facial muscles. If the ability to recognise people is acquired by encountering new faces, how do humans acquire this skill? Are there interactions between face recognition and the processing of facial motor information? If so, how might these mechanisms interact?
I provide answers to these research questions by looking at theories of embodied cognition. Embodied cognition research suggests that cognition extends beyond the brain to include body parts. I argue that mechanisms interacting with physical or mental aspects of the body provide sensory-motor information about the observed facial stimuli. This motor information, in turn, is sufficient for the acquisition of face identity recognition capabilities. I validate this thesis by providing mathematical models and computational simulations describing face perception and cognition. Furthermore, I show that altering the motor representations of facial configurations leads to significant deficits in face processing capabilities.
Vitale, J 2016, 'Towards Embodied Face Processing Theories: a Computational View'.
Vitale, J 2014, 'Attention for the Development of Empathic Bonds: From Facial Expressions to Interoception'.
Vitale, J 2012, 'Virtual social interactions in an affective driven environment'.
The aim of this work is to investigate a system able to detect facial expressions and to use them in a model for affect recognition in order to further investigate models for social interactions mediated by social signals.
Vitale, J 2010, 'Analisi e implementazione di un sistema neuro-fuzzy in architettura domotica' [Analysis and implementation of a neuro-fuzzy system in a home automation architecture].
Previous industry research partners included: Commonwealth Bank of Australia Innovation Lab, Stockland Property Group, Air New Zealand, Jobs for NSW and South Western Sydney Local Health District.