Ojha, S, Williams, MA & Johnston, B 2018, 'The Essence of Ethical Reasoning in Robot-Emotion Processing', International Journal of Social Robotics, vol. 10, no. 2, pp. 211-223.
© 2017, Springer Science+Business Media B.V., part of Springer Nature. As social robots become more intelligent and autonomous in operation, it is extremely important to ensure that they act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also be able to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability in the emotions expressed. We argue that believability of a robot's emotions, although crucially necessary, is not sufficient to elicit socially acceptable emotions. We therefore stress the need for a higher level of cognition in the emotion-processing mechanism, one that empowers social robots to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit the experience. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot reach the most socially acceptable emotional state when an event elicits more than one emotion. Experimental results show that ethical reasoning in EEGS helps generate emotions that are both believable and socially acceptable.
Gudi, SLKC, Ojha, S, Sidra, Johnston, B & Williams, MA 2017, 'A proactive robot tutor based on emotional intelligence', Advances in Intelligent Systems and Computing, International Conference on Robot Intelligence Technology and Applications, Springer, Korea, pp. 113-120.
© Springer International Publishing AG, part of Springer Nature 2019. In recent years, social robots have come to play a vital role in many areas, acting as companions and assisting in everyday tasks, health care, interaction, teaching, and more. In the case of a robot tutor, however, the robot's actions are limited. It may not fully understand the student's emotions, and it may continue lecturing even though the user is bored or has walked away. This makes users feel that a robot cannot supersede a human being because it is not in a position to understand emotions. To overcome this issue, we present an Emotional Classification System (ECS) in which the robot adapts to the user's mood and behaves accordingly by becoming proactive, based on the emotion it tracks using its emotional intelligence. To validate our model, we consider the scenario of a robot acting as a sign-language tutor to assist people with speech and hearing impairments. We further discuss a real-time implementation and analysis using the Pepper robot as a platform.
Ojha, S, Gudi, SLKC, Vitale, J, Williams, MA & Johnston, B 2019, 'I remember what you did: A behavioural guide-robot', Advances in Intelligent Systems and Computing, pp. 273-282.
© Springer International Publishing AG, part of Springer Nature 2019. Robots are coming closer to human society with the emergence of the field of Social Robotics, the branch of robotics concerned with the design and development of robots that can be employed in human society for the welfare of mankind. Applications of social robots range from household domains, such as elderly and child care, to educational domains such as personal psychological training and tutoring. If such robots are intended to work closely with young children, it is extremely important to ensure that they teach not only facts but also important social lessons, such as knowing what is right and what is wrong; we do not want to produce a generation of children who know only facts but not morality. In this paper, we present a mechanism used in our computational model for social robots (EEGS) in which the robot's emotions and behavioural response depend on how a person has previously treated it. If a person has previously treated the robot well, it will respond accordingly, whereas if a person has previously mistreated it, the robot will make the person aware of the issue. A robot with this quality can be very useful in teaching good manners to future generations of children.
Herse, S, Vitale, J, Ebrahimian, D, Tonkin, M, Ojha, S, Sidra, S, Johnston, B, Phillips, S, Gudi, SLKC, Clark, J, Judge, W & Williams, MA 2018, 'Bon Appetit! Robot Persuasion for Food Recommendation', ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 125-126.
© 2018 Authors. The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, a robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when a specific recommendation sentence is used. These preliminary results suggest that human-like features of an agent may boost persuasion in recommendation systems; however, the extent of the effect is determined by the nature of the recommendation given.
Herse, S, Vitale, J, Tonkin, M, Ebrahimian, D, Ojha, S, Johnston, B, Judge, W & Williams, MA 2018, 'Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System', RO-MAN 2018: The 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, China, pp. 7-14.
© 2018 IEEE. When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a nonoptimal choice to the user. While no effect of embodiment is found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.
Krishna Chand Gudi, SL, Ojha, S, Johnston, B, Clark, J & Williams, MA 2018, 'Fog robotics for efficient, fluent and robust human-robot interaction', NCA 2018 - 2018 IEEE 17th International Symposium on Network Computing and Applications, IEEE, Cambridge, MA, USA.
© 2018 IEEE. Active communication between robots and humans is essential for effective human-robot interaction. To accomplish this objective, Cloud Robotics (CR) was introduced to enhance robot capabilities: it enables robots to perform extensive computations in the cloud and to share the outcomes, including maps, images, processing power, data, activities, and other robot resources. However, due to the colossal growth of data and traffic, CR suffers from serious latency issues. It is therefore unlikely to scale to a large number of robots, particularly in human-robot interaction scenarios where responsiveness is paramount. Furthermore, security issues such as privacy breaches and ransomware attacks can increase. To address these problems, in this paper we envision the next generation of social robotic architectures, based on Fog Robotics (FR), that inherit the strengths of Fog Computing to augment future social robotic systems. These new architectures can increase the dexterity of robots by moving data closer to the robot, and can make human-robot interaction more responsive by resolving the problems of CR. We further discuss experimental results for an FR scenario, with latency as the primary factor of comparison against CR models.
Ojha, S, Vitale, J, Raza, SA, Billingsley, R & Williams, MA 2018, 'Implementing the Dynamic Role of Mood and Personality in Emotion Processing of Cognitive Agents', Sixth Annual Conference on Advances in Cognitive Systems, Stanford, California.
Vitale, J, Tonkin, M, Herse, S, Ojha, S, Clark, J, Williams, M, Wang, X & Judge, W 2018, 'Be More Transparent and Users Will Like You: A Robot Privacy and User Experience Design Experiment', Proceedings of 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, IL, USA, pp. 379-387.
Ojha, S & Williams, M-A 2017, 'Emotional Appraisal: A Computational Perspective', Website proceedings of the Fifth Annual Conference on Advances in Cognitive Systems, ACS, Troy, USA, pp. 1-15.
Research on computational modelling of emotions has received significant attention in the last few decades. Several computational models of emotions have been proposed, providing unprecedented insight into the implications of the emotion theories emerging from cognitive psychology. Yet existing computational models of emotion have distinct limitations, namely: (i) low replicability (it is difficult to implement a model from the published description alone); (ii) domain dependence (a model is applicable only in one or more predefined scenarios or domains); and (iii) low scalability and integrability (it is difficult to use the system in larger or different domains and to integrate the model into a wide range of other intelligent systems). In this paper, we propose a completely domain-independent mathematical representation for computational modelling of emotion that provides better replicability and integrability. The implementation of our model is inspired by appraisal theory, an emotion theory which assumes that emotions result from the cognitive evaluation of a situation.
Ojha, S, Vitale, J & Williams, M-A 2017, 'A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 2833-2838.
According to cognitive appraisal theory, emotion in an individual results from how the individual evaluates a situation or event. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal, since it requires a knowledge base that is difficult to establish and manage, especially in systems designed for highly complex scenarios such as social robots. In addition, according to appraisal theory, an event might elicit more than one emotion in an individual at a time. Determining which emotional state should be attributed in relation to a specific event is therefore another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, it can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating extensibility and integrability of the emotional system; and (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. This contribution is therefore particularly significant in providing a simpler and more generalisable explanation of cognitive appraisal theory and in promoting integration between theories of emotion and ethics studies, which is currently often neglected by the available literature.
Tonkin, M, Vitale, J, Ojha, S, Clark, J, Pfeiffer, S, Judge, W, Wang, X & Williams, M 2017, 'Embodiment, Privacy and Social Robots: May I Remember You?', Social Robotics: 9th International Conference, ICSR 2017, Springer International Publishing, Tsukuba, Japan, pp. 506-515.
As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. Social interaction between human and robot may lead the user to anthropomorphise the robot, and this may prompt the user to disclose private or sensitive information. To understand the possible impacts, we conducted an exploratory study with a novel privacy measure to examine changes in users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.
Tonkin, M, Vitale, J, Ojha, S, Williams, M-A, Fuller, P, Judge, W & Wang, X 2017, 'Would You Like to Sample? Robot Engagement in a Shopping Centre', 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, Lisbon, Portugal, pp. 42-49.
Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited share of the research literature explores robot applications in such spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. Therefore, in this paper we conduct an experiment to test a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot engaged customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. We investigate this phenomenon and offer possible explanations ready to be explored and tested in subsequent research.
Ojha, S & Williams, MA 2016, 'Ethically-Guided Emotional Responses for Social Robots: Should I Be Angry?', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer-Verlag, Kansas City, USA.
Emotions play a critical role in human-robot interaction. Human-robot interaction in social contexts will be more effective if robots can understand human emotions and express (display) emotions accordingly as a means to communicate their own internal state. In this paper we present a novel computational model of robot emotion generation based on appraisal theory and guided by ethical judgement. Despite recent advances in developing emotion for robots, it is difficult to say whether a particular robot is exhibiting appropriate emotions, or even whether it can empathise with humans by exhibiting emotions similar to those a human would feel in the same situation. A key question is: to what extent should a robot direct anger toward a young child or an elderly person for an act toward which it would show anger to an ordinary adult in order to signal danger or stupidity? Recognising the need for an ethically guided approach to emotion expression in social robots as they interact with people, we present a novel Ethical Emotion Generation System (EEGS) for the expression of the most acceptable emotions in social robots.
Cloud Robotics (CR) is an emerging and successful approach to robotics. The number of robots and other IoT devices may increase drastically in the future, which might demand enormous bandwidth and raise security concerns. If robots in CR are not secured, they can even be turned into surveillance bots by hackers. Moreover, if the internet connection is lost due to network glitches, the robot may be unavailable to complete its given task at a crucial moment; for example, a robot assisting a person could stop working unexpectedly or act on instructions from a hacker. To address such problems, in this paper we propose a new approach to robotics, Fog Robotics (FR), so that a network of robots can be used more securely and efficiently compared to CR.