Ojha, S, Williams, MA & Johnston, B 2018, 'The Essence of Ethical Reasoning in Robot-Emotion Processing', International Journal of Social Robotics, vol. 10, no. 2, pp. 211-223.
As social robots become increasingly intelligent and autonomous, it is extremely important to ensure that they act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also be able to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability in the emotions expressed. We argue that believability of a robot's emotions, although crucially necessary, is not sufficient to elicit socially acceptable emotions. We therefore stress the need for a higher level of cognition in the emotion-processing mechanism, one that empowers a social robot to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit it. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot reach the most socially acceptable emotional state when more than one emotion is elicited by an event. Experimental results show that ethical reasoning in EEGS helps generate emotions that are both believable and socially acceptable.
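The selection step described in this abstract (choosing the most socially acceptable emotion when several are elicited) can be illustrated with a minimal sketch. The scores, weights and the multiplicative scoring rule below are invented for illustration; EEGS's actual mathematics is given in the paper itself.

```python
# Hedged sketch: pick the candidate emotion with the best combination of
# elicited intensity and social acceptability. The scoring rule (a simple
# product) is an illustrative assumption, not the EEGS formulation.

def select_emotion(candidates):
    """candidates: mapping of emotion name -> (intensity, acceptability)."""
    return max(candidates, key=lambda e: candidates[e][0] * candidates[e][1])

# Toy example: an offensive event elicits several candidate emotions.
elicited = {
    "anger": (0.9, 0.2),           # intense but socially costly
    "disappointment": (0.6, 0.8),  # moderate and broadly acceptable
    "neutral": (0.3, 1.0),         # fully acceptable but weakly elicited
}
print(select_emotion(elicited))  # disappointment (0.48 beats 0.18 and 0.30)
```

The point of the sketch is only that believability (intensity) and social acceptability are weighed jointly, rather than expressing the most strongly elicited emotion outright.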
Vitale, J, Williams, M-A, Johnston, B & Boccignone, G 2014, 'Affective facial expression processing via simulation: A probabilistic model', Biologically Inspired Cognitive Architectures, vol. 10, pp. 30-41.
Understanding the mental state of other people is an important skill for intelligent agents and robots operating within social environments. However, the mental processes involved in 'mind-reading' are complex. One explanation of these processes is Simulation Theory, which is supported by a large body of neuropsychological research. Yet determining the best computational model or theory to use in simulation-style emotion detection is far from settled. In this work, we use Simulation Theory and neuroscience findings on Mirror-Neuron Systems as the basis for a novel computational model for handling affective facial expressions. The model is based on a probabilistic mapping of observations from multiple identities onto a single fixed identity ('internal transcoding of external stimuli'), and then onto a latent space ('phenomenological response'). Together with the proposed architecture, we present some promising preliminary results.
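The two-stage mapping named in the abstract can be sketched schematically. Everything below (dimensionalities, linear maps, random parameters) is an illustrative assumption, not the published probabilistic model: an observed expression from any identity is first transcoded onto a fixed internal identity, then projected onto a latent affective space.

```python
import numpy as np

# Illustrative sketch of the two-stage mapping: external stimulus ->
# internal fixed identity ('internal transcoding of external stimuli') ->
# latent affective space ('phenomenological response'). Toy parameters only.

rng = np.random.default_rng(0)

N_FEATURES = 6  # observed facial features, e.g. action-unit activations (toy)
N_LATENT = 2    # latent affective dimensions, e.g. valence and arousal (toy)

# Linear stand-ins for the two probabilistic mappings.
W_transcode = rng.normal(size=(N_FEATURES, N_FEATURES))
W_latent = rng.normal(size=(N_LATENT, N_FEATURES))

def phenomenological_response(observation: np.ndarray) -> np.ndarray:
    """Map an observed expression onto the latent affective space."""
    internal = W_transcode @ observation  # stage 1: internal transcoding
    return W_latent @ internal            # stage 2: latent response

obs = rng.normal(size=N_FEATURES)  # a toy observed facial expression
print(phenomenological_response(obs).shape)  # (2,)
```

The design point the sketch preserves is that identity variation is factored out before affect is inferred, so expressions from many identities share one internal representation.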
Gardenfors, P, Williams, M-A, Johnston, B, Billingsley, R, Vitale, J, Peppas, P & Clark, J 2018, 'Event boards as tools for holistic AI', International Workshop on Artificial Intelligence and Cognition 2018, Palermo, Italy.
Gudi, SLKC, Ojha, S, Sidra, Johnston, B & Williams, MA 2017, 'A proactive robot tutor based on emotional intelligence', Advances in Intelligent Systems and Computing, International Conference on Robot Intelligence Technology and Applications, Springer, Korea, pp. 113-120.
In recent years, social robots have come to play a vital role in many areas: acting as companions, assisting with everyday tasks, health care, interaction, teaching, and more. In the case of a robot tutor, however, the robot's actions are limited. It may not fully understand the emotions of the student, and may continue giving a lecture even though the user is bored or has walked away from the robot. This situation makes users feel that a robot cannot supersede a human being because it is not in a position to understand emotions. To overcome this issue, in this paper we present an Emotional Classification System (ECS) in which the robot adapts to the mood of the user and behaves accordingly by becoming proactive. It works on the basis of the emotion tracked by the robot using its emotional intelligence. A scenario of a robot acting as a sign-language tutor for people with speech and hearing impairments is considered to validate our model. Real-time implementation and analysis are further discussed using the Pepper robot as a platform.
Ojha, S, Gudi, SLKC, Vitale, J, Williams, MA & Johnston, B 2019, 'I remember what you did: A behavioural guide-robot', Advances in Intelligent Systems and Computing, pp. 273-282.
Robots are coming closer to human society following the birth of the emerging field of Social Robotics, the branch of robotics concerned with the design and development of robots that can be employed in human society for the welfare of mankind. The applications of social robots range from household domains, such as elderly and child care, to educational domains such as personal psychological training and tutoring. If such robots are intended to work closely with young children, it is extremely important to ensure that they teach not only facts but also important social lessons, such as knowing what is right and what is wrong, because we do not want to produce a generation of children who know only facts but not morality. In this paper, we present a mechanism used in our computational model for social robots (EEGS), in which the emotions and behavioural responses of the robot depend on how a person has previously treated it. For example, if someone has previously treated the robot well, it will respond accordingly, while if someone has previously mistreated it, the robot will make the person realise the issue. A robot with such a quality can be very useful in teaching good manners to future generations of children.
Herse, S, Vitale, J, Ebrahimian, D, Tonkin, M, Ojha, S, Sidra, S, Johnston, B, Phillips, S, Gudi, SLKC, Clark, J, Judge, W & Williams, MA 2018, 'Bon Appetit! Robot Persuasion for Food Recommendation', ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 125-126.
The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, a robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when a specific recommendation sentence is used. These preliminary results suggest that human-like features of an agent may serve to boost persuasion in recommendation systems. However, the extent of the effect is determined by the nature of the given recommendation.
Herse, S, Vitale, J, Tonkin, M, Ebrahimian, D, Ojha, S, Johnston, B, Judge, W & Williams, MA 2018, 'Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System', RO-MAN 2018: The 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, China, pp. 7-14.
When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area used simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a non-optimal choice to the user. While no effect of embodiment was found for trust, the inclusion of preference elicitation features significantly increased user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction, and call for further investigation into the development and maintenance of trust between robot and user.
Krishna Chand Gudi, SL, Ojha, S, Johnston, B, Clark, J & Williams, MA 2018, 'Fog robotics for efficient, fluent and robust human-robot interaction', NCA 2018 - 2018 IEEE 17th International Symposium on Network Computing and Applications, IEEE, Cambridge, MA, USA.
Active communication between robots and humans is essential for effective human-robot interaction. To this end, Cloud Robotics (CR) was introduced to enhance robots' capabilities, enabling them to perform extensive computations in the cloud and to share the outcomes: maps, images, processing power, data, activities, and other robot resources. But due to the colossal growth of data and traffic, CR suffers from serious latency issues. It is therefore unlikely to scale to a large number of robots, particularly in human-robot interaction scenarios, where responsiveness is paramount. Furthermore, security issues such as privacy breaches and ransomware attacks can increase. To address these problems, in this paper we envision the next generation of social robotic architectures, based on Fog Robotics (FR), which inherit the strengths of Fog Computing to augment future social robotic systems. These architectures can increase the dexterity of robots by moving data closer to the robot, and can make human-robot interaction more responsive by resolving the problems of CR. Experimental results are further discussed for an FR scenario, with latency as the primary factor of comparison against CR models.
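The latency argument in this abstract reduces to a simple offload decision, sketched below. The round-trip numbers and the deadline-based rule are illustrative assumptions, not measurements or logic from the paper.

```python
# Toy sketch of the Fog Robotics latency argument: a fog node near the robot
# answers faster than a distant cloud endpoint, so latency-sensitive
# interaction tasks should be served from the fog. Numbers are invented.

CLOUD_RTT_MS = 120.0  # assumed round trip to a remote cloud data centre
FOG_RTT_MS = 15.0     # assumed round trip to a nearby fog node

def choose_endpoint(deadline_ms: float) -> str:
    """Pick the nearest remote endpoint that can meet an interaction deadline."""
    if FOG_RTT_MS <= deadline_ms:
        return "fog"
    if CLOUD_RTT_MS <= deadline_ms:
        return "cloud"
    return "on-robot"  # neither remote option is responsive enough

print(choose_endpoint(50.0))  # fog
print(choose_endpoint(5.0))   # on-robot
```

Under these assumptions a conversational turn with, say, a 50 ms budget can be served from the fog but not from the cloud, which is the responsiveness gap the paper's architectures target.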
Vitale, J, Johnston, B & Williams, MA 2017, 'Facial Motor Information is Sufficient for Identity Recognition', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 3447-3452.
The face is a central communication channel, providing information about the identities of our interaction partners and the potential mental states expressed by their motor configurations. Although it is well known that infants' ability to recognise people follows a developmental process, it remains an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing interact during this development. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development would be sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space, able to assist facial identity and expression recognition. We validate our hypothesis through computational simulations, and we suggest potential implications of this understanding with respect to the available findings in face processing.
Leong, TW & Johnston, B 2016, 'Co-design and robots: A case study of a robot dog for aging people', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer, Kansas City, Missouri, United States, pp. 702-711.
The day-to-day experiences of aging citizens differ significantly from those of young, technologically savvy engineers. Yet well-meaning engineers continue to design technologies for aging citizens informed by skewed stereotypes of aging, without deep engagement with these users. This paper describes a co-design project, based on the principles of Participatory Design, that sought to give aging people the capacity to co-design technologies that suit their needs. The project combined the design intuitions of participants and designers, on an equal footing, to produce a companion robot in the form of a networked robotic dog. Besides evaluating a productive approach that empowers aging people to co-design and evaluate technologies for themselves, this paper presents a viable solution that is playful and meaningful to these elderly people, capable of enhancing their independence, social agency and well-being.
Romat, H, Williams, M-A, Wang, X, Johnston, B & Bard, H 2016, 'Natural Human-Robot Interaction Using Social Cues', Proceedings of the 11th ACM/IEEE International Conference on Human Robot Interaction (HRI), IEEE, Christchurch, New Zealand, pp. 503-504.
This paper investigates how humans understand and control human-robot collaborative action, and how to build natural interactions during such collaboration. We use a 'pick and place' experiment to study collaborative activities between a human and a robot. The results show that even when human participants had a good understanding of the robot's maximum reach, they consistently took a surprisingly long time to assist the robot when a target object was out of its reach. We implemented a number of social cues in the experiment and analysed their effects in order to identify the role they could play in improving the fluency of human-robot collaboration. The experimental results showed that when the robot uses head movements, two hands or a gesture to indicate non-reachability, people react in a more natural way to assist the robot.
Vitale, J, Williams, M-A & Johnston, B 2016, 'The face-space duality hypothesis: a computational model', Proceedings of the 38th Annual Conference of the Cognitive Science Society, Cognitive Science Society, Philadelphia, pp. 514-519.
Novianto, R, Williams, M-A, Gärdenfors, P & Wightwick, G 2014, 'Classical conditioning in social robots', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 279-289.
Classical conditioning is important in humans for learning and predicting events in terms of associations between stimuli, and for producing responses based on these associations. Social robots with a human-like classical conditioning skill will be better placed to interact with people naturally, socially and effectively. In this paper, we present a novel classical conditioning mechanism and describe its implementation in the ASMO cognitive architecture. The capability of this mechanism is demonstrated in the Smokey robot companion experiment. Results show that Smokey can associate stimuli and predict events in its surroundings. ASMO's classical conditioning mechanism can be used in social robots to adapt to the environment and to improve the robots' performance.
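As a hedged illustration of the general idea (not ASMO's mechanism, which the paper defines), the textbook Rescorla-Wagner rule shows how an association between a conditioned and an unconditioned stimulus strengthens over repeated pairings:

```python
# Illustrative Rescorla-Wagner associative update, a standard model of
# classical conditioning. This is not the ASMO mechanism, just the
# stimulus-association idea the abstract refers to.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return the associative strength after each of `trials` CS-US pairings.

    alpha: learning rate; lam: maximum (asymptotic) associative strength.
    """
    v = 0.0  # current associative strength between CS and US
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)  # prediction error drives learning
        history.append(v)
    return history

strengths = rescorla_wagner(10)
# Strength rises quickly at first, then levels off towards the asymptote lam.
print(round(strengths[0], 3), round(strengths[-1], 3))
```

The negatively accelerated curve (large early gains, diminishing later ones) is the signature behaviour a conditioning mechanism lets a robot exploit when predicting events from associated stimuli.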
Vitale, J, Williams, M-A & Johnston, B 2014, 'Socially impaired robots: Human social disorders and robots' socio-emotional intelligence', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 350-359.
Social robots need intelligence in order to safely coexist and interact with humans. Robots that lack the ability to understand others and to empathise might pose a societal risk, and may lead to a society of socially impaired robots. In this work we survey three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to better understand the future capability requirements of social robots. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.
Wang, X, Williams, M-A, Gardenfors, P, Vitale, J, Abidi, S, Johnston, B, Kuipers, B & Huang, A 2014, 'Directing human attention with pointing', RO-MAN 2014: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, IEEE, Edinburgh, Scotland, pp. 174-179.
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
Abidi, SS, Williams, M & Johnston, BG 2013, 'Human pointing as a robot directive', ACM/IEEE International Conference on Human-Robot Interaction, IEEE, Tokyo, Japan, pp. 67-68.
People are accustomed to directing other people's attention using pointing gestures. People enact and interpret pointing commands often and effortlessly. If robots understand human intentions (e.g. as encoded in pointing-gestures), they can reach higher
Felix Navarro, KM, Gay, VC, Golliard, L, Johnston, BG, Leijdekkers, P, Vaughan, EP, Wang, T & Williams, M 2013, 'SocialCycle: What Can a Mobile App Do To Encourage Cycling?', 38th IEEE Conference on Local Computer Networks (LCN 2013) and Workshops, IEEE Computer Society, Sydney, Australia, pp. 24-30.
Traffic congestion presents significant environmental, social and economic costs. Encouraging people to cycle and to use other forms of alternative transportation is one important aspect of addressing these problems. However, many city councils face significant difficulties in educating citizens and encouraging them to form new habits around these alternative forms of transport. Mobile devices present a great opportunity to effect such positive behaviour change. In this paper we discuss the results of a survey aimed at understanding how mobile devices can be used to encourage cycling and/or improve the cycling experience. We use the results of the survey to design and develop a mobile app called SocialCycle, whose purpose is to encourage users to start cycling and to increase the number of trips that existing riders take by bicycle.
Novianto, R, Johnston, BG & Williams, M 2013, 'Habituation and sensitisation learning in ASMO cognitive architecture', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer International Publishing, Bristol, United Kingdom, pp. 249-259.
As social robots are designed to interact with humans in unstructured environments, they need to be aware of their surroundings, focus on significant events and ignore insignificant events. Humans demonstrate a good example of such adaptation, habituating to insignificant events and sensitising to significant ones. Inspired by human habituation and sensitisation, we develop novel habituation and sensitisation mechanisms and include them in the ASMO cognitive architecture. The capability of these mechanisms is demonstrated in the 'Smokey' robot companion experiment. Results show that Smokey can be aware of its surroundings, focus on significant events and ignore insignificant events. ASMO's habituation and sensitisation mechanisms can be used in robots to adapt to the environment. They can also be used to modify the interaction of components in a cognitive architecture in order to improve an agent's or robot's performance.
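The habituation/sensitisation idea can be sketched as a simple attention weight that decays under repeated, unchanging stimulation and recovers when the stimulus changes markedly. The rates, threshold and update rule below are illustrative assumptions, not ASMO's actual mechanisms.

```python
# Illustrative habituation/sensitisation sketch (not the ASMO implementation):
# repeated identical stimuli lose attention (habituation); a significant
# change in the stimulus restores it (sensitisation).

def update_attention(weight, stimulus, last_stimulus,
                     habituation_rate=0.8, sensitisation_boost=1.0,
                     change_threshold=0.5):
    if abs(stimulus - last_stimulus) > change_threshold:
        return min(1.0, weight + sensitisation_boost)  # novel: sensitise
    return weight * habituation_rate                   # repeated: habituate

w = 1.0
for _ in range(5):  # five identical exposures -> attention decays
    w = update_attention(w, stimulus=0.2, last_stimulus=0.2)
print(round(w, 3))  # 0.328

w = update_attention(w, stimulus=0.9, last_stimulus=0.2)
print(w)            # 1.0 -- a significant change restores attention
```

This is the behaviour the abstract describes for Smokey: background events fade from attention while genuinely new events reclaim it.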
Wang, W, Johnston, B & Williams, MA 2013, 'Recognition and representation of robot skills in real time: A theoretical analysis', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer, Bristol, UK, pp. 127-137.
Sharing reusable knowledge among robots has the potential to sustainably develop robot skills. The bottlenecks to sharing robot skills across a network are how to recognise and represent reusable robot skills in real time, and how to define reusable robot skills in a way that facilitates this recognition and representation challenge. In this paper, we first analyse the considerations for categorising reusable robot skills that manipulate objects, drawing on R.C. Schank's script representation of basic human motion, and define three types of reusable robot skills on the basis of this analysis. We then propose a method with the potential to identify robot skills in real time, and present a theoretical process of skill recognition during task performance. Finally, we characterise reusable robot skills based on the new definitions and explain how the newly proposed representation of robot skills is potentially advantageous over the current state of the art.
Williams, MA, Abidi, S, Gärdenfors, P, Wang, X, Kuipers, B & Johnston, B 2013, 'Interpreting robot pointing behavior', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer, Bristol, UK, pp. 148-159.
The ability to draw other agents' attention to objects and events is an important skill on the critical path to effective human-robot collaboration. People use the act of pointing to draw other people's attention to objects and events for a wide range of purposes. While there is significant work aimed at understanding people's pointing behavior, there is little work analyzing how people interpret robot pointing. Since robots have a wide range of physical bodies and cognitive architectures, the interpretation of pointing will be determined by a specific robot's morphology and behavior. Humanoid robots whose heads, torsos and arms resemble those of humans may be easier for people to interpret when they point; however, if such robots have different perceptual capabilities to people, misinterpretation may occur. In this paper we investigate how ordinary people interpret the pointing behavior of a leading state-of-the-art service robot that has been designed to work closely with people. We tested three hypotheses about how robot pointing is interpreted. The most surprising finding was that the direction and pitch of the robot's head was important in some conditions.
Agmon, N, Agrawal, V, Aha, DW, Aloimonos, Y, Buckley, D, Doshi, P, Geib, C, Grasso, F, Green, N, Johnston, B, Kaliski, B, Kiekintveld, C, Law, E, Lieberman, H, Mengshoel, OJ, Metzler, T, Modayil, J, Oard, DW, Onder, N, O'Sullivan, B, Pastra, K, Precup, D, Ramachandran, S, Reed, C, Sariel-Talay, S, Selker, T, Shastri, L, Singh, S, Smith, SF, Srivastava, S, Sukthankar, G, Uthus, DC & Williams, MA 2012, 'Reports of the AAAI 2011 conference workshops', AI Magazine, pp. 57-70.
The AAAI-11 workshop program was held Sunday and Monday, August 7-8, 2011, at the Hyatt Regency San Francisco in San Francisco, California, USA. The program included 15 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were Activity Context Representation: Techniques and Languages; Analyzing Microtext; Applied Adversarial Reasoning and Risk Modeling; Artificial Intelligence and Smarter Living: The Conquest of Complexity; Artificial Intelligence for Data Center Management and Cloud Computing; Automated Action Planning for Autonomous Mobile Robots; Computational Models of Natural Argument; Generalized Planning; Human Computation; Human-Robot Interaction in Elder Care; Interactive Decision Theory and Game Theory, 2010; Language-Action Tools for Cognitive Artificial Agents: Integrating Vision, Action, and Language; Lifelong Learning from Sensorimotor Experience; Plan, Activity, and Intent Recognition; and Scalable Integration of Analytics and Visualization. This article presents short summaries of those events.
Wang, W, Johnston, BG & Williams, M 2012, 'Social networking for robots to share knowledge, skills and know-how', International Conference on Social Robotics (ICSR), Springer, Chengdu, China, pp. 418-427.
A major bottleneck in robotics research and development is the difficulty and time required to develop and implement new skills for robots so as to realize task-independence. Despite work on task-model transfer among robots, little has so far been done on how to make robots task-independent. In this paper, we describe our work in progress towards the development of a robot social network, called Numbots, that draws on the principle of sharing information in human social networking. We demonstrate how Numbots has the potential to assist knowledge, know-how and skill transfer among robots to realize task-independence.
Johnston, B 2011, 'An interface for crowd-sourcing spatial models of commonsense', AAAI Spring Symposium - Technical Report, pp. 139-142.
Commonsense is a challenge not only for representation and reasoning but also for the large-scale knowledge engineering required to capture the breadth of our 'everyday' world. One approach to knowledge engineering is to 'outsource' the effort to the public through games that generate structured commonsense knowledge from user play. To date, such games have focused on symbolic and textual knowledge. However, an effective commonsense reasoning system will require spatial and physical reasoning capabilities. In this paper, I propose a tool for gathering commonsense information from ordinary people: a user-friendly 3D sculpting tool for modeling and annotating models of physical objects and spaces.
Johnston, BG 2011, 'The Collection of Physical Knowledge and Its Application in Intelligent Systems', Proceedings of the 4th International Conference, AGI 2011: Artificial General Intelligence - Lecture Notes in Computer Science, Conference on Artificial General Intelligence (AGI), Springer, Mountain View, USA, pp. 163-173.
Intelligence is a multidimensional problem, of which physical reasoning and physical knowledge are important dimensions. However, there are few resources of physical knowledge that can be used in data-driven approaches to Artificial Intelligence. Comirit Objects is a project intended to encourage the general public to contribute to research in Artificial Intelligence by building simple 3D models of everyday objects via an interactive website. This paper describes the simplified representation and web interface used by Comirit Objects, and a preliminary investigation into potential applications of the collected models.
Johnston, BG 2011, 'An Interface for Crowd-sourcing Spatial Models of Commonsense', Commonsense 2011: Symposium on Logical Formalizations of Commonsense Reasoning, AAAI Press, Stanford University, pp. 1-4.
Commonsense is a challenge not only for representation and reasoning but also for the large-scale knowledge engineering required to capture the breadth of our 'everyday' world. One approach to knowledge engineering is to 'outsource' the effort to the public through games that generate structured commonsense knowledge from user play. To date, such games have focused on symbolic and textual knowledge. However, an effective commonsense reasoning system will require spatial and physical reasoning capabilities. In this paper, I propose a tool for gathering commonsense information from ordinary people: a user-friendly 3D sculpting tool for modeling and annotating models of physical objects and spaces.
Johnston, BG 2010, 'The toy box problem (and a preliminary solution)', Artificial General Intelligence - Proceedings of the Third Conference on Artificial General Intelligence, AGI 2010, Atlantis Press, Lugano, Switzerland, pp. 43-48.
The evaluation of incremental progress towards 'Strong AI' or 'AGI' remains a challenging open problem. In this paper, we draw inspiration from benchmarks used in artificial commonsense reasoning to propose a new benchmark: the Toy Box Problem.
Novianto, R, Johnston, BG & Williams, M 2010, 'Attention in the ASMO cognitive architecture', Biologically Inspired Cognitive Architectures 2010 - Frontiers in Artificial Intelligence and Applications vol. 221: Proceedings of the First Annual Meeting of the BICA Society, IOS Press, Washington, USA, pp. 98-105.
The ASMO Cognitive Architecture has been developed to support key capabilities: attention, awareness and self-modification. In this paper we describe the underlying attention model in ASMO. The ASMO Cognitive Architecture is inspired by a biological attention theory, and offers a mechanism for directing and creating behaviours, beliefs, anticipation, discovery, expectations and changes in a complex system. Thus, our attention-based architecture provides an elegant solution to the problem of behaviour development and behaviour selection, particularly when the behaviours are mutually incompatible.
Williams, M, Gardenfors, P, Johnston, BG & Wightwick, GR 2010, 'Anticipation as a Strategy: A Design Paradigm for Robotics', Lecture Notes in Artificial Intelligence 6291 - Knowledge Science, Engineering and Management, Springer-Verlag Berlin Heidelberg, Belfast, Northern Ireland, pp. 341-353.
Anticipation plays a crucial role during any action, particularly for agents operating in open, complex and dynamic environments. In this paper we consider the role of anticipation as a strategy from a design perspective. Anticipation is a crucial skill in sporting games like soccer, tennis and cricket. We explore the role of anticipation in robot soccer matches in the context of the RoboCup vision of developing a robot soccer team capable of defeating the FIFA World Champions by 2050. Anticipation in soccer can be planned or emergent, but in either case it can be designed. Two key obstacles stand in the way of developing more anticipatory robot systems: an impoverished understanding of the 'anticipation' process/capability, and a lack of know-how in the design of anticipatory systems. Several teams at RoboCup have developed remarkable preemptive behaviors; the CMU Dive and UTS Dodge are two compelling examples. In this paper we take steps towards designing robots that can adopt anticipatory behaviors by proposing an innovative model of anticipation as a strategy that specifies the key characteristics of the anticipatory behaviors to be developed. The model can drive the design of autonomous systems by providing a means to explore and to represent anticipation requirements. Our approach is to analyze anticipation as a strategy and then to use the insights obtained to design a reference model that can be used to specify a set of anticipatory requirements for guiding an autonomous robot soccer system.
Johnston, BG & Williams, M 2009, 'A Formal Framework for the Symbol Grounding Problem', Proceedings of the Second Conference on Artificial General Intelligence, Conference on Artificial General Intelligence, Atlantis Press, Washington, USA, pp. 61-66.View/Download from: UTS OPUS or Publisher's site
A great deal of contention can be found within the published literature on grounding and the symbol grounding problem, much of it motivated by appeals to intuition and unfalsifiable claims. We seek to define a formal framework of representation grounding that is independent of any particular opinion, but that promotes classification and comparison. To this end, we identify a set of fundamental concepts and then formalize a hierarchy of six representational system classes that correspond to different perspectives on the representational requirements for intelligence, describing a spectrum of systems built on representations that range from symbolic through iconic to distributed and unconstrained. This framework offers utility not only in enriching our understanding of symbol grounding and the literature, but also in exposing crucial assumptions to be explored by the research community.
Johnston, BG & Williams, M 2009, 'Autonomous Learning of Commonsense Simulations', International Symposium on Logical Formalizations of Commonsense Reasoning, Symposium on Logical Formalizations of Commonsense Reasoning, UTSePress, Toronto, Canada, pp. 73-78.View/Download from: UTS OPUS
Parameter-driven simulations are an effective and efficient method for reasoning about a wide range of commonsense scenarios that can complement the use of logical formalizations. The advantage of simulation is its simplified knowledge elicitation process: rather than building complex logical formulae, simulations are constructed by simply selecting numerical values and graphical structures. In this paper, we propose the application of machine learning techniques to allow an embodied autonomous agent to automatically construct appropriate simulations from its real-world experience. The automation of learning can dramatically reduce the cost of knowledge elicitation, and therefore result in models of commonsense with breadth and depth not possible with traditional engineering of logical formalizations.
Johnston, BG & Williams, M 2009, 'Conservative and Reward-driven Behavior Selection in a Commonsense Reasoning Framework', 2009 AAAI Symposium: Multirepresentational Architectures for Human-Level Intelligence, National Conference of the American Association for Artificial Intelligence, AAAI Press, Washington, USA, pp. 14-19.View/Download from: UTS OPUS
Comirit is a framework for commonsense reasoning that combines simulation, logical deduction and passive machine learning. While a passive, observation-driven approach to learning is safe and highly conservative, it is limited to interaction only with those objects that it has previously observed. In this paper we describe a preliminary exploration of methods for extending Comirit to allow safe action selection in uncertain situations, and to allow reward-maximizing selection of behaviors.
Johnston, BG & Williams, M 2008, 'Comirit: Commonsense Reasoning by Integrating Simulation and Logic', Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Conference on Artificial General Intelligence, IOS Press, Inst of Technology, University of Memphis, TN, USA, pp. 200-211.View/Download from: UTS OPUS
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behaviour with precision and performance that is difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics and so do not find natural application in the deductive machinery of commonsense or general purpose reasoning systems. This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of 'molecular' models. This combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
Johnston, BG, Yang, F, Mendoza, R, Chen, X & Williams, M 2008, 'Ontology Based Object Categorization for Robots', Lecture Notes in Artificial Intelligence Vol 5345: Practical Aspects of Knowledge Management - Proceedings of the 7th International Conference, PAKM 2008, International Conference on Practical Aspects of Knowledge Management, Springer, Yokohama, Japan, pp. 219-231.View/Download from: UTS OPUS or Publisher's site
Meaningfully managing the relationship between representations and the entities they represent remains a challenge in robotics known as grounding. In this paper we apply Semantic Web technologies to provide a powerful extension to existing proposals for grounding robotic systems, and have consequently developed OBOC, the first robotic software system with an ontology-based vision sub-system.
Johnston, BG & Williams, M 2007, 'A Generic Framework for Approximate Simulation in Commonsense Reasoning Systems', International Symposium on Logical Formalizations of Commonsense Reasoning, Symposium on Logical Formalizations of Commonsense Reasoning, AAAI Press, Stanford University, USA, pp. 71-76.View/Download from: UTS OPUS
This paper introduces the Slick architecture and outlines how it may be applied to solve the well-known Egg-Cracking Problem. In contrast to other solutions to this problem that are based on formal logics, the Slick architecture is based on general-purpose, low-resolution quantitative simulations. On this benchmark problem, the Slick architecture offers greater elaboration tolerance and allows for faster elicitation of more general axioms. "This paper was selected by a process of anonymous peer reviewing for presentation at COMMONSENSE 2007" - first page of http://www.ucl.ac.uk/commonsense07/papers/johnston-and-williams.pdf
Mendoza, R, Johnston, BG, Yang, F, Huang, Z, Chen, X & Williams, M 2007, 'OBOC: Ontology Based Object Categorisation for Robots', The Fourth International Conference on Computational Intelligence, Robotics and Autonomous Systems, International Conference on Computational Intelligence, Robotics and Autonomous Systems, Massey University Press, Palmerston North, New Zealand, pp. 178-183.View/Download from: UTS OPUS
Meaningfully managing the relationship between representations and the entities they represent remains a challenge in robotics known as grounding. Useful insights can be found by approaching robotic systems development specifically with the grounding and symbol grounding problem in mind. In particular, Semantic Web technologies turn out to be not merely applicable to web-based software agents, but can also provide a powerful extension to existing proposals for grounded robotic systems development. Given the interoperability and openness of the Semantic Web, such technologies can increase the ability for a robot to introspect, communicate and be inspected - benefits that ultimately lead to more grounded systems with open-ended intelligent behaviour.
Cloud Robotics (CR) is an emerging and successful approach to robotics. The number of robots and other IoT devices may increase drastically in the future, which could require enormous bandwidth and raise security concerns. If robots in CR are not secured, they can even be turned into surveillance bots by hackers. Moreover, if the internet connection is lost due to network hitches, then at that crucial moment the robot may not be able to complete its given task; for example, a robot assisting a person could stop working unexpectedly or act on instructions from a hacker. In order to address such problems, in this paper we propose a new approach to robotics - Fog Robotics (FR) - so that a network of robots can be used more securely and efficiently as compared to CR.