Distinguished Professor Mary-Anne Williams is Director of The Magic Lab in the Centre for Artificial Intelligence. She is currently working with the United Nations on the impact of AI on Human Rights, Sustainable Development, and Peace and Security.
Mary-Anne was named on Robohub's inaugural Top 25 Women in Robotics list in 2013, and was listed 16th on the prestigious 365 International Women in STEM in 2017.
Mary-Anne is a leading authority on AI. She is a senior leader in the Australian and international research community and serves on the ACM Eugene L. Lawler Award Committee for Humanitarian Contributions within Computer Science and Informatics. Mary-Anne took the first robot soccer team to China, and has given public lectures in China, including for China Science Week.
Mary-Anne is a Fellow of the Australian Academy of Technological Sciences and Engineering (ATSE), a Fellow of the Australian Computer Society (ACS), and a Fellow at the Centre for Legal Informatics (CodeX) at Stanford University.
Mary-Anne works with the Stanford d.school, and previously the UTS Hatchery, on entrepreneurship programs. She continues to work closely with student startup founders at UTS and CodeX. In 2017 she co-founded the AI Policy Hub with the Directors of CodeX at Stanford University, and is co-authoring a major United Nations report on the impact of AI, due for release later this year.
Mary-Anne has given numerous keynote addresses at scientific conferences and government and business events, including the United Nations, SRI International, Stanford Law School, the Graduate School of Business at Stanford University, X-Media, Stanford University, the Australian Department of Foreign Affairs and Trade, the World Science Festival, the Sydney Science Festival, and the Strategic Management Society Conference.
- Chaired the Australian Research Council (ARC) Excellence in Research for Australia Committee, which undertook a national evaluation of research in Mathematics, Information and Computing Sciences.
- ARC College of Experts
- Consultant to the ARC.
- Non-Executive Director at KR Inc.
- Organised numerous large conferences including the 2014 International Conference on Social Robotics.
- Review Editor for the prestigious Artificial Intelligence Journal
- Editorial Board for AAAI/MIT Press
- Editorial Board Information Systems Journal
- Editorial Board International Journal of Social Robotics.
- ACM Eugene L. Lawler Award Committee for Humanitarian Contributions within Computer Science and Informatics.
Can supervise: YES
Artificial Intelligence, Explainable AI (XAI), AI Policy and Law, Social Robotics, Cybersecurity, The Internet of Everything, Privacy, Risk Management, Software Engineering, Human-Robot Interaction, Information Systems, Innovation and Enterprise, Legal and Ethical Issues of Artificial Intelligence.
Artificial Intelligence, Explainable AI (XAI), AI Policy and Law, Social Robotics, Cybersecurity, The Internet of Everything, Privacy, Risk Management, Software Engineering, Human-Robot Interaction, Information Systems, Innovation and Enterprise, Legal and Ethical Issues of Artificial Intelligence, Artificial Intelligence and Human Rights, Sustainable Development, Peace and Security.
Bi, Y & Williams, MA 2010, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Preface.
Castro, JL, Trillas, E & Zurita, JM 1998, 'Non-monotonic fuzzy reasoning'.
Fuzzy reasoning can provide techniques both for representing and managing the imprecision in commonsense reasoning. But, like human reasoning, it can lead to inconsistencies (inherent to imprecise or incomplete knowledge) that might be resolved within the framework of fuzzy logic, simulating human behavior. In this paper we analyze this kind of conflict and propose a non-monotonic fuzzy logic to resolve it. Moreover, we show that many (non-monotonic) human reasoning patterns can be modeled by means of this "non-monotonic fuzzy reasoning". © 1998 Elsevier Science B.V.
© 2019, Springer Nature Switzerland AG. In this article, we provide the epistemic-entrenchment and partial-meet characterizations of a new, important class of concrete revision operators (all of which satisfy the AGM postulates for revision), called Parametrized Difference revision operators (PD operators, for short). PD operators are natural generalizations of Dalal's revision operator, with a much greater range of applicability, hence, the epistemic-entrenchment and partial-meet characterizations of the latter are also provided, as a by-product. Lastly, we prove that PD operators satisfy the strong version of Parikh's relevance-sensitive axiom for belief revision, showing that they are fully compatible with the notion of relevance.
Abidi, S, Piccardi, M, Tsang, WH & Williams, M-A 2019, 'Well-M³N: A Maximum-Margin Approach to Unsupervised Structured Prediction', IEEE Transactions on Emerging Topics in Computational Intelligence.
Unsupervised structured prediction is of fundamental importance for the clustering and classification of unannotated structured data. To date, its most common approach still relies on the use of structural probabilistic models and the expectation-maximization (EM) algorithm. Conversely, structural maximum-margin approaches, despite their extensive success in supervised and semi-supervised classification, have not raised equivalent attention in the unsupervised case. For this reason, in this paper we propose a novel approach that extends the maximum-margin Markov networks (M3N) to an unsupervised training framework. The main contributions of our extension are new formulations for the feature map and loss function of M3N that decouple the labels from the measurements and support multiple ground-truth training. Experiments on two challenging segmentation datasets have achieved competitive accuracy and generalization compared to other unsupervised algorithms such as k-means, EM and unsupervised structural SVM, and comparable performance to a contemporary deep learning-based approach.
van Rijmenam, M, Erekhinskaya, T, Schweitzer, J & Williams, MA 2019, 'Avoid being the turkey: How big data analytics changes the game of strategy in times of ambiguity and uncertainty', Long Range Planning, vol. 52, no. 5.
In order for organisations to remain competitive in times of ambiguity and uncertainty, there is a need to detect and anticipate unknown unknowns, also called 'black swans'. When these are ignored they may lead to competitive struggles. In this paper, we build on this view and suggest that big data analytics can provide necessary insights to help change strategy making. Research suggests that ambidextrous organisations should focus on developing and maintaining their dynamic capabilities. Following on from this, we take a dynamic capabilities perspective and propose a theoretical framework to explain the intricacies of big data analytics. This framework explains the ability of organisations to detect, anticipate and respond strategically in ambiguous and uncertain business environments. For a meta-synthesis of 101 cases of big data analytics, we employ a multi-method approach that incorporates Natural Language Processing, semantic analysis and case analysis, allowing extraction and analysis of structured information from unstructured data. Overall, we find evidence of big data analytics helping to detect, anticipate and respond to industry disruption. We offer six propositions about the relationships between the levels of data analytics capabilities and strategic dynamic capabilities. We find that descriptive data analytics improves the capability of an organisation to understand the business context (sensing) and that predictive data analytics aids in the realisation of business opportunities (seizing). This study contributes to an understanding of big data analytics as a dynamic organisational capability that supports strategic decision-making in times of ambiguity and uncertainty. We conclude by suggesting areas for further investigation, particularly in regard to the strategic application of prescriptive data analytics.
Anshar, M & Williams, MA 2018, 'Evolving robot empathy towards humans with motor disabilities through artificial pain generation', AIMS Neuroscience, vol. 5, no. 1, pp. 56-73.
© 2018 the Author(s). In contact assistive robots, a prolonged physical engagement between robots and humans with motor disabilities due to shoulder injuries, for instance, may at times lead humans to experience pain. In this situation, robots will require sophisticated capabilities, such as the ability to recognize human pain in advance and generate counter-responses as follow-up empathic action. Hence, it is important for robots to acquire an appropriate pain concept that allows them to develop these capabilities. This paper conceptualizes empathy generation through the realization of synthetic pain classes integrated into a robot's self-awareness framework, with fault detection on the robot body serving as the primary source of pain activation. Projection of human shoulder motion into the robot arm motion acts as a fusion process, which is used as a medium to gather information for analysis and then to generate corresponding synthetic pain and empathic responses. An experiment is designed to mirror a human peer's shoulder motion into an observer robot. The results demonstrate that the fusion takes place accurately whenever unified internal states are achieved, allowing accurate classification of synthetic pain categories and generation of empathic responses in a timely fashion. Future work will consider the development of a pain activation mechanism.
Ojha, S, Williams, MA & Johnston, B 2018, 'The Essence of Ethical Reasoning in Robot-Emotion Processing', International Journal of Social Robotics, vol. 10, no. 2, pp. 211-223.
© 2017, Springer Science+Business Media B.V., part of Springer Nature. As social robots become more and more intelligent and autonomous in operation, it is extremely important to ensure that such robots act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also have the ability to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability of the emotions expressed. We argue that believability of a robot's emotions, although crucially necessary, is not a sufficient quality to elicit socially acceptable emotions. Thus, we stress the need for a higher level of cognition in the emotion processing mechanism, one which empowers social robots with the ability to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit such an experience. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot to reach the most socially acceptable emotional state when more than one emotion is elicited by an event. Experimental results show that ethical reasoning in EEGS helps in the generation of believable as well as socially acceptable emotions.
Anshar, M & Williams, MA 2016, 'Evolving synthetic pain into an adaptive self-awareness framework for robots', Biologically Inspired Cognitive Architectures, vol. 16, pp. 8-18.
In human-robot interaction, physical contact is the most common medium, and the more physical interaction occurs, the higher the possibility of causing humans to experience pain. Humans at times send this message out through social cues, such as verbal and facial expressions, which requires robots to have the skill to capture and translate these cues into useful information. Understanding the human pain concept and implementing it on robots plays a dominant role in allowing robots to acquire this social skill. However, the concept of human pain is strongly related to the concept of human self-awareness and to cognitive aspects with complex nerve mechanisms; hence, it is crucial to evolve appropriate self-awareness and pain concepts for robots. This paper focuses on imitating the concept of pain in a synthetic pain model, used to justify the integration and implementation of adaptive self-awareness in a real robot design framework, named ASAF. The framework develops an appropriate robot cognitive system, "self-consciousness", that includes two primary levels of self-concept, namely subjective and objective. Novel experiments are designed to measure whether a robot is capable of generating appropriate synthetic pain, whether the framework's reasoning skills support accurate synthetic pain acknowledgement, and, at the same time, whether it develops appropriate counter-responses. We find that the proposed framework enhances the awareness of the robot's body parts and prevents further catastrophic impact on robot hardware and possible harm to human peers.
Surden, H & Williams, M-A 2016, 'How Self-Driving Cars Work'.
Autonomous or 'self-driving' cars are vehicles that drive themselves without human supervision or input. Because of safety benefits that they are expected to bring, autonomous vehicles are likely to become more common. Notably, for the first time, people will share a physical environment with computer-controlled machines that can both direct their own activities and that have considerable range of movement. This represents a distinct change from our current context. Today people share physical spaces either with machines that have free range of movement, but are controlled by people (e.g. automobiles) or with machines that are controlled by computers, but highly constrained in their range of movement (e.g. elevators). The movements of today's machines are thus broadly predictable. The unrestricted, computer-directed movement of autonomous vehicles is an entirely novel phenomenon that may challenge certain unarticulated assumptions in our existing legal structure.
Problematically, the movements of autonomous vehicles may be less predictable to the ordinary people who will share their physical environment--such as pedestrians--than the comparable movements of human-driven vehicles. Today, a great deal of physical harm that might otherwise occur is likely avoided through humanity's collective ability to predict the movements of other people. In anticipating the behavior of others, we employ what psychologists call a 'theory of mind.' Theory of mind cognitive mechanisms allow us to extrapolate from our own internal mental states in order to estimate what others are thinking or likely to do. These cognitive systems allow us to make instantaneous, unconscious judgments about the likely actions of people around us, and therefore, to keep ourselves safe in the driving context. However, the theory of mind mechanisms that allow us to accurately model the minds of other people and interpret their communicative signals of attention and intention will be challenged in th...
Possible-world semantics are provided for Parikh's relevance-sensitive axiom for belief revision, known as axiom (P). Loosely speaking, axiom (P) states that if a belief set K can be divided into two disjoint compartments, and the new information φ relates only to the first compartment, then the second compartment should not be affected by the revision of K by φ. Using the well-known connection between AGM revision functions and preorders on possible worlds as our starting point, we formulate additional constraints on such preorders that characterise precisely Parikh's axiom (P). Interestingly, the additional constraints essentially generalise a criterion of plausibility between possible worlds that predates axiom (P). A by-product of our study is the identification of two possible readings of Parikh's axiom (P), which we call the strong and the weak versions of the axiom. Regarding specific operators, we show that Dalal's belief revision operator satisfies both weak and strong (P), and it is therefore relevance-sensitive.
Vitale, J, Williams, M-A, Johnston, B & Boccignone, G 2014, 'Affective facial expression processing via simulation: A probabilistic model', Biologically Inspired Cognitive Architectures, vol. 10, pp. 30-41.
Understanding the mental state of other people is an important skill for intelligent agents and robots operating within social environments. However, the mental processes involved in 'mind-reading' are complex. One explanation of such processes is Simulation Theory, which is supported by a large body of neuropsychological research. Yet, determining the best computational model or theory to use in simulation-style emotion detection is far from settled. In this work, we use Simulation Theory and neuroscience findings on Mirror-Neuron Systems as the basis for a novel computational model for handling affective facial expressions. The model is based on a probabilistic mapping of observations from multiple identities onto a single fixed identity ('internal transcoding of external stimuli'), and then onto a latent space ('phenomenological response'). Together with the proposed architecture we present some promising preliminary results.
Multiple Belief Change extends the classical AGM framework for Belief Revision introduced by Alchourrón, Gärdenfors, and Makinson in the early '80s. The extended framework includes epistemic input represented as a (possibly infinite) set of sentences, as
Goebel, R & Williams, M 2011, 'Editorial : The Expansion Continues: Stitching together the Breadth of Disciplines Impinging on Artificial Intelligence', Artificial Intelligence Journal, vol. 175, no. 5-6, pp. 929-929.
Macinnis-Ng, CM, Zeppel, MJ, Williams, M & Eamus, D 2011, 'Applying a SPA model to examine the impact of climate change on GPP of open woodlands and the potential for woody thickening', Ecohydrology, vol. 4, no. 3, pp. 379-393.
Woody thickening is a global phenomenon that influences landscape C density, regional ecohydrology and biogeochemical cycling. The aim of the work described here is to test the hypothesis that increased atmospheric CO2 concentration, with or without photosynthetic acclimation, can increase gross primary production (GPP) and that this can explain woody thickening. We examine mechanisms underlying the response of GPP and highlight the importance of changes in soil water content by applying a detailed soil-plant-atmosphere model. Through this model, we show that CO2 enrichment with decreased or increased D and photosynthetic acclimation results in decreased canopy water use because of reduced gs. The decline in water use coupled with increased photosynthesis resulted in increased GPP, water-use efficiency and soil moisture content. This study shows that this is a valid mechanism for GPP increase because of CO2 enrichment coupled with either a decrease or an increase in D, in water-limited environments. We also show that a large increase in leaf area index could be sustained in the future as a result of the increased soil moisture content arising from CO2 enrichment and this increase was larger if D decreases rather than increases in the future. Large-scale predictions arising from this simple conceptual model are discussed and found to be supported in the literature. We conclude that woody thickening in Australia and probably globally can be explained by the changes in landscape GPP and soil moisture balance arising principally from the increased atmospheric CO2 concentration.
O'Hara, ML, Sample, S & Williams, MA 2011, 'Heart failure: From the icu to step‐down—and home', Nursing Management, vol. 37, no. 8, pp. 36-39.
A detailed transition plan can help prepare eligible patients for discharge. © 2011 Lippincott Williams & Wilkins, Inc.
Benferhat, S, Dubois, D, Prade, H & Williams, M 2010, 'A Framework for Iterated Belief Revision Using Possibilistic Counterparts to Jeffrey's Rule', Fundamenta Informaticae, vol. 99, no. 2, pp. 147-168.
Intelligent agents require methods to revise their epistemic state as they acquire new information. Jeffrey's rule, which extends conditioning to probabilistic inputs, is appropriate for revising probabilistic epistemic states when new information comes
Collins, C, Fister, KR & Williams, M 2010, 'Optimal control of a cancer cell model with delay', Mathematical Modelling of Natural Phenomena, vol. 5, no. 3, pp. 63-75.
In this paper, we look at a model depicting the relationship of cancer cells in different development stages with immune cells and a cell cycle specific chemotherapy drug. The model includes a constant delay in the mitotic phase. By applying optimal control theory, we seek to minimize the cost associated with the chemotherapy drug and to minimize the number of tumor cells. Global existence of a solution has been shown for this model and existence of an optimal control has also been proven. Optimality conditions and characterization of the control are discussed. © EDP Sciences, 2010.
Delgado, K & Williams, M 2010, 'Diagnostic accuracy for coronary artery disease of multislice CT scanners in comparison to conventional coronary angiography: An integrative literature review', Journal of the American Academy of Nurse Practitioners, vol. 22, no. 9, pp. 496-503.
Purpose: To examine the quality of cardiac imaging done by multislice computed tomography (MSCT) and its ability to correctly identify significantly occluded segments of coronary arteries compared with quantitative coronary angiography. Data sources: Databases searched were CINAHL, MEDLINE, EBSCO, Academic Search Premier, Web of Science, and Health Source: Nursing/Academic Edition. Keywords used were "Computed Tomography", "Coronar* Angiogra*", and "Coronary Artery Disease". Studies from peer-reviewed journals published from 2002 to 2008 that compared quantitative coronary angiography to MSCT were evaluated. Additional sources were identified from review of reference lists from articles found in the electronic search. Conclusions: MSCT was best employed to screen for the absence of disease in patients who were in sinus rhythm, who had no previous bypass grafts or stents placed, had a low risk of calcifications, and who were not obese. Both 40- and 64-slice technology demonstrated the highest accuracy in screening for the absence of disease on a vessel-based analysis. Implications for Practice: Those who have multiple risk factors and are asymptomatic should still be screened via catheterization. More studies are needed to determine the effectiveness of newer 64-slice technology as a tool to positively identify CAD. © 2010 The Author(s) Journal compilation © 2010 American Academy of Nurse Practitioners.
Goebel, R & Williams, M 2010, 'The Expanding Breadth of Artificial Intelligence Research', Artificial Intelligence Journal, vol. 174, no. 2, pp. 133-133.
Memmott, RJ, Coverston, CR, Heise, BA, Williams, M, Maughan, ED, Kohl, J & Palmer, S 2010, 'Practical considerations in establishing sustainable international nursing experiences', Nursing Education Perspectives, vol. 31, no. 5, pp. 298-302.
An understanding of global health and the development of cultural competence are important outcomes of today's baccalaureate nursing programs.Thoughtfully designed international experiences can provide excellent opportunities to achieve those outcomes. Based on 16 years of providing international experiences within a baccalaureate curriculum, components are identified that contribute to the development of a sustainable international program. Areas addressed in the article are evaluating the fit with university and college mission, establishing the program within the university operational structure, selecting faculty and students, developing sites, designing a course, and program evaluation. Copyright © 2010 by National League for Nursing, Inc.
Chen, X, Liu, W & Williams, M 2009, 'Introduction: Practical Cognitive Agents and Robots', Autonomous Agents And Multi-Agent Systems, vol. 19, no. 3, pp. 245-247.
Christensen, BL & Williams, M 2009, 'Assessing postprandial glucose using 1,5-anhydroglucitol: An integrative literature review', Journal of the American Academy of Nurse Practitioners, vol. 21, no. 10, pp. 542-548.
Purpose: Recent studies have determined postprandial blood glucose is an independent risk factor for macrovascular complications. This risk exists, despite having HbA1C results within acceptable ranges for diabetes. 1,5-Anhydroglucitol (1,5AG) has been proposed as an appropriate indicator to detect and screen for postprandial hyperglycemia (PPHG). This review discusses the efficacy of 1,5AG to predict PPHG in order to reveal those who may be at risk for macrovascular complications. Data Sources: An electronic search was conducted from 2003 to 2008 in the following databases: Medline, CINAHL, Health Source: Nursing/Academic Edition, and Pre-CINAHL. Any articles relating to 1,5AG as a marker for PPHG were used. The search was limited to any human research articles published in English. All articles were reviewed for additional relevant studies. Conclusions: 1,5AG was found to be a reliable indicator of PPHG, even when HbA1C levels were within target ranges. 1,5AG may be a simple and effective tool for primary care providers to identify those at risk for macrovascular complications, who would otherwise go unnoticed if assessed by HbA1C alone. © 2009 American Academy of Nurse Practitioners.
Collins, C, Fister, KR, Key, B & Williams, M 2009, 'Blasting neuroblastoma using optimal control of chemotherapy', Mathematical Biosciences and Engineering, vol. 6, no. 3, pp. 451-467.
A mathematical model is used to investigate the effectiveness of the chemotherapy drug Topotecan against neuroblastoma. Optimal control theory is applied to minimize the tumor volume and the amount of drug utilized. The model incorporates a state constraint that requires the level of circulating neutrophils (white blood cells that form an integral part of the immune system) to remain above an acceptable value. The treatment schedule is designed to simultaneously satisfy this constraint and achieve the best results in fighting the tumor. Existence and uniqueness of the solution of the optimality system, which is the state system coupled with the adjoint system, is established. Numerical simulations are given to demonstrate the behavior of the tumor and the immune system components represented in the model.
Understanding the literature about the efficacy of green tea consumption in preventing and slowing the progression of cancers is critical. A systematic review of the literature was conducted using an electronic search to identify studies from 2000 to 2008 in the following databases: Alt HealthWatch, CINAHL, Medline, Health Source - Consumer Edition, Health Source: Nursing/Academic Edition, Web of Science (ISI), and the Cochrane Library. Although the evidence from this review suggested associations between green tea consumption and a decreased risk for some cancers, the findings were inconclusive. In selected cases, green tea was effective in slowing the progression of the earlier stages of cancer. However, contrary evidence is reported and the dose and duration of use is variable. Most evidence stems from self-reports. Research using more rigorous designs to investigate the efficacy of green tea in humans is needed. © 2009 The Authors. Journal Compilation © 2009 Blackwell Publishing Asia Pty Ltd.
Williams, M, McCarthy, J, Gardenfors, P, Stanton, CJ & Karol, A 2009, 'A Grounding Framework', Autonomous Agents And Multi-Agent Systems, vol. 19, no. 3, pp. 272-296.
In order for an agent to achieve its objectives, make sound decisions, and communicate and collaborate with others effectively, it must have high quality representations. Representations can encapsulate objects, situations, experiences, decisions and behavior, to name just a few. Our interest is in designing high quality representations; it therefore makes sense to ask of any representation: what does it represent; why is it represented; how is it represented; and, importantly, how well is it represented? This paper identifies the need to develop a better understanding of the grounding process as key to answering these important questions. The lack of a comprehensive understanding of grounding is a major obstacle in the quest to develop genuinely intelligent systems that can make their own representations as they seek to achieve their objectives. We develop an innovative framework which provides a powerful tool for describing, dissecting and inspecting grounding capabilities, with the necessary flexibility to conduct meaningful and insightful analysis and evaluation. The framework is based on a set of clearly articulated principles and has three main applications. First, it can be used at both theoretical and practical levels to analyze the grounding capabilities of a single system and to evaluate its performance. Second, it can be used to conduct comparative analysis and evaluation of grounding capabilities across a set of systems. Third, it offers a practical guide to assist the design and construction of high performance systems with effective grounding capabilities.
Overcoming barriers to clinical preventive services. © 2008 Lippincott Williams & Wilkins, Inc.
This document reviews the book 'Robotics: State of the Art and Future Challenges'. The review begins by providing a chapter-by-chapter summary of the book, and then concludes with a detailed review of the entire book. Robotics is a rich and exciting field of Artificial Intelligence. It has taken great strides in the last decade, and a book on the state of the art and future challenges is timely. The reviewed book will assist AI researchers to keep abreast of developments in robotics, a flagship area that enjoys a high profile and profound visibility in broader society, by providing an empirically based overview of the field. The book is a major outcome of a comparative robotics study, and is unique largely because such field studies require a strong team of experts to invest significant time and effort, and site visits require significant funding. The book contains an extensive high-level comparative review of the field of robotics across several pioneering research centers in several geographical regions, conducted by a team of scientists that included experts from NASA and US-based universities. The book has seven chapters and an appendix containing the biographies of team members. The broad field of robotics is divided into the following six research areas, each of which has a single chapter devoted to it: robotic vehicles, space robotics, humanoid robots, industrial, service and personal robots, robotics in biology and medicine, and networked robots. The data for the reviews was obtained from site visits to 50 laboratories in countries such as Japan, South Korea, France, Germany, Italy, Spain, Sweden, Switzerland, and the UK. A 51st virtual site visit was conducted to Australia.
Anshar, M & Williams, M 2007, 'Extended Evolutionary Fast Learn-to-Walk Approach for Four-Legged Robots', Journal of Bionic Engineering, vol. 4, no. 4, pp. 255-264.
Robot locomotion is an active research area. In this paper we focus on the locomotion of quadruped robots. An effective walking gait of quadruped robots is mainly concerned with two key aspects, namely speed and stability. The large search space of potential parameter settings for leg joints means that hand tuning is not feasible in general. As a result walking parameters are typically determined using machine learning techniques. A major shortcoming of using machine learning techniques is the significant wear and tear of robots since many parameter combinations need to be evaluated before an optimal solution is found. This paper proposes a direct walking gait learning approach, which is specifically designed to reduce wear and tear of robot motors, joints and other hardware. In essence we provide an effective learning mechanism that leads to a solution in a faster convergence time than previous algorithms. The results demonstrate that the new learning algorithm obtains a faster convergence to the best solutions in a short run. This approach is significant in obtaining faster walking gaits which will be useful for a wide range of applications where speed and stability are important. Future work will extend our methods so that the faster convergence algorithm can be applied to a two legged humanoid and lead to less wear and tear whilst still developing a fast and stable gait.
Larsen, L, Mandleco, B, Williams, M & Tiedeman, M 2006, 'Childhood obesity: Prevention practices of nurse practitioners', Journal of the American Academy of Nurse Practitioners, vol. 18, no. 2, pp. 70-79.
Purpose: The purposes of this study were to (a) describe the prevention practices of nurse practitioners (NPs) regarding childhood obesity, (b) compare the practices of NPs by specialty, practice setting, and awareness of childhood obesity prevention guidelines, (c) identify relationships between prevention practices and demographic variables of NPs, and (d) examine the resources for and barriers to implementing prevention practices. Data sources: A convenience sample of 99 family NPs (FNPs) and pediatric NPs (PNPs) from the Intermountain area was used. Participants completed a questionnaire based on documented risk factors for childhood obesity as well as prevention guidelines developed by the American Academy of Pediatrics (AAP). Conclusions: NPs working in family practice or general pediatric practice settings were not consistently using the BMI-for-age index to screen for childhood obesity, as recommended by the AAP. However, they were teaching parents to promote healthy food choices and physical activity in their families. PNPs and FNPs working in a pediatric practice setting and NPs who were aware of prevention guidelines were more likely to perform several prevention strategies than FNPs working in a family practice setting and those who were unaware of guidelines. Major barriers to implementing childhood obesity prevention strategies included parental attitudes, the American lifestyle, and lack of resources for both the NP and the family. The main resources NPs used in preventing childhood obesity were a dietician, journal articles, and Web sites. Implications for practice: Although the majority of the NPs in this study reported being aware of childhood obesity prevention guidelines (73.7%), most were not consistently using BMI for age or monitoring children at increased risk for obesity. Because childhood obesity is escalating at such a rapid rate, it is critical that NPs working in family practice and pediatric practice settings take the necessary steps...
Benferhat, S, Kaci, S, Le Berre, D & Williams, M 2004, 'Weakening Conflicting Information for Iterated Revision and Knowledge Integration', Artificial Intelligence Journal, vol. 153, no. 1-2, pp. 339-371.
Alder, R, Lookinland, S, Berry, JA & Williams, M 2003, 'A systematic review of the effectiveness of garlic as an anti-hyperlipidemic agent.', Journal of the American Academy of Nurse Practitioners, vol. 15, no. 3, pp. 120-129.
PURPOSE: To 1) conduct a thorough search of the literature for randomized controlled trials (RCTs) addressing the efficacy of garlic as an antihyperlipidemic agent, 2) critically appraise those studies, and 3) make a recommendation for practicing health care professionals. DATA SOURCES: Two independent reviewers extracted data from the articles identified from several databases, using the previously tested Boyack and Lookinland Methodological Quality Index (MQI) as the standard. RESULTS: Six of ten studies found garlic to be effective. The average drop in total cholesterol was 24.8 mg/dL (9.9%), LDL 15.3 mg/dL (11.4%), and triglycerides 38 mg/dL (9.9%). The overall average MQI score was 39.6% (18%-70%). Major shortcomings of many of the RCTs included short duration, lack of power analysis and intention to treat analysis, as well as lack of control of diet as a confounding variable. CONCLUSION/IMPLICATIONS: The low methodological quality of the studies makes it difficult to recommend garlic as an antihyperlipidemic agent. Until larger RCTs of longer duration, which correct the existing methodological flaws, are designed and carried out, it is best not to recommend garlic be used to treat mild to moderate hyperlipidemia.
Tselekidis, G, Peppas, P & Williams, M 2003, 'Belief revision and organisational knowledge dynamics', Journal of the Operational Research Society, vol. 54, no. 9, pp. 914-923.
Benferhat, S, Dubois, D, Prade, H & Williams, M 2002, 'A Practical Approach to Revising Prioritized Knowledge Bases', Studia Logica, vol. 70, no. 1, pp. 105-130.
This paper investigates simple syntactic methods for revising prioritized belief bases that are semantically meaningful in the frameworks of possibility theory and of Spohn's ordinal conditional functions. Here, revising prioritized belief bases amounts to conditioning a distribution function on interpretations. The input information leading to the revision of a knowledge base can be sure or uncertain. Different types of scales for priorities are allowed: finite vs. infinite, numerical vs. ordinal. Syntactic revision is envisaged here as a process which transforms a prioritized belief base into a new prioritized belief base, and thus allows a subsequent iteration.
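One of the semantic frameworks named above, Spohn's ordinal conditional functions (OCFs), can be sketched in a few lines. The code below shows only the standard (A, m)-conditionalisation step on a toy set of worlds and ranks; it is not the paper's syntactic method, and the worlds, ranks, and strength value are invented.

```python
def rank_of(kappa, worlds):
    """Rank of a proposition = minimum rank of its worlds."""
    return min(kappa[w] for w in worlds)

def conditionalise(kappa, a_worlds, m):
    """Spohn-style (A, m)-conditionalisation of an OCF.
    kappa: dict mapping each world to its rank (0 = most plausible).
    a_worlds: the set of worlds satisfying the input proposition A.
    m: the strength (firmness) with which A is accepted after revision."""
    not_a = set(kappa) - set(a_worlds)
    ra, rna = rank_of(kappa, a_worlds), rank_of(kappa, not_a)
    new = {}
    for w in kappa:
        if w in a_worlds:
            new[w] = kappa[w] - ra          # A-worlds shifted down to rank 0
        else:
            new[w] = kappa[w] - rna + m     # non-A-worlds start at rank m
    return new

kappa = {"w1": 0, "w2": 1, "w3": 2}         # w1 currently most plausible
revised = conditionalise(kappa, {"w2", "w3"}, m=2)
# most plausible world is now w2; w1 is pushed up to rank m
```

The syntactic methods the paper studies are designed to produce, on belief bases, the same result this semantic conditioning produces on the underlying distribution.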
Liu, W & Williams, M 2002, 'Trustworthiness Of Information Sources And Information Pedigrees', Lecture Notes in Computer Science, vol. 2333 (Intelligent Agents VIII: Agent Theories, Architectures, and Languages), pp. 290-306.
To survive, and indeed thrive, in an open heterogeneous information-sharing environment, an agent's ability to evaluate the trustworthiness of other agents becomes crucial. In this paper, we investigate a procedure for evaluating an agent's trustworthiness...
Boyer, LE, Williams, M, Calker, LC & Marshall, ES 2001, 'Hispanic women's perceptions regarding cervical cancer screening', Journal of obstetric, gynecologic, and neonatal nursing : JOGNN / NAACOG, vol. 30, no. 2, pp. 240-245.
OBJECTIVE: To examine factors affecting cervical cancer screening behaviors. DESIGN: Qualitative, descriptive. SETTING: Interviews were conducted in participants' homes. PARTICIPANTS: Purposive sample of 20 Hispanic women 18 to 65 years of age. RESULTS: Participants accessed the health care system primarily during times of illness or in association with impending marriage, obtaining birth control, or childbearing. Barriers to screening participation included personal/cultural and provider/ system factors. Motivators included personal experience with others having cervical cancer, perceived importance of the Pap smear in maintaining health, reduction of financial barriers, and access to culturally appropriate health care. CONCLUSIONS: Factors affecting cervical cancer screening behavior among Hispanic women are identifiable and describable. Knowledge of barriers and motivators can be utilized to design effective nursing interventions and community-based programs.
A new methodology for developing theories of action has recently emerged which provides means for formally evaluating the correctness of such theories. Yet, for a theory of action to qualify as a solution to the frame problem, not only does it need to produce correct inferences, but moreover, it needs to derive these inferences from a concise representation of the domain at hand. The new methodology however offers no means for assessing conciseness. Such a formal account of conciseness is developed in this paper. Combined with the existing criterion for correctness, our account of conciseness offers a framework where proposed solutions to the frame problem can be formally evaluated. © 2001 Kluwer Academic Publishers.
Blad, KD, Lookinland, S, Measom, G, Bond, AE & Williams, M 2000, 'Assessing dopamine concentrations: An evidence-based approach', American Journal of Critical Care, vol. 9, no. 2, pp. 130-139.
Background Both overmedication and undermedication can be potentially life threatening. If the actual volume of a 100-mL intravenous bag used to mix dopamine solutions is greater than the labeled volume, overdilution of medication can occur, resulting in an ineffective hemodynamic response in patients and thus an unintended adverse drug event. Objectives To determine the actual fluid volumes of 100-mL intravenous bags, compare the actual volumes of 100-mL bags from the 3 major manufacturers of intravenous bags, and determine if the excess volume is sufficient to cause a clinically significant overdilution of dopamine. Methods A comparative descriptive design was used. The volumes of 162 intravenous bags of 100 mL of 5% dextrose in water (32 lot numbers with various expiration dates) were measured. Visual volume was confirmed by using a 250-mL graduated cylinder. Volume by weight was determined with a calibrated laboratory-quality electronic scale. On the basis of a mathematical model, any overfill greater than 110 mL was considered clinically significant. Results The difference between actual and labeled volumes was statistically and clinically significant. Mean visual volume was 110.20 mL (range, 107-114 mL). Mean weighed volume was 109.26 mL (range, 106.15-112.09 mL). The fluid volumes among bags from the 3 major manufacturers differed significantly (P<.001). Conclusions The overfill in a sufficient number of 100-mL intravenous bags was enough to cause clinically significant overdilution of dopamine. When dopamine or other vasoactive medications are mixed, either an in-line buret or premixed bags of the drugs should be used to prevent an unintended adverse drug event.
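The overdilution effect reported in this abstract can be reproduced as a short worked calculation. The 400-mg dopamine admixture below is an illustrative assumption (a common admixture, not stated in the abstract); the 110.20-mL mean volume is the study's own figure.

```python
def concentration(drug_mg, bag_volume_ml):
    """Concentration of the admixture in mg/mL."""
    return drug_mg / bag_volume_ml

dose_mg = 400                            # hypothetical dopamine admixture
labeled = concentration(dose_mg, 100)    # assumes the bag holds exactly 100 mL
actual = concentration(dose_mg, 110.20)  # study's mean visual volume

# fraction by which the mixed solution is weaker than its label claims
error_pct = 100 * (labeled - actual) / labeled
# about 9% less concentrated, so an infusion pump programmed with the
# labeled concentration under-delivers drug by roughly the same fraction
```

This is why the authors recommend an in-line buret or premixed bags: both remove the unknown overfill from the calculation.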
Dingman, SK, Williams, M, Fosbinder, D & Warnick, M 1999, 'Implementing a caring model to improve patient satisfaction', Journal of Nursing Administration, vol. 29, no. 12, pp. 30-37.
Objective: To evaluate the effect of implementing a Caring Model on patient satisfaction. Background: Patient satisfaction has become an important indicator of quality care and financial success of healthcare institutions. Acknowledging the importance of nurse caring behaviors and the impact on patient satisfaction has been relatively recent. Based on a synthesis of the literature, five caring behaviors have been formulated into a model; no single study identified the five selected behaviors included in this study. Methods: In an acute care setting, eight patient satisfaction attributes were incorporated into a Caring Model. Implementation of the model among nursing staff members included an educational in-service, printing of the behaviors on the name badge, reminders in monthly staff meetings and nursing rounds, and inclusion of the caring behaviors in patient care documentation, job descriptions, and performance appraisals. The impact upon patient satisfaction was compared 6 months' preintervention to 6 months' postintervention. Results: Postintervention, the patient satisfaction attributes of Nurses Anticipating Needs and Responds to Requests significantly increased. Attributes that began preintervention as immediate priorities for improvement became major strengths postintervention. Conclusions/Implications: Results of this study provide evidence that nurse caring behaviors can influence patient satisfaction. For a Caring Model to be effective, it must become an integral part of strategic planning and be implemented throughout the entire organization. To sustain the effects of the model, there must be frequent reminders among staff members. Nurse caring is an important predictor of patient satisfaction. The authors discuss the effect of implementing a caring model on patient satisfaction. In an acute care setting, eight patient satisfaction attributes incorporated into five nurse caring behaviors were evaluated pre- and postintervention. Results of the study...
Most information systems are faced with incomplete information, even for simple database applications; therefore, they must make plausible conjectures in order to operate in a satisfactory way. A simple example is the closed world assumption, which is us
Polshakov, VI, Williams, MA, Gargaro, AR, Frenkiel, TA, Westley, BR, Chadwick, MP, May, FEB & Feeney, J 1997, 'High-resolution solution structure of human pNR-2/pS2: A single trefoil motif protein', Journal of Molecular Biology, vol. 267, no. 2, pp. 418-432.
pNR-2/pS2 is a 60 residue extracellular protein, which was originally discovered in human breast cancer cells, and subsequently found in other tumours and normal gastric epithelial cells. We have determined the three-dimensional solution structure of a C58S mutant of human pNR-2/pS2 using 639 distance and 137 torsion angle constraints obtained from analysis of multidimensional NMR spectra. A series of simulated annealing calculations resulted in the unambiguous determination of the protein's disulphide bonding pattern and produced a family of 19 structures consistent with the constraints. The peptide contains a single 'trefoil' sequence motif, a region of about 40 residues with a characteristic sequence pattern, which has been found, either singly or as a repeat, in about a dozen extracellular proteins. The trefoil domain contains three disulphide bonds, whose 1-5, 2-4 and 3-6 cysteine pairings form the structure into three closely packed loops with only a small amount of secondary structure, which consists of a short α-helix packed against a two-stranded antiparallel β-sheet. The structure of the domain is very similar to those of the two trefoil domains that occur in porcine spasmolytic polypeptide (PSP), the only member of the trefoil family whose three-dimensional structure has been previously determined. Outside the trefoil domain, which forms the compact 'head' of the molecule, the N and C-terminal strands are closely associated, forming an extended 'tail', which has some β-sheet character for part of its length and which becomes more disordered towards the termini as indicated by 15N(1H) NOEs. We have considered the structural implications of the possible formation of a native C58-C58 disulphide-bonded homodimer. Comparison of the surface features of pNR-2/pS2 and PSP, and consideration of the sequences of the other human trefoil domains in the light of these structures, illuminates the possible role of specific residues in ligand/receptor binding.
Ravert, P, Williams, M & Fosbinder, DM 1997, 'The Interpersonal Competence Instrument for Nurses', Western Journal of Nursing Research, vol. 19, no. 6, pp. 781-791.
A new tool, the Interpersonal Competence Instrument for Nurses, was evaluated for content validity and readability. Evaluators consisted of a panel of 10 nursing experts. The instrument measures four categories of the patient-nurse interaction: translating, getting to know you, establishing trust, and going the extra mile. The content validity indexes (CVI) for 14 of 15 behaviors within the four categories ranged from .8 to 1.0 and are considered to have content validity. The CVI for the 15th behavior, clicking, was calculated as .7, and, thus, was modified. The CVI for the entire instrument of 111 items was determined to be .84. Readability analyzed with the SMOG formula established a grade level of 8.09. Additional psychometric testing of the tool is in progress. Internal consistency reliability is being evaluated through the use of coefficient alpha and item analysis. Construct validity is being estimated through the experimental approach. The internal structure of the instrument is being assessed through factor analysis. Presuming that this instrument continues to demonstrate validity and reliability with future testing, it will facilitate the evaluation of nurse-patient interaction and promote focused education for nurses in the career development continuum.
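The CVI figures quoted in this abstract follow the standard proportion-of-agreement computation, sketched below. The expert ratings are made up for illustration (the paper's raw ratings are not given here); the convention assumed is a 4-point relevance scale where 3 or 4 counts as relevant.

```python
def item_cvi(ratings, relevant_threshold=3):
    """Item-level CVI: proportion of experts rating the item relevant."""
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

def scale_cvi(all_ratings):
    """Scale-level CVI: average of the item-level CVIs."""
    cvis = [item_cvi(r) for r in all_ratings]
    return sum(cvis) / len(cvis)

# 10 hypothetical experts rating 3 hypothetical items on a 1-4 scale
items = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],   # all relevant -> CVI 1.0
    [4, 3, 2, 4, 3, 4, 1, 3, 4, 4],   # 8 of 10 relevant -> CVI 0.8
    [3, 2, 4, 3, 2, 3, 4, 2, 3, 1],   # 6 of 10 relevant -> CVI 0.6
]
```

On this reading, the paper's behavior with CVI .7 fell below the usual .78-.80 acceptance region for 10 raters, which is why it was modified rather than retained.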
Williams, MA 1997, 'Fostering collegiality.', Nursing management, vol. 28, no. 6, p. 66.
Antoniou, G, Courtney, A, Ernst, J & Williams, M 1996, 'A System For Computing Constrained Default Logic Extensions', Lecture Notes in Computer Science, vol. 1126, pp. 237-250.
The aim of this paper is to describe the algorithmic foundations of the part of the program Exten responsible for the computation of extensions in Constrained Default Logic. Exten is a system that computes extensions for various default logics. The effic
Alchourron, Gärdenfors and Makinson have developed and investigated a set of rationality postulates which appear to capture much of what is required of any rational system of theory revision. This set of postulates describes a class of revision functions; however, it does not provide a constructive way of defining such a function. There are two principal constructions of revision functions, namely an epistemic entrenchment and a system of spheres. We refer to their approach as the AGM paradigm. We provide a new constructive modeling for a revision function based on a nice preorder on models, and furthermore we give explicit conditions under which a nice preorder on models, an epistemic entrenchment, and a system of spheres yield the same revision function. Moreover, we provide an identity which captures the relationship between revision functions and update operators (as defined by Katsuno and Mendelzon). © 1995, Duke University Press. All Rights Reserved.
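The model-based construction discussed above, a preorder on models inducing a revision function, can be sketched minimally: the revised belief state is captured by the most plausible models of the new information. The propositional worlds and ranks below are toy values, and the ranking is given directly as numbers rather than as a preorder relation.

```python
def revise(ranking, new_info_models):
    """Return the most plausible models consistent with the new information.
    ranking: dict mapping each model to a plausibility rank (lower = better),
    encoding a total preorder on models.
    new_info_models: the set of models satisfying the new sentence."""
    best = min(ranking[m] for m in new_info_models)
    return {m for m in new_info_models if ranking[m] == best}

# worlds over atoms {p, q}, encoded as frozensets of true atoms;
# the current beliefs favour the world where both p and q hold
ranking = {
    frozenset({"p", "q"}): 0,
    frozenset({"p"}): 1,
    frozenset({"q"}): 1,
    frozenset(): 2,
}

# revise by "not q": the models where q is false
not_q = {m for m in ranking if "q" not in m}
result = revise(ranking, not_q)   # -> {frozenset({'p'})}
```

The constructive question the paper addresses is when such a ranking, an epistemic entrenchment, and a system of spheres all induce the same `revise` function.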
Chen, S & Williams, M 2008, 'Learning Personalized Ontologies from Text: A Review on an Inherently Transdisciplinary Area' in Gonzalez, RA, Chen, N & Dahanayake, A (eds), Personalized Information Retrieval and Access: Concepts, Methods and Practices, IGI Global, UK & USA, pp. 1-29.
Book Chapter on Ontology learning - a review of the main concepts of ontologies and the state of the art in the area of ontology learning from text.
Gardenfors, P & Williams, M 2007, 'Multi-Agent Communication, Planning, and Collaboration' in Schalley, AC & Khlentzos, D (eds), Mental States: Language and Cognitive Structure, John Benjamins, Amsterdam, pp. 197-253.
Williams, M 2007, 'Computer Mediated Communication' in Editor, JRB & Editor, SRC (eds), International Encyclopedia of Organization Studies, Sage Publications, London, pp. 207-212.
Williams, M & Gardenfors, P 2007, 'Communication, Planning and Collaboration based on Representations and Simulations' in Editor, ACS & Editor, DK (eds), Mental States: Volume 1: Evolution, function, nature; Volume 2: Language and cognitive structure, John Benjamins Publishing Company, Amsterdam, The Netherlands, pp. 95-122.
Williams, M & Elliot, S 2003, 'An Evaluation of Intelligent Agent based Innovation in the Wholesale Financial Services Industry' in Andersen, KV, Elliot, S, Swatman, P, Trauth, E & Bjørn-Andersen, N (eds), Seeking Success in E-Business, Kluwer Academic Publishers, USA, pp. 91-105.
In this paper we describe the problems and challenges facing Australian corporations in the Wholesale Financial Services sector and describe a research model which seeks to assess the impact of emerging Intelligent Agent enabled e-business initiatives, particularly in the areas of system architecture and mass customisation. The purpose is to assist these firms to achieve a level of international competitiveness in this area through (a) the investigation and longitudinal monitoring of the current status of, and further developments in, intelligent agent technologies, and (b) the investigation of emergent applications and successful approaches for the adoption and implementation of these key technologies in the provision of improved value-added customer services. We argue that a multidisciplinary integration of e-business strategy, finance, intelligent agent architectures and knowledge technologies offers a previously unexplored solution to the documented challenges confronting Australia's Wholesale Financial Services industry. Agents can evolve over time iteratively and independently, without impacting other agents. A key difference between agent architectures and more traditional architectures is that instead of building relationships between software components at design time, agent architectures allow relationships to be formed on the fly at run-time. This results in highly responsive systems that are sensitive to the dynamic financial services context and that may be opportunistic in any competitive, complex business environment.
Madhisetty, S & Williams, MA 2019, 'Managing privacy through key performance indicators when photos and videos are shared via social media', Advances in Intelligent Systems and Computing, pp. 1103-1117.
© Springer Nature Switzerland AG 2019. There are many definitions of privacy, and what is considered sensitive varies from individual to individual. When a document is shared it may reveal certain information, and the exchange of information is grounded in a specific context. This contextual grounding may not be afforded when photos and videos are shared, because they may contain rich semantic and syntactic information coded as tacit knowledge. Identifying sensitive information in a photo or a video is a major problem; therefore, rather than making assumptions about what is sensitive in a photo or a video, this research asked a group of study participants why they share content and what their concerns are (if any). This enabled inferences to be made about categories of sensitivity in accordance with the participants' responses. Interview data was gathered and Grounded Theory was applied. The following themes emerged from the data: a major theme in which no privacy concerns developed; three sub-themes in which varying levels of privacy concern developed; and key performance indicators that manage levels of privacy. This paper focuses on the main themes' key performance indicators and how they can manage privacy when photos and videos are shared over social media.
Madhisetty, S & Williams, MA 2019, 'The role of trust and control in managing privacy when photos and videos are stored or shared', Advances in Intelligent Systems and Computing, pp. 127-140.
© Springer Nature Switzerland AG 2019. A photo or a video could contain sensitive information coded as tacit information, which makes it difficult to gauge the loss of privacy if such a photo or video were shared. Social media applications like Facebook, Twitter, WhatsApp and many more are becoming popular, and the instant sharing of information via photos and videos is making the management of issues arising from the loss of privacy more difficult. Many users of social media trust that their content will not be used for purposes other than those originally intended. This paper discusses not only how much of that trust is real and how much of it is forced, but also demonstrates the reasoning behind forced trust. These inferences were made after data collection via interviews and data analysis using Grounded Theory.
Madhisetty, S, Williams, MA, Massy-Greene, J, Franco, L & El Khoury, M 2019, 'How to manage privacy in photos after publication', ICEIS 2019 - Proceedings of the 21st International Conference on Enterprise Information Systems, pp. 162-168.
Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. Photos and videos, once published, may stay available for people to view unless they are deleted by the publisher of the photograph. If the content is downloaded and re-uploaded by others, it loses all the privacy settings once afforded by the publisher of the photograph or video via social media settings, which means it could be modified or, in some cases, misused by others. Photos also contain tacit information, which cannot be completely interpreted at the time of publication; because sensitive information is coded as tacit information, it may be revealed to others. Tacit information allows different interpretations and creates difficulty in understanding the loss of privacy, and the free flow and availability of tacit information embedded in a photograph can cause serious privacy problems. The solution discussed in this paper addresses the difficulty of managing privacy due to the tacit information embedded in a photo. It provides an offline solution in which a photograph cannot be modified or altered and is automatically deleted after a period of time: the Exif data of the photograph is extended with an in-built automatic-deletion feature, and access to the image is restricted by scrambling it with an added hash value, so that only a customized application can unscramble the image and make it available. This is intended to provide a novel offline solution for managing the availability of an image after publication.
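A scheme of this general shape, an image bundled with an expiry time, an integrity hash, and a scrambled payload only a custom viewer can open, might be sketched as below. The container format, the XOR scrambling, and all function names are hypothetical illustrations, not the paper's implementation; a real system would use authenticated encryption rather than an XOR keystream.

```python
import hashlib
import time

def scramble(data: bytes, key: bytes) -> bytes:
    """Reversible XOR keystream; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def package(image: bytes, lifetime_s: int, secret: bytes) -> dict:
    """Bundle an image with an expiry time and an integrity hash."""
    return {
        "expires": int(time.time()) + lifetime_s,
        "sha256": hashlib.sha256(image).hexdigest(),  # detects modification
        "payload": scramble(image, secret).hex(),
    }

def unpack(container: dict, secret: bytes) -> bytes:
    """Custom-viewer side: enforce expiry and integrity before display."""
    if time.time() > container["expires"]:
        raise ValueError("image expired: auto-deletion window passed")
    image = scramble(bytes.fromhex(container["payload"]), secret)
    if hashlib.sha256(image).hexdigest() != container["sha256"]:
        raise ValueError("image was modified after publication")
    return image
```

The design point is that both protections survive re-sharing: whoever copies the container still cannot view it without the custom application, and the application refuses expired or altered images.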
Skillicorn, D, Alsadhan, N, Billingsley, R & Williams, MA 2019, 'Measuring Human Emotion in Short Documents to Improve Social Robot and Agent Interactions', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 29-41.
© 2019, Springer Nature Switzerland AG. Social robots and agents can interact with people better if they can infer their affective state (emotions). While they cannot yet recognise affective state from tone and body language, they can use the fragments of speech that they (over)hear. We show that emotions – as conventionally framed – are difficult to detect. We suggest, from empirical results, that this is because emotions are the wrong granularity; and that emotions contain subemotions that are much more clearly separated from one another, and so are both easier to detect and to exploit.
Gardenfors, P, Williams, M-A, Johnston, B, Billingsley, R, Vitale, J, Peppas, P & Clark, J 2018, 'Event boards as tools for holistic AI', International Workshop on Artificial Intelligence and Cognition 2018, Palermo, Italy.
Gudi, SLKC, Ojha, S, Sidra, Johnston, B & Williams, MA 2017, 'A proactive robot tutor based on emotional intelligence', Advances in Intelligent Systems and Computing, International Conference on Robot Intelligence Technology and Applications, Springer, Korea, pp. 113-120.
© Springer International Publishing AG, part of Springer Nature 2019. In recent years, social robots have been playing a vital role in various areas, acting as companions and assisting in everyday tasks, health, interaction, teaching, etc. In the case of a robot tutor, the robot's actions are limited: it may not fully understand the emotions of the student, and it may continue lecturing even though the user is bored or has walked away from the robot. This situation makes users feel that a robot cannot supersede a human being because it is not in a position to understand emotions. To overcome this issue, in this paper we present an Emotional Classification System (ECS) in which the robot adapts to the mood of the user and behaves accordingly by becoming proactive. It works on the basis of the emotion tracked by the robot using its emotional intelligence. A scenario of a robot as a sign-language tutor assisting people with speech and hearing impairments is used to validate our model. Real-time implementations and analysis are further discussed using the Pepper robot as a platform.
Ojha, S, Gudi, SLKC, Vitale, J, Williams, MA & Johnston, B 2019, 'I remember what you did: A behavioural guide-robot', Advances in Intelligent Systems and Computing, pp. 273-282.
© Springer International Publishing AG, part of Springer Nature 2019. Robots are coming closer to human society following the birth of the emerging field of Social Robotics, the branch of robotics concerned with the design and development of robots that can be employed in human society for the welfare of mankind. The applications of social robots range from household domains, such as elderly and child care, to educational domains like personal psychological training and tutoring. If such robots are intended to work closely with young children, it is extremely important to ensure that they teach not only facts but also important social aspects, such as knowing what is right and what is wrong, because we do not want to produce a generation of kids that knows only facts but not morality. In this paper, we present a mechanism used in our computational model (EEGS) for social robots, in which the emotions and behavioural response of the robot depend on how a person has previously treated the robot. For example, if a person has previously treated the robot well, it will respond accordingly, while if the person has previously mistreated the robot, it will make the person realise the issue. A robot with such a quality can be very useful in teaching good manners to the future generation of kids.
Agrawal, S & Williams, MA 2018, 'Would You Obey an Aggressive Robot: A Human-Robot Interaction Field Study', RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE International Symposium on Robot and Human Interactive Communication, IEEE, Nanjing, China, pp. 240-246.
© 2018 IEEE. Social Robots have the potential to be of tremendous utility in healthcare, search and rescue, surveillance, transport, and military applications. In many of these applications, social robots need to advise and direct humans to follow important instructions. In this paper, we present the results of a Human-Robot Interaction field experiment conducted using a PR2 robot to explore key factors involved in obedience of humans to social robots. This paper focuses on studying how the human degree of obedience to a robot's instructions is related to the perceived aggression and authority of the robot's behavior. We implemented several social cues to exhibit and convey both authority and aggressiveness in the robot's behavior. In addition to this, we also analyzed the impact of other factors such as perceived anthropomorphism, safety, intelligence and responsibility of the robot's behavior on participants' compliance with the robot's instructions. The results suggest that the degree of perceived aggression in the robot's behavior by different participants did not have a significant impact on their decision to follow the robot's instruction. We have provided possible explanations for our findings and identified new research questions that will help to understand the role of robot authority in human-robot interaction, and that can help to guide the design of robots that are required to provide advice and instructions.
© 2018 Association for Computing Machinery. One of the main shortcomings of the early AGM paradigm is its lack of any guidelines for iterated revision; it formalizes only one-step rational belief revision. Darwiche and Pearl, subsequently, addressed this problem by introducing four additional postulates (the DP postulates), supplementing the AGM ones. Despite the popularity of the DP approach, there are still controversies surrounding the DP postulates. In this article, we prove a conflict between each one of the latter, and one of the most popular and intuitive 'off the shelf' revision functions, that is Dalal's revision operator.
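Dalal's operator, the 'off the shelf' revision function this article tests against the DP postulates, has a compact standard definition: the revised models are the models of the new sentence at minimal Hamming distance from the current belief models. The sketch below uses that standard definition with toy worlds encoded as frozensets of true atoms.

```python
def hamming(w1, w2):
    """Hamming distance between two worlds = atoms they disagree on."""
    return len(w1 ^ w2)   # size of the symmetric difference

def dalal_revise(belief_models, new_models):
    """Dalal revision: keep the models of the new sentence closest
    (in Hamming distance) to some model of the current beliefs."""
    dist = {w: min(hamming(w, b) for b in belief_models) for w in new_models}
    best = min(dist.values())
    return {w for w in new_models if dist[w] == best}

beliefs = {frozenset({"p", "q"})}                  # K believes p and q
new = {frozenset({"r"}), frozenset({"p", "r"})}    # models of the new input
result = dalal_revise(beliefs, new)                # -> {frozenset({'p', 'r'})}
```

Because Dalal's operator is defined world-by-world like this, one-step behaviour is fixed, which is what allows the article to probe how it interacts with each iterated-revision (DP) postulate.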
Ojha, S, Vitale, J, Raza, SA, Billingsley, R & Williams, MA 2018, 'Implementing the Dynamic Role of Mood and Personality in Emotion Processing of Cognitive Agents', Sixth Annual Conference on Advances in Cognitive Systems, Annual Conference on Advances in Cognitive Systems, Stanford, California.
Tonkin, M, Vitale, J, Herse, S, Williams, MA, Judge, W & Wang, X 2018, 'Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport', ACM/IEEE International Conference on Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 407-415.
© 2018 ACM. Research in robotics and human-robot interaction is becoming more and more mature. Additionally, more affordable social robots are being released commercially. Thus, industry is currently demanding ideas for viable commercial applications to situate social robots in public spaces and enhance the customer experience. However, the present literature in human-robot interaction does not provide a clear set of guidelines and a methodology to (i) identify commercial applications for robotic platforms that place users' needs at the centre of the discussion and (ii) ensure the creation of a positive user experience. With this paper we propose to fill this gap by providing a methodology for the design of robotic applications with these desired features, suitable for adoption by researchers, industry, business and government organisations. As we show in this paper, we successfully employed this methodology in an exploratory field study involving the trial implementation of a commercially available social humanoid robot at an airport.
Vitale, J, Tonkin, M, Herse, S, Ojha, S, Clark, J, Williams, M, Wang, X & Judge, W 2018, 'Be More Transparent and Users Will Like You: A Robot Privacy and User Experience Design Experiment', Proceedings of 2018 ACM/IEEE International Conference on Human-Robot Interaction, International Conference on Human-Robot Interaction, ACM, Chicago, IL, USA, pp. 379-387.
Herse, S, Vitale, J, Ebrahimian, D, Tonkin, M, Ojha, S, Sidra, S, Johnston, B, Phillips, S, Gudi, SLKC, Clark, J, Judge, W & Williams, MA 2018, 'Bon Appetit! Robot Persuasion for Food Recommendation', ACM/IEEE International Conference on Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction, ACM, Chicago, USA, pp. 125-126.
© 2018 Authors. The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when using a specific recommendation sentence. These preliminary results suggest that human-like features of an agent may serve to boost persuasion in recommendation systems. However, the extent of the effect is determined by the nature of the given recommendation.
Herse, S, Vitale, J, Tonkin, M, Ebrahimian, D, Ojha, S, Johnston, B, Judge, W & Williams, MA 2018, 'Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System', RO-MAN 2018 The 27th IEEE International Symposium on Robot and Human Interactive Communication, IEEE International Symposium on Robot and Human Interactive Communication, IEEE, China, pp. 7-14.
© 2018 IEEE. When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a nonoptimal choice to the user. While no effect of embodiment is found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.
Krishna Chand Gudi, SL, Ojha, S, Johnston, B, Clark, J & Williams, MA 2018, 'Fog robotics for efficient, fluent and robust human-robot interaction', NCA 2018 - 2018 IEEE 17th International Symposium on Network Computing and Applications, International Symposium on Network Computing and Applications, IEEE, Cambridge, MA, USA.
© 2018 IEEE. Active communication between robots and humans is essential for effective human-robot interaction. To accomplish this objective, Cloud Robotics (CR) was introduced to enhance robots' capabilities. It enables robots to perform extensive computations in the cloud and to share the outcomes, which include maps, images, processing power, data, activities, and other robot resources. But due to the colossal growth of data and traffic, CR suffers from serious latency issues. It is therefore unlikely to scale to large numbers of robots, particularly in human-robot interaction scenarios, where responsiveness is paramount. Furthermore, security issues such as privacy breaches and ransomware attacks can increase. To address these problems, in this paper we envision the next generation of social robotic architectures, based on Fog Robotics (FR), that inherit the strengths of Fog Computing to augment future social robotic systems. These new architectures can increase the dexterity of robots by moving data closer to the robot, and can make human-robot interaction more responsive by resolving the problems of CR. Experimental results are discussed for an FR scenario, with latency as the primary factor of comparison against CR models.
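The latency argument can be illustrated with a toy offloading policy. This is a minimal sketch under assumed numbers, not the architecture from the paper: all function names, link parameters, and the 200 ms interaction deadline are hypothetical.

```python
# Illustrative sketch only: route a robot's computation to a fog node when the
# cloud's estimated round trip would miss an interaction deadline.
# All names and numbers below are hypothetical, not from the paper.

def estimate_latency_ms(network_rtt_ms: float, payload_kb: float,
                        bandwidth_kbps: float, compute_ms: float) -> float:
    """Round trip = network RTT + payload transfer time + remote compute time."""
    transfer_ms = (payload_kb * 8.0 / bandwidth_kbps) * 1000.0
    return network_rtt_ms + transfer_ms + compute_ms

def choose_target(deadline_ms: float, fog: dict, cloud: dict) -> str:
    """Prefer the cloud (more compute) unless it would miss the deadline;
    fall back to the fog node, then to on-board processing."""
    if estimate_latency_ms(**cloud) <= deadline_ms:
        return "cloud"
    if estimate_latency_ms(**fog) <= deadline_ms:
        return "fog"
    return "local"

# A responsive HRI loop might demand a reply within roughly 200 ms.
fog_link = {"network_rtt_ms": 10, "payload_kb": 500,
            "bandwidth_kbps": 50000, "compute_ms": 80}
cloud_link = {"network_rtt_ms": 120, "payload_kb": 500,
              "bandwidth_kbps": 20000, "compute_ms": 40}
print(choose_target(200.0, fog_link, cloud_link))  # fog: cloud misses the deadline
```

With these assumed numbers the cloud round trip (~360 ms) exceeds the deadline while the fog node (~170 ms) meets it, which is the scaling argument the abstract makes in miniature.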
Madhisetty, S & Williams, MA 2017, 'Framework for privacy in photos and videos when using social media', ICEIS 2017 - Proceedings of the 19th International Conference on Enterprise Information Systems, International Conference on Enterprise Information Systems, Science and Technology Publications, Porto, Portugal, pp. 331-336.
Privacy is a social construct. Having said that, how can it be contextualised and studied scientifically? This research contributes by investigating how to better manage privacy in the context of sharing and storing photos and videos using social media. Social media such as Facebook, Twitter, WhatsApp and many other applications are becoming popular, and the instant sharing of tacit information via photos and videos makes the problem of privacy even more critical. The main problem is that nobody can define the actual meaning of privacy. Though there are definitions of privacy, and Acts to protect it, there is no clear consensus as to what it actually means. I asked myself: how do I manage something when I don't know exactly what it means? I then decided to conduct this research by asking questions about privacy in particular categories of photos, so that I could arrive at a general consensus. The data has been processed using the principles of Grounded Theory (GT) to develop a framework that assists in the effective management of privacy in photos and videos.
Agrawal, S & Williams, MA 2017, 'Robot authority and human obedience: A study of human behaviour using a robot security guard', ACM/IEEE International Conference on Human-Robot Interaction, 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, pp. 57-58.
© 2017 Authors. There has been much debate, many sci-fi movie scenes, and several scientific studies exploring the concept of robot authority. Key research questions include: when should humans follow or question robot instructions, and how can a robot increase its ability to convince humans to follow its instructions or to change their behaviour? In this paper, we describe a recent experiment designed to explore the notions of robot authority and human obedience. We set up a robot in a publicly accessible building to act as a security guard that issued instructions to specific humans. We identified and analysed the factors that affected a human's decision to follow the robot's instructions. The key factors were: perceived aggression, responsiveness, anthropomorphism, and the level of safety and intelligence in the robot's behaviour. We implemented various social cues to exhibit and convey authority and aggressiveness in the robot's behaviour. The results suggest that the degree of aggression that different people perceived in the robot's behaviour did not have a significant impact on their decision to follow the robot's instructions. However, the people who disobeyed the robot perceived its behaviour to be more unsafe and less human-like than those who followed its instructions, and also found the robot to be more responsive.
Anshar, M & Williams, M-A 2017, 'Evolving Artificial Pain from Fault Detection through Pattern Data Analysis', 2017 IEEE International Conference on Real-time Computing and Robotics (RCAR), IEEE International Conference on Real-time Computing and Robotics, IEEE, Okinawa, Japan, pp. 694-699.
Fault detection is a classical area of study in robotics, mainly considered as a stimulus in robot motion planning. Earlier studies provide the foundation for the importance of incorporating failure detection into robot planning mechanisms. Various aspects of robot motion planning have been investigated in the literature, with extensions to multiple-robot planning. All of these studies assume that the robot is fully functional. In practice, however, robots fail, and their failure can affect not only their plans but also put resources and people at risk. As the use of robots grows, such as in human-robot interaction or robot-to-robot interaction, a new and growing field of research, robots are required to develop more sophisticated social skills. Understanding the concept of pain in humans is critical for planning and tasks that require human-robot interaction. However, this raises the issue of how a robot can develop a proper concept of pain, which relies on the robot being aware of aspects of its own body machinery. Prior work proposes a damage recovery approach in which robots are aware of their body hardware failures, such as one or more robot joints malfunctioning; that study reports that the robot successfully discovers new qualitative hexapod gait behaviours. However, when this type of fault is detected, it merely functions as a stimulus to activate new motion plan or motion behaviour generation. Unlike in robots, in humans any failure of the body machinery generates internal states through which humans experience what is called 'pain'. In fact, if faults in robots are associated not only with a stimulus but also with a specific meaningful magnitude, it will be beneficial for robots to incorporate them as part of their experience. This paper intends to translate machinery faults of robots, detected from proprioceptive sensors, into an appropriate representation of pain by introducing an artifi...
Aravanis, T, Peppas, P & Williams, MA 2017, 'Epistemic-entrenchment characterization of Parikh's axiom', IJCAI International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 772-778.
In this article, we provide the epistemic-entrenchment characterization of the weak version of Parikh's relevance-sensitive axiom for belief revision - known as axiom (P) - for the general case of incomplete theories. Loosely speaking, axiom (P) states that, if a belief set K can be divided into two disjoint compartments, and the new information φ relates only to the first compartment, then the second compartment should not be affected by the revision of K by φ. The above-mentioned characterization essentially constitutes additional constraints on epistemic-entrenchment preorders that induce AGM revision functions satisfying the weak version of Parikh's axiom (P).
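One common reading of the weak version of axiom (P), paraphrasing the compartment intuition above (notation ours, so treat it as a sketch rather than the paper's exact formulation), is:

```latex
% Weak version of Parikh's axiom (P) (one standard reading; notation ours).
% L(x) denotes the propositional sublanguage built from the atoms occurring in x.
\[
  \text{(P)}:\quad
  \text{if } K = \mathrm{Cn}(x \wedge y),\;
  \mathcal{L}(x) \cap \mathcal{L}(y) = \varnothing,\;
  \text{and } \varphi \in \mathcal{L}(x),
  \text{ then } K \ast \varphi = \mathrm{Cn}(x' \wedge y)
  \text{ for some } x' \in \mathcal{L}(x).
\]
```

That is, the compartment y, which φ does not touch, survives the revision intact.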
Billingsley, R, Billingsley, J, Gärdenfors, P, Peppas, P, Prade, H, Skillicorn, D & Williams, MA 2017, 'The altruistic robot: Do what I want, not just what I say', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Scalable Uncertainty Management, Granada, Spain, pp. 149-162.
© Springer International Publishing AG 2017. As autonomous robots expand their application beyond research labs and production lines, they must work in more flexible and less well defined environments. To escape the requirement for exhaustive instruction and stipulated preference ordering, a robot's operation must involve choices between alternative actions, guided by goals. We describe a robot that learns these goals from humans by considering the timeliness and context of instructions and rewards as evidence of the contours and gradients of an unknown human utility function. In turn, this underlies a choice-theory based rational preference relationship. We examine how the timing of requests, and contexts in which they arise, can lead to actions that pre-empt requests using methods we term contemporaneous entropy learning and context sensitive learning. We provide experiments on these two methods to demonstrate their usefulness in guiding a robot's actions.
Billingsley, R, Prade, H, Richard, G & Williams, MA 2017, 'Towards analogy-based decision - A proposal', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12th International Conference on Flexible Query Answering Systems, London, UK, pp. 28-35.
© Springer International Publishing AG 2017. This short paper outlines an analogy-based decision method. It takes advantage of analogical proportions between situations, i.e., a is to b as c is to d, for proposing plausibly good decisions that may be appropriate for a new situation at hand. It goes beyond case-based decision where the idea of graded similarity may hide some small but crucial differences between situations. The method relies on triples of known cases rather than on individual cases for making a prediction on the appropriateness of a potential decision, or for proposing a way of adapting a decision according to situations. The approach may be of interest in a variety of problems ranging from flexible querying systems to cooperative artificial agents.
Novianto, R & Williams, MA 2017, 'Emotion in robot decision making', Advances in Intelligent Systems and Computing, International Conference on Robot Intelligence Technology and Applications, Springer, Bucheon, Korea, pp. 221-232.
© Springer International Publishing Switzerland 2017. Social robots are expected to behave in a socially acceptable manner. They have to accommodate emotions in their decision-making when dealing with people in social environments. In this paper, we present a novel emotion mechanism that influences decision making and behaviors through attention. We describe its implementation in a cognitive architecture and demonstrate its capability in a robot companion experiment. Results show that the robot can successfully bias its behaviors in order to make users happy. Our proposed emotion mechanism can be used in social robots to predict emotions and bias behaviors in order to improve their performance.
Ojha, S & Williams, M-A 2017, 'Emotional Appraisal: A Computational Perspective', Website proceedings of the Fifth Annual Conference on Advances in Cognitive Systems, Fifth Annual Conference on Advances in Cognitive Systems, ACS, Troy, USA, pp. 1-15.
Research on computational modelling of emotions has received significant attention in the last few decades. As such, several computational models of emotions have been proposed, providing unprecedented insight into the implications of the emotion theories emerging from cognitive psychology. Yet the existing computational models of emotion have distinct limitations, namely: (i) low replicability - it is difficult to implement a given computational model from its description; (ii) domain dependence - a model is applicable only in one or more predefined scenarios or domains; (iii) low scalability and integrability - it is difficult to use the system in larger or different domains, and difficult to integrate the model into a wide range of other intelligent systems. In this paper, we propose a completely domain-independent mathematical representation for computational modelling of emotion that provides better replicability and integrability. The implementation of our model is inspired by appraisal theory - an emotion theory which assumes that emotions result from the cognitive evaluation of a situation.
Ojha, S, Vitale, J & Williams, M-A 2017, 'A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 2833-2838.
According to cognitive appraisal theory, emotion in an individual is the result of how a situation or event is evaluated by the individual. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal. In fact, it requires providing a knowledge base that is particularly difficult to establish and manage, especially in systems designed for highly complex scenarios, such as social robots. In addition, according to appraisal theory, an individual might elicit more than one emotion at a time in reaction to an event. Hence, determining which emotional state should be attributed in relation to a specific event is another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, we propose that this process can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating extensibility and integrability of the emotional system; (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. Therefore, this contribution is particularly significant in providing a simpler and more generalisable explanation of cognitive appraisal theory, and in promoting the integration between theories of emotion and ethics studies, currently often neglected by the available literature.
Raza, SA & Williams, M-A 2017, 'Potential Based Reward Shaping Using Learning to Rank', Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ACM, pp. 261-262.
Raza, SA & Williams, M-A 2017, 'Unconventional Formats of Background Knowledge from Human Teacher in Reward Shaping', Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ACM, pp. 373-374.
Tonkin, M, Vitale, J, Ojha, S, Clark, J, Pfeiffer, S, Judge, W, Wang, X & Williams, M 2017, 'Embodiment, Privacy and Social Robots: May I Remember You?', Social Robotics: 9th International Conference, ICSR 2017, International Conference on Social Robotics, Springer International Publishing, Tsukuba, Japan, pp. 506-515.
As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. The actual social interaction between human and robot may involve anthropomorphising of the robot by the user, and this may prompt the user to disclose private or sensitive information. To comprehend the possible impacts, we conducted an exploratory study with a novel privacy measure to understand changes to users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.
Tonkin, M, Vitale, J, Ojha, S, Williams, M-A, Fuller, P, Judge, W & Wang, X 2017, 'Would You Like to Sample? Robot Engagement in a Shopping Centre', 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), International Symposium on Robot and Human Interactive Communication, IEEE, Lisbon, Portugal, pp. 42-49.
Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited portion of the research literature explores robot applications in public spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. Therefore, in this paper we conduct an experiment to test a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that the robot is able to engage customers similarly to a human, as expected. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. We investigate this phenomenon and offer possible explanations to be explored and tested in subsequent research.
Vitale, J, Johnston, B & Williams, MA 2017, 'Facial Motor Information is Sufficient for Identity Recognition', Proceedings of the 39th Annual Meeting of the Cognitive Science Society, The 39th Annual Meeting of the Cognitive Science Society, Cognitive Science Society, London, pp. 3447-3452.
The face is a central communication channel, providing information about the identities of our interaction partners and their potential mental states as expressed by motor configurations. Although it is well known that infants' ability to recognise people follows a developmental process, it remains an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing interact during this developmental process. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development would be sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space able to assist facial identity and expression recognition. We validate our hypothesis through computational simulations, and we suggest potential implications of this understanding with respect to the available findings in face processing.
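The face-space idea can be sketched computationally. This is a hypothetical toy, not the authors' simulation: the motor-configuration coordinates, identities, and exemplar values are all made up for illustration.

```python
# Illustrative sketch (not the paper's model): each observed face is a point
# whose coordinates are facial motor-configuration measurements (e.g. brow
# raise, lip-corner pull). Identity recognition is nearest-neighbour search
# over the stored points of this "face-space". All values are hypothetical.
import math

face_space = {
    # identity -> list of observed motor-configuration vectors
    "alice": [(0.9, 0.1, 0.3), (0.8, 0.2, 0.35)],
    "bob":   [(0.2, 0.7, 0.6), (0.25, 0.75, 0.5)],
}

def recognise(stimulus):
    """Return the identity whose stored exemplars lie closest to the stimulus."""
    best_id, best_d = None, float("inf")
    for identity, exemplars in face_space.items():
        d = min(math.dist(stimulus, e) for e in exemplars)  # Euclidean distance
        if d < best_d:
            best_id, best_d = identity, d
    return best_id

print(recognise((0.85, 0.15, 0.3)))  # closest to alice's exemplars
```

The point of the sketch is only that motor information alone, stored as coordinates, suffices for a nearest-neighbour identity decision, which mirrors the abstract's hypothesis in its simplest form.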
van Rijmenam, M, Erekhinskaya, T, Schweitzer, J & Williams, MA 2017, 'How Big Data Analytics Changes the Practice of Strategy When Navigating Times of Disruptive Innovation', Transforming Entrepreneurial Thinking into Dynamic Capabilities, Strategic Management Society (SMS) Special Conference, Banff, Alberta, Canada.
van Rijmenam, M, Schweitzer, J & Williams, MA 2017, 'A Distributed Future: How Blockchain Affects Strategic Management, Organisation Design & Governance', At the Interface, Academy of Management 2017 Annual Meeting, Academy of Management, Atlanta, Georgia.
Blockchain is a new technology that transforms strategic management, organisational design and governance due to its decentralised and distributed characteristics. It is a database technology, a distributed ledger, that records and maintains indefinitely an ever-growing list of data records which cannot be altered or tampered with. The use of smart contracts on blockchains affects strategic management, as the process of developing, executing and evaluating decisions becomes automated and irreversible. This will result in new, disruptive organisation designs, including that of the Decentralised Autonomous Organisation (DAO): an organisation that establishes governance without managers or employees, run completely by autonomous computer software, where trust among actors is established cryptographically. However, organisations that want to move to the blockchain face numerous business and technical challenges. In this conceptual paper, we provide an overview of the blockchain and how it affects strategic management, changes organisational design and requires a new form of corporate governance.
Ojha, S & Williams, MA 2016, 'Ethically-Guided Emotional Responses for Social Robots: Should I Be Angry?', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer-Verlag Berlin, Kansas City, USA.
Emotions play a critical role in human-robot interaction. Human-robot interaction in social contexts will be more effective if robots can understand human emotions and express (display) emotions accordingly as a means to communicate their own internal state. In this paper we present a novel computational model of robot emotion generation based on appraisal theory and guided by ethical judgement. There have been recent advances in developing emotion for robots. However, despite the extensive research on robot emotion, it is difficult to say whether a particular robot is exhibiting appropriate emotions, or even to show that it can empathise with humans by exhibiting emotions similar to theirs in the same situation. A key question is: to what extent should a robot direct anger toward a young child or an elderly person for an act that would warrant anger toward an ordinary adult, in order to signal danger or stupidity? Recognising the need for an ethically guided approach to emotion expression in social robots as they interact with people, we present a novel Ethical Emotion Generation System (EEGS) for the expression of the most acceptable emotions in social robots.
Peppas, P & Williams, MA 2016, 'Kinetic consistency and relevance in belief revision', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Logics in Artificial Intelligence, European Conference, Springer, Larnaca, Cyprus, pp. 401-414.
© Springer International Publishing AG 2016. A critical aspect of rational belief revision that has been neglected by the classical AGM framework is what we call the principle of kinetic consistency. Loosely speaking, this principle dictates that the revision policies employed by a rational agent at different belief sets are not independent, but ought to be related in a certain way. We formalise kinetic consistency axiomatically and semantically, and we establish a representation result explicitly connecting the two. We then combine the postulates for kinetic consistency with Parikh's postulate for relevant change, and add them to the classical AGM postulates for revision; we call this augmented set the extended AGM postulates. We prove the consistency and demonstrate the scope of the extended AGM postulates by showing that a whole new class of concrete revision operators introduced herein, called PD operators, satisfies all extended AGM postulates. PD operators are of interest in their own right, as they are natural generalisations of Dalal's revision operator. We conclude the paper with some examples illustrating the strength of the extended AGM postulates, even for iterated revision scenarios.
Williams, MA 2016, 'Decision-theoretic human-robot interaction: Designing reasonable and rational robot behavior', Social Robotics (LNCS), International Conference on Social Robotics (ICSR), Springer, Kansas City, USA, pp. 72-82.
© Springer International Publishing AG 2016. Autonomous robots are moving out of research labs and factory cages into public spaces: people's homes, workplaces, and lives. A key design challenge in this migration is how to build autonomous robots that people want to use and can safely collaborate with in undertaking complex tasks. In order for people to work closely and productively with robots, robots must behave in a way that people can predict and anticipate. Robots choose their next action using the classical sense-think-act processing cycle, and roboticists design the actions and action-choice mechanisms for robots. This design process determines robot behaviors, and how well people are able to interact with the robot. Crafting how a robot will choose its next action is critical in designing social robots for interaction and collaboration. This paper identifies reasonableness and rationality, two key concepts well known in Choice Theory, that can be used to guide the robot design process so that the resulting robot behaviors are easier for humans to predict and, as a result, more enjoyable to interact and collaborate with. Designers can use the notions of reasonableness and rationality to design action selection mechanisms that achieve better robot designs for human-robot interaction. We show how Choice Theory can be used to prove that specific robot behaviors are reasonable and/or rational, thus providing a formal, useful and powerful design guide for developing robot behaviors that people find more intuitive, predictable and fun, resulting in more reliable and safe human-robot interaction and collaboration.
Abidi, S, Piccardi, M & Williams, M 2016, 'Static Action Recognition by Efficient Greedy Inference', Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision, IEEE Winter Conference on Applications of Computer Vision, IEEE, Lake Placid, NY, USA, pp. 1-8.
Action recognition from a single image is an important task for applications such as image annotation, robotic navigation, video surveillance and several others. Existing methods for recognizing actions from still images mainly rely on either bag-of-feature representations or pose estimation from articulated body-part models. However, the relationship between the action and the containing image is still substantially unexplored. Actually, the presence of given objects or specific backgrounds is likely to provide informative clues for the recognition of the action. For this reason, in this paper we propose approaching action recognition by first partitioning the entire image into superpixels, and then using their latent classes as attributes of the action. The action class is predicted based on a graphical model composed of measurements from each superpixel and a fully-connected graph of superpixel classes. The model is learned using a latent structural SVM approach, and an efficient, greedy algorithm is proposed to provide inference over the graph. Differently from most existing methods, the proposed approach does not require annotation of the actor (usually provided as a bounding box). Experimental results over the challenging Stanford 40 Action dataset have reported an impressive mean average precision of 72.3%, the highest achieved to date.
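The greedy inference step can be illustrated in miniature. This is a simplified sketch: the paper's model uses scores learned with a latent structural SVM, which we replace here with hand-made toy tables, so the data and all names are hypothetical.

```python
# Illustrative sketch of greedy inference over a fully-connected graph of
# superpixel latent classes: initialise each superpixel at its best unary
# class, then greedily re-label while the summed unary + pairwise score improves.
def greedy_inference(unary, pairwise, max_sweeps=10):
    """unary[i][c]: score of class c for superpixel i.
    pairwise[a][b]: compatibility score of classes a and b co-occurring."""
    n, k = len(unary), len(unary[0])
    # Start from the best unary class for each superpixel.
    labels = [max(range(k), key=lambda c: unary[i][c]) for i in range(n)]

    def local_score(i, c):
        # Score of assigning class c to superpixel i, given all other labels.
        return unary[i][c] + sum(pairwise[c][labels[j]] for j in range(n) if j != i)

    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            best = max(range(k), key=lambda c: local_score(i, c))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:  # converged: no single re-labelling improves the score
            break
    return labels

# Toy example with two classes: the pairwise table rewards agreement, so the
# lone class-1 superpixel is pulled over to the majority class.
toy_unary = [[1.0, 0.0], [0.6, 0.5], [0.0, 1.0]]
toy_pairwise = [[1.0, 0.0], [0.0, 1.0]]
print(greedy_inference(toy_unary, toy_pairwise))  # → [0, 0, 0]
```

Greedy coordinate ascent of this kind is not guaranteed to find the global optimum of the fully-connected model, which is why the paper pairs it with learned scores; the sketch shows only the mechanics of the inference loop.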
Romat, H, Williams, M-A, Wang, X, Johnston, B, Bard, H & ACM 2016, 'Natural Human-Robot Interaction Using Social Cues', Proceedings of the 11th ACM/IEEE International Conference on Human Robot Interaction, ACM/IEEE International Conference on Human Robot Interaction (HRI), IEEE, Christchurch, New Zealand, pp. 503-504.View/Download from: UTS OPUS or Publisher's site
This paper investigates the problem of how humans understand and control human-robot collaborative action and how to build natural interactions during human-robot collaborative action. We use a "pick and place" experiment to study collaborative activities between a human and a robot. The results show that even if human participants had a good understanding of the maximum reachability of the robot, they consistently take a surprisingly long time to help and assist the robot when a target object is out of its reach. We implemented a number of social cues in the experiment, analysed their effects in order to identify the role they could play to improve the fluency of human-robot collaboration. The experimental results showed that when the robot uses head movements, two hands or a gesture to indicate non-reachability, people react in a more natural way to assist the robot.
Vitale, J, Williams, M-A & Johnston, B 2016, 'The face-space duality hypothesis: a computational model', Proceedings of the 38th Annual Conference of the Cognitive Science Society, Annual Conference of the Cognitive Science Society, Cognitive Science Society, Philadelphia, pp. 514-519.View/Download from: UTS OPUS
Ramezani, N & Williams, M 2015, 'Smooth robot motion with an Optimal Redundancy Resolution for PR2 robot based on an analytic inverse kinematic solution', Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), IEEE-RAS International Conference on Humanoid Robots, IEEE, Seoul, Korea, pp. 338-345.View/Download from: Publisher's site
Current support for PR2 humanoid arm motion planning does not provide a redundancy resolution scheme, and this can cause excessive movements and discontinuities in the generated trajectories. We provide an innovative solution and implementation in ROS that can be used to deliver smooth motions that people find intuitive and easy to predict. This paper provides a comprehensive analysis of the kinematics of the PR2 humanoid robot arm, including an analytic Inverse Kinematics solution and an Optimal Redundancy Resolution scheme. First, a closed-form IK computation method is introduced for the PR2 arms, providing all feasible solutions in global joint space for a given value of the first joint angle as a redundant parameter. Then, a redundancy optimization technique is customized and formulated based on a desired objective function that finds optimal values for the redundant parameter. The proposed technique computes robot motion plans more effectively, so that the robot behaviors are expected to be more reliable. The technique has been implemented successfully on the PR2 for a handwriting task.
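The redundancy-resolution idea in this abstract can be sketched in a few lines: sweep the redundant parameter, solve the (here fabricated) analytic IK for each value, and keep the value that minimises an objective. The `ik_for` function and the joint-limit-avoidance objective below are illustrative placeholders, not the paper's actual PR2 solution.

```python
import numpy as np

# Toy stand-in for an analytic IK: each value of the redundant parameter
# phi yields one joint configuration (a fabricated 3-DOF example).
def ik_for(phi):
    return np.array([phi, np.sin(phi) - 0.5, 0.3 * phi + 0.2])

JOINT_MID = np.zeros(3)  # assumed mid-range of each joint

def resolve_redundancy(phis):
    """Pick the phi whose IK solution stays closest to joint mid-range."""
    best_phi, best_cost = None, np.inf
    for phi in phis:
        q = ik_for(phi)
        cost = np.sum((q - JOINT_MID) ** 2)  # joint-limit-avoidance objective
        if cost < best_cost:
            best_phi, best_cost = phi, cost
    return best_phi, best_cost

# Grid-search the redundant parameter over its assumed range.
phi_star, cost = resolve_redundancy(np.linspace(-1.0, 1.0, 201))
```

In a real redundancy-resolution scheme the objective would typically also penalise distance from the previous configuration, which is what yields the smooth, continuous trajectories the abstract describes.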
Novianto, R & Williams, M-A 2014, 'Operant Conditioning in ASMO Cognitive Architecture', BICA 2014. 5th Annual International Conference on Biologically Inspired Cognitive Architectures, Biologically Inspired Cognitive Architecture, Elsevier, Massachusetts Institute of Technology, Cambridge, MA, USA, pp. 404-411.View/Download from: UTS OPUS
Peppas, P & Williams, M 2014, 'Belief Change and Semiorders', http://www.aaai.org/Press/Proceedings/kr14.php, Principles of Knowledge Representation and Reasoning, AAAI, Vienna.View/Download from: UTS OPUS
A central result in the AGM framework for belief revision is the construction of revision functions in terms of total preorders on possible worlds. These preorders encode comparative plausibility: r ≼ r′ states that the world r is at least as plausible as r′. Indifference in the plausibility of two worlds r, r′, denoted r ≈ r′, is defined as r ⊀ r′ and r′ ⊀ r. Herein we take a closer look at plausibility indifference. We contend that the transitivity of indifference assumed in the AGM framework is not always a desirable property for comparative plausibility. Our argument originates from similar concerns in preference modelling, where a structure weaker than a total preorder, called a semiorder, is widely considered to be a more adequate model of preference. In this paper we essentially re-construct revision functions using semiorders instead of total preorders. We formulate postulates to characterise this new, wider, class of revision functions, and prove that the postulates are sound and complete with respect to the semiorder-based construction. The corresponding class of contraction functions (via the Levi and Harper Identities) is also characterised.
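For orientation, the standard semiorder axioms (after Luce; stated here for background, not quoted from the paper) constrain only the strict relation ≺ and thereby permit intransitive indifference:

```latex
% Standard semiorder axioms on the strict relation \prec
% (after Luce 1956; background material, not taken from the paper).
\begin{align*}
\text{(S1)}\quad & r \not\prec r
  && \text{(irreflexivity)}\\
\text{(S2)}\quad & (r_1 \prec r_2 \wedge r_3 \prec r_4)
  \Rightarrow (r_1 \prec r_4 \vee r_3 \prec r_2)\\
\text{(S3)}\quad & (r_1 \prec r_2 \wedge r_2 \prec r_3)
  \Rightarrow (r_1 \prec r_4 \vee r_4 \prec r_3)
\end{align*}
```

Indifference (neither r ≺ r′ nor r′ ≺ r) then need not be transitive, which is precisely the relaxation of the AGM total-preorder assumption that the abstract exploits.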
Williams, M & Peppas, P 2014, 'Constructive models for contraction with intransitive plausibility indifference', Logics in Artificial Intelligence - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Logics in Artificial Intelligence, European Conference, Springer Verlag, Madeira, Portugal, pp. 355-367.View/Download from: UTS OPUS or Publisher's site
Plausibility rankings play a central role in modeling Belief Change, and they take different forms depending on the type of belief change under consideration: preorders on possible worlds, epistemic entrenchments, etc. A common feature of all these structures is that plausibility indifference is assumed to be transitive. In a previous article, , we argued that this is not always the case, and we introduced new sets of postulates for revision and contraction (weaker variants of the classical AGM postulates) that are liberated from the indifference-transitivity assumption. Herein we complete the task by making the necessary adjustments to the epistemic entrenchment and partial meet models. In particular, we lift the indifference-transitivity assumption from both models, and we establish representation results connecting the weaker models with the weaker postulates for contraction introduced in .
Novianto, R, Williams, M-A, Gärdenfors, P & Wightwick, G 2014, 'Classical conditioning in social robots', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 279-289.View/Download from: UTS OPUS or Publisher's site
Classical conditioning is important in humans to learn and predict events in terms of associations between stimuli and to produce responses based on these associations. Social robots that have a classical conditioning skill like humans will have an advantage to interact with people more naturally, socially and effectively. In this paper, we present a novel classical conditioning mechanism and describe its implementation in ASMO cognitive architecture. The capability of this mechanism is demonstrated in the Smokey robot companion experiment. Results show that Smokey can associate stimuli and predict events in its surroundings. ASMO's classical conditioning mechanism can be used in social robots to adapt to the environment and to improve the robots' performances.
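The stimulus-association learning this abstract describes can be illustrated with a standard Rescorla-Wagner update (a textbook model of classical conditioning, used here only as a sketch; the ASMO mechanism itself may differ):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength V after a sequence of CS-US pairings.

    trials: booleans, True when the unconditioned stimulus (US) follows
    the conditioned stimulus (CS). alpha is the learning rate, lam the
    asymptotic strength; both values are illustrative.
    """
    v = 0.0
    for us_present in trials:
        target = lam if us_present else 0.0
        v += alpha * (target - v)  # error-driven update toward the target
    return v

# Repeated pairing drives the association toward the asymptote (acquisition);
# unpaired presentations then drive it back down (extinction).
v_acquired = rescorla_wagner([True] * 20)
v_extinct = rescorla_wagner([True] * 20 + [False] * 20)
```

A robot using such an association can predict an upcoming event (high V) and select a response before the event occurs, which is the behaviour demonstrated in the Smokey experiment.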
Vitale, J, Williams, M-A & Johnston, B 2014, 'Socially impaired robots: Human social disorders and robots' socio-emotional intelligence', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer Verlag, Sydney, Australia, pp. 350-359.View/Download from: UTS OPUS or Publisher's site
Social robots need intelligence in order to safely coexist and interact with humans. Robots that lack the functional ability to understand others and to empathise may pose a societal risk, and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.
Wang, X, Williams, M-A, Gardenfors, P, Vitale, J, Abidi, S, Johnston, B, Kuipers, B & Huang, A 2014, 'Directing human attention with pointing', Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, IEEE/RSJ International Symposium on Robot and Human Interactive Communication, IEEE, Edinburgh, Scotland, pp. 174-179.View/Download from: UTS OPUS or Publisher's site
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
Al-Sharawneh, JA, Sinnappan, S & Williams, M 2013, 'Credibility-based twitter social network analysis', Lecture Notes in Computer Science, Asia Pacific Web Conference, Springer, Sydney, Australia, pp. 323-331.View/Download from: UTS OPUS or Publisher's site
Social Network (SN) members in Twitter communicate in varied contexts such as crisis. Formations within social networks are unique as some members have more influence over other members; members with more influence are known as leaders or pioneers. Findi
Chen, S & Williams, M 2013, 'Grounding Privacy-by-Design for Information Systems', Pacific Asia Conference on Information Systems (PACIS), Pacific Asia Conference on Information Systems, AIS Electronic Library, Jeju Island, Korea.View/Download from: UTS OPUS
The Privacy-by-Design approach has gained increasing acceptance for privacy management in the privacy community. However, there is still a research gap in methodologies for implementing this approach and a need to develop frameworks and systems to support Privacy-by-Design practice. In an attempt to bridge this gap, this paper uncovers hidden issues of the Privacy-by-Design approach as a means to derive privacy requirements for implementing information systems with privacy embedded by
Abidi, SS, Williams, M & Johnston, BG 2013, 'Human pointing as a robot directive', ACM/IEEE International Conference on Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction, IEEE, Tokyo, Japan, pp. 67-68.View/Download from: Publisher's site
People are accustomed to directing other people's attention using pointing gestures. People enact and interpret pointing commands often and effortlessly. If robots understand human intentions (e.g. as encoded in pointing-gestures), they can reach higher
Felix Navarro, KM, Gay, VC, Golliard, L, Johnston, BG, Leijdekkers, P, Vaughan, EP, Wang, T & Williams, M 2013, 'SocialCycle What Can a Mobile App Do To Encourage Cycling', 38th IEEE Conference on Local Computer Networks (LCN 2013) and Workshops, IEEE Conference on Local Computer Networks, IEEE Computer Society, Sydney Australia, pp. 24-30.View/Download from: UTS OPUS or Publisher's site
Traffic congestion presents significant environmental, social and economic costs. Encouraging people to cycle and use other forms of alternate transportation is one important aspect of addressing these problems. However, many city councils face significant difficulties in educating citizens and encouraging them to form new habits around these alternate forms of transport. Mobile devices present a great opportunity to effect such positive behavior change. In this paper we discuss the results of a survey aimed at understanding how mobile devices can be used to encourage cycling and/or improve the cycling experience. We use the results of the survey to design and develop a mobile app called SocialCycle, whose purpose is to encourage users to start cycling and to increase the number of trips that existing riders take by bicycle.
Novianto, R, Johnston, BG & Williams, M 2013, 'Habituation and sensitisation learning in ASMO cognitive architecture', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer International Publishing, Bristol, United Kingdom, pp. 249-259.View/Download from: UTS OPUS or Publisher's site
As social robots are designed to interact with humans in unstructured environments, they need to be aware of their surroundings, focus on significant events and ignore insignificant events in their environments. Humans demonstrate a good example of such adaptation, sensitising to significant events and habituating to insignificant ones. Inspired by human habituation and sensitisation, we develop novel habituation and sensitisation mechanisms and include them in the ASMO cognitive architecture. The capability of these mechanisms is demonstrated in the Smokey robot companion experiment. Results show that Smokey can be aware of its surroundings, focus on significant events and ignore insignificant events. ASMO's habituation and sensitisation mechanisms can be used in robots to adapt to the environment. They can also be used to modify the interaction of components in a cognitive architecture in order to improve agents' or robots' performances.
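The habituation/sensitisation dynamic the abstract describes can be sketched as a single attention-weight update: repeated insignificant events decay the weight, a significant event boosts it. The rates and update rule below are illustrative, not the parameters used in ASMO.

```python
def update_attention(weight, significant, hab_rate=0.2, sens_rate=0.5, ceiling=1.0):
    """One attention-weight update for a stimulus.

    Insignificant repetitions decay the weight (habituation);
    significant events push it toward the ceiling (sensitisation).
    Rates are placeholders for illustration only.
    """
    if significant:
        return min(ceiling, weight + sens_rate * (ceiling - weight))
    return weight * (1.0 - hab_rate)

w = 1.0
for _ in range(10):                 # ten insignificant repetitions: habituate
    w = update_attention(w, significant=False)
habituated = w
w = update_attention(w, significant=True)  # one significant event: sensitise
```

A stimulus whose weight has decayed below some threshold would simply lose the attention competition inside the architecture, which is how insignificant events end up ignored.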
Wang, W, Johnston, B & Williams, MA 2013, 'Recognition and representation of robot skills in real time: A theoretical analysis', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer, Bristol, UK, pp. 127-137.View/Download from: UTS OPUS or Publisher's site
Sharing reusable knowledge among robots has the potential to sustainably develop robot skills. The bottlenecks to sharing robot skills across a network are how to recognise and represent reusable robot skills in real-time and how to define reusable robot skills in a way that facilitates the recognition and representation challenge. In this paper, we first analyse the considerations to categorise reusable robot skills that manipulate objects derived from R.C. Schank's script representation of human basic motion, and define three types of reusable robot skills on the basis of the analysis. Then, we propose a method with potential to identify robot skills in real-time. We present a theoretical process of skills recognition during task performance. Finally, we characterise reusable robot skill based on new definitions and explain how the new proposed representation of robot skill is potentially advantageous over current state-of-the-art work. © Springer International Publishing 2013.
Williams, MA, Abidi, S, Gärdenfors, P, Wang, X, Kuipers, B & Johnston, B 2013, 'Interpreting robot pointing behavior', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Social Robotics (ICSR), Springer, Bristol, UK, pp. 148-159.View/Download from: UTS OPUS or Publisher's site
The ability to draw other agents' attention to objects and events is an important skill on the critical path to effective human-robot collaboration. People use the act of pointing to draw other people's attention to objects and events for a wide range of purposes. While there is significant work that aims to understand people's pointing behavior, there is little work analyzing how people interpret robot pointing. Since robots have a wide range of physical bodies and cognitive architectures, the interpretation of pointing will be determined by a specific robot's morphology and behavior. Humanoids and robots whose heads, torsos and arms resemble those of humans may be easier for people to interpret when they point; however, if such robots have different perceptual capabilities to people, then misinterpretation may occur. In this paper we investigate how ordinary people interpret the pointing behavior of a leading state-of-the-art service robot that has been designed to work closely with people. We tested three hypotheses about how robot pointing is interpreted. The most surprising finding was that the direction and pitch of the robot's head was important in some conditions. © Springer International Publishing 2013.
Raza, S, Haider, S & Williams, M 2013, 'Robot reasoning using first order bayesian networks', Lecture Notes in Computer Science, International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making, Springer, Beijing, China, pp. 1-12.View/Download from: UTS OPUS or Publisher's site
This study presents the application of first-order Bayesian Networks (FOBN) to model and reason in domains with complex relational and rich probabilistic structures. The FOBN framework used in this study is 'multi-entity Bayesian networks (MEBN). MEBN ha
Bogdanovych, A, Stanton, CJ, Wang, X & Williams, M 2012, 'Real-Time Human-Robot Interactive Coaching System with Full-Body Control Interface', Lecture Notes in Computer Science, RoboCup Symposium held in Conjunction with the RoboCup Competition, Springer, Istanbul, Turkey, pp. 562-573.View/Download from: UTS OPUS or Publisher's site
The ambitious goal being pursued by researchers participating in the RoboCup challenge is to develop a team of autonomous humanoid robots capable of winning against a team of human soccer players. An important step in this direction is to actively utilise human coaching to improve the skills of robots at both tactical and strategic levels. In this paper we explore the hypothesis that embedding a human into a robot's body and allowing the robot to learn tactical decisions by imitating the human coach can be more efficient than programming the robot explicitly. To enable this, we have developed a sophisticated HRI system that allows a human to interact with, coach and control an Aldebaran Nao robot through the use of a motion capture suit, portable computing devices (iPhone and iPad), and a head-mounted display (which allows the human controller to experience the robot's visual perception of the world). This paper describes the HRI-Coaching system we have developed, detailing the underlying technologies and lessons learned from using it to control the robot. The system in its current stage shows high potential for human-robot coaching, but requires further calibration and development to allow a robot to learn by imitating the human coach.
Data purpose is a central concept in modeling privacy requirements. Existing purpose-based approaches for privacy protection have mainly focused on access control. The problem of ensuring the consistency between data purpose and data usage has been under-addressed. In an attempt to bridge this research gap, we develop a grounded understanding of data purpose and relevant key concepts that is fundamental to address the problem. We propose a Minimum Action Permission Principle as a basic guideline to establish a path to solutions to the consistency problem.
Chen, S & Williams, M 2012, 'Information Makes A Difference For Privacy Design', PACIS 2012 PROCEEDINGS, Pacific Asia Conference on Information Systems, PACIS, Ho Chi Minh City, pp. 1-13.View/Download from: UTS OPUS
In the current information age, information can make a difference to all aspects of one's life: emotionally, ethically, financially or societally. Information privacy plays a key role in enabling a difference in many dimensions such as trust, respect, reputation, security, resources, ability and employment. The capability of information to make a difference to one's life is a fundamental factor, and the privacy status of information is a key factor driving this difference. Understanding the impact of these two factors on one's life within an IS context is an important research gap in the discipline. This paper studies information + privacy, ontologically and integrally, in making a difference to one's life within the IS context. In recognition of the importance of the Privacy-by-Design approach to IS development, a methodology is proposed to understand the grounds of information and model fundamental constructs for using the Privacy-by-Design approach to develop robust privacy-friendly information systems.
Williams, M 2012, 'Robot social intelligence', Lecture Notes in Computer Science, International Conference on Social Robotics (ICSR), Springer, Chengdu, China, pp. 45-55.View/Download from: UTS OPUS or Publisher's site
Robots are pervading human society today at an ever-accelerating rate, but in order to actualize their profound potential impact, robots will need cognitive capabilities that support the necessary social intelligence required to fluently engage with people and other robots. People are social agents and robots must develop sufficient social intelligence to engage with them effectively. Despite their enormous potential, robots will not be accepted in society unless they exhibit social intelligence skills. They cannot work with people effectively if they ignore the limitations, needs, expectations and vulnerability of people working in and around their workspaces. People are limited social agents, i.e. they do not have unlimited cognitive, computational and physical capabilities. People have limited ability in perceiving, paying attention, reacting to stimuli, anticipating, and problem-solving. In addition, people are constrained by their morphology; it limits their physical strength for example. People cannot be expected to and will not compensate for social deficiencies of robots, hence widespread acceptance and integration of robots into society will only be achieved if robots possess the sufficient social intelligence to communicate, interact and collaborate with people. In this paper we identify the key cognitive capabilities robots will require to achieve appropriate levels of social intelligence for safe and effective engagement with people. This work serves as a proto-blueprint that can inform the emerging roadmap and research agenda for the new exciting and challenging field of social robotics.
Agmon, N, Agrawal, V, Aha, DW, Aloimonos, Y, Buckley, D, Doshi, P, Geib, C, Grasso, F, Green, N, Johnston, B, Kaliski, B, Kiekintveld, C, Law, E, Lieberman, H, Mengshoel, OJ, Metzler, T, Modayil, J, Oard, DW, Onder, N, O'Sullivan, B, Pastra, K, Precup, D, Ramachandran, S, Reed, C, Sariel-Talay, S, Selker, T, Shastri, L, Singh, S, Smith, SF, Srivastava, S, Sukthankar, G, Uthus, DC & Williams, MA 2012, 'Reports of the AAAI 2011 conference workshops', AI Magazine, pp. 57-70.View/Download from: Publisher's site
The AAAI-11 workshop program was held Sunday and Monday, August 7-8, 2011, at the Hyatt Regency San Francisco in San Francisco, California USA. The AAAI-11 workshop program included 15 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were Activity Context Representation: Techniques and Languages; Analyzing Microtext; Applied Adversarial Reasoning and Risk Modeling; Artificial Intelligence and Smarter Living: The Conquest of Complexity; Artificial Intelligence for Data Center Management and Cloud Computing; Automated Action Planning for Autonomous Mobile Robots; Computational Models of Natural Argument; Generalized Planning; Human Computation; Human-Robot Interaction in Elder Care; Interactive Decision Theory and Game Theory, 2010; Language-Action Tools for Cognitive Artificial Agents: Integrating Vision, Action, and Language; Lifelong Learning from Sensorimotor Experience; Plan, Activity, and Intent Recognition; and Scalable Integration of Analytics and Visualization. This article presents short summaries of those events. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
Wang, W, Johnston, BG & Williams, M 2012, 'Social networking for robots to share knowledge, skills and know-how', International Conference on Social Robotics, International Conference on Social Robotics (ICSR), Springer, Chengdu, China, pp. 418-427.View/Download from: UTS OPUS or Publisher's site
A major bottleneck in robotics research and development is the difficulty and time required to develop and implement new skills for robots to realize task-independence. In spite of work done in terms of task model transfer among robots, so far little work has been done on how to make robots task-independent. In this paper, we describe our work-in-progress towards the development of a robot social network called Numbots that draws on the principle of sharing information in human social networking. We demonstrate how Numbots has the potential to assist knowledge sharing, know-how and skill transfer among robots to realize task-independence.
Haider, S, Abidi, SS & Williams, M 2012, 'On evolving a dynamic bipedal walk using Partial Fourier Series', 2012 IEEE International Conference on Robotics and Biomimetics, ROBIO 2012 - Conference Digest, IEEE International Conference on Robotics and Biomimetics, IEEE, Guangzhou, China, pp. 8-13.View/Download from: UTS OPUS or Publisher's site
The paper presents a Partial Fourier Series (PFS) based bipedal gait in the sagittal and transverse planes. The parameters of the Fourier series are optimized through Evolutionary Algorithms (EA). In addition to evolving the two walks (forward and turn) separately, the paper demonstrates how the combination of the two enables a dynamic and adjustable walk. The stability of the walk is ensured through an effective use of the built-in gyroscope sensor. The evolved walk has been tested on the simulated version of the humanoid Nao robot and is being used within the RoboCup Soccer 3D Simulation competition.
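A partial Fourier series gait parameterises each joint trajectory as a truncated Fourier sum whose coefficients the EA evolves. The sketch below shows the representation only; the coefficient values are placeholders, not evolved gait parameters.

```python
import math

def pfs_joint_angle(t, a0, coeffs, period=1.0):
    """Joint angle at time t from a partial Fourier series.

    coeffs: list of (a_k, b_k) pairs for harmonics k = 1..N; these are
    the quantities an evolutionary algorithm would optimise. The values
    used below are illustrative placeholders.
    """
    w = 2.0 * math.pi / period
    angle = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        angle += a_k * math.cos(k * w * t) + b_k * math.sin(k * w * t)
    return angle

# A two-harmonic trajectory sampled over one (periodic) gait cycle.
COEFFS = [(0.2, 0.0), (0.0, 0.05)]
traj = [pfs_joint_angle(t / 100.0, 0.1, COEFFS) for t in range(100)]
```

Because the series is periodic by construction, the trajectory repeats cleanly every gait cycle, which is what makes it a compact genome for evolving cyclic walks.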
Raza, S, Haider, S & Williams, M 2012, 'Teaching coordinated strategies to soccer robots via imitation', 2012 IEEE International Conference on Robotics and Biomimetics, ROBIO 2012 - Conference Digest, IEEE International Conference on Robotics and Biomimetics, IEEE, Guangzhou, China, pp. 1434-1439.View/Download from: UTS OPUS or Publisher's site
Developing coordination among multiple agents and enabling them to exhibit teamwork is a challenging yet exciting task that can benefit many complex real-life problems. This research uses imitation to learn collaborative strategies for a team of agents. Imitation-based learning involves learning from an expert by observing him/her demonstrate a task and then replicating it. The key idea is to involve multiple human experts during demonstration to teach autonomous agents how to work in coordination. The effectiveness of the proposed methodology has been assessed in a goal-defending scenario of the RoboCup Soccer Simulation 3D league. The process involves multiple human demonstrators controlling soccer agents via game controllers and showing them how to play soccer in coordination. The data gathered during this phase is used as training data to learn a classification model, which is later used by the soccer agents to make autonomous decisions during actual matches. Different performance evaluation metrics are derived to compare the performance of the imitating agent with that of the human-driven agent and a hand-coded (if-then-else rules) agent.
Stanton, CJ, Ratanasena, E, Haider, S & Williams, M 2012, 'Perceiving forces, bumps, and touches from proprioceptive expectations', Lecture Notes in Computer Science, Robot Soccer World Cup, Springer, Istanbul, Turkey, pp. 377-388.View/Download from: UTS OPUS or Publisher's site
We present a method for enabling an Aldebaran Nao humanoid robot to perceive bumps and touches caused by physical contact forces. Dedicated touch, tactile or force sensors are not used. Instead, our approach involves the robot learning from experience to generate a proprioceptive motor sensory expectation from recent motor position commands. Training involves collecting data from the robot characterised by the absence of the impacts we wish to detect, to establish an expectation of normal motor sensory experience. After learning, the perception of any unexpected force is achieved by comparing predicted motor sensor values with sensed motor values for each DOF on the robot. We demonstrate that our approach allows the robot to reliably detect small (and also large) impacts upon the robot (at each individual joint servo motor) with high, but also varying, degrees of sensitivity for different parts of the body. We discuss current and possible applications for robots that can develop and exploit proprioceptive expectations during physical interaction with the world.
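The detection step the abstract describes, comparing predicted and sensed motor values per DOF, reduces to a residual threshold test. A minimal sketch, assuming a per-joint noise level learned from contact-free training data (the threshold multiplier and all numbers below are illustrative):

```python
import numpy as np

def detect_contacts(predicted, sensed, sigma, k=3.0):
    """Flag joints whose sensed position deviates from the prediction.

    predicted/sensed: joint positions, one per DOF.
    sigma: per-joint std of the prediction error during contact-free
    motion, learned from training data. A residual beyond k*sigma is
    treated as an unexpected external force (bump or touch).
    """
    residual = np.abs(np.asarray(predicted) - np.asarray(sensed))
    return residual > k * np.asarray(sigma)

sigma = np.array([0.01, 0.02, 0.01])     # learned contact-free noise per joint
pred = np.array([0.50, -0.20, 1.10])     # proprioceptive expectation
sensed = np.array([0.505, -0.20, 1.30])  # joint 3 pushed well off prediction
flags = detect_contacts(pred, sensed, sigma)
```

Because sigma is learned per joint, sensitivity naturally varies across the body, matching the varying detection sensitivity the abstract reports.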
Al-Sharawneh, JA, Williams, M, Wang, X & Goldbaum, D 2011, 'Mitigating Risk in Web-Based Social Network Service Selection: Follow the Leader', The Sixth International Conference on Internet and Web Applications and Services (ICIW 2011), International Conference on Internet and Web Applications and Services, The International Academy, Research and Industry Association (IARIA), St. Maarten, The Netherlands Antilles, pp. 156-164.View/Download from: UTS OPUS
In the Service Web, a huge number of Web services compete to offer similar functionalities from distributed locations. Since no Web service is risk free, this paper aims to mitigate the risk in service selection using the 'Follow the Leader' principle as a new risk-reducing strategy. First, we define the user credibility model based on the 'Follow the Leader' principle in web-based social networks. Next, we show how to evaluate a Web service's credibility based on its trustworthiness and expertise. Finally, we present a dynamic selection model to select the best service, taking perceived performance risk and customer risk attitude into consideration. To demonstrate the feasibility and effectiveness of the new 'Follow the Leader' driven approach to alleviating risk in service selection, we used a Social Network Analysis Studio (SNAS) to verify the validity of the proposed model. The empirical results incorporated in this paper demonstrate that our approach is a significantly innovative risk-reducing strategy for service selection.
Barkowsky, T, Bertel, S, Broz, F, Chaudhri, VK, Eagle, N, Genesereth, M, Halpin, H, Hamner, E, Hoffmann, G, Hölscher, C, Horvitz, E, Lauwers, T, McGuinness, DL, Michalowski, M, Mower, E, Shipley, TF, Stubbs, K, Vogl, R & Williams, MA 2011, 'Reports of the AAAI 2010 spring symposia', AI Magazine, pp. 115-122.
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2010 Spring Symposium Series Monday through Wednesday, March 22-24, 2010, at Stanford University. The titles of the seven symposia were Artificial Intelligence for Development; Cognitive Shape Processing; Educational Robotics and Beyond: Design and Evaluation; Embedded Reasoning: Intelligence in Embedded Systems; Intelligent Information Privacy Management; It's All in the Timing: Representing and Reasoning about Time in Interactive Behavior; and Linked Data Meets Artificial Intelligence. Copyright © 2010.
Chen, S & Williams, M 2011, 'Grounding Data Purpose And Data Usage For Better Privacy Requirements Development: An Information System Perspective', The Pacific Asia Conference on Information Systems, Pacific Asia Conference on Information Systems, PACIS, Brisbane, Australia, pp. 1-13.
Data purpose is a central concept to modeling privacy requirements for information systems. Existing purpose-centric approaches for privacy protection have mainly focused on access control. The problem of ensuring the consistency between data purpose and data usage has been under-addressed. Given the lack of practical purpose-centric solutions, we argue that a grounded understanding of the underlying concepts of data purpose and usage is fundamental to modeling privacy requirements. In recognition of an existing "privacy rights" framework, this paper develops an ontological grounding of data purpose and usage that can be used to understand their implications on fundamental privacy rights for modeling privacy requirements for information systems.
Novianto, R & Williams, M 2011, 'Innate and Learned Emotion Network', Proceedings of the Second Annual Meeting of the BICA Society - Biologically Inspired Cognitive Architectures 2011 - Frontiers in Artificial Intelligence and Applications vol 233, Biologically Inspired Cognitive Architectures, IOS Press, Arlington, USA, pp. 263-268.
Autonomous agents sometimes can rely only on subjective information, in the form of emotions, to make decisions when non-subjective knowledge is unavailable. However, current emotion models fail to integrate innate and learned emotion and tend to focus on one specific aspect. This paper describes the new computational emotion model underlying ASMO, which integrates both innate and learned emotions and supports reasoning based on a probabilistic causal network. ASMO's emotion model is compared with other models and related work, demonstrating its practical capability to utilize subjective knowledge in decision making.
Wang, X & Williams, MA 2011, 'Risk, uncertainty and possible worlds', Proceedings - 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing, PASSAT/SocialCom 2011, ASE/IEEE International Conference on Social Computing (SocialCom), IEEE, Boston, MA, USA, pp. 1278-1283.
Risk is an important and ubiquitous concept that plays a crucial role in decision making across domains. Risk is also a vague notion that carries different meanings in different domain contexts and from different perspectives. This paper aims to provide a formal generalised definition of risk, based on the possible-worlds paradigm and expected utility theory, that captures the meaning of risk at both the qualitative and quantitative levels. The definition is developed from the perspective of an intelligent agent or information system. It provides a solid theoretical foundation upon which we can construct an intelligent generalised risk modelling and management framework using techniques from Artificial Intelligence research. This framework can be implemented as an integral part of an information system for better decision support and management of businesses.
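The quantitative reading of risk sketched in this abstract, expected (dis)utility over a set of possible worlds, can be illustrated with a minimal example. The function name, scenario probabilities and loss values below are invented for illustration and are not taken from the paper:

```python
def expected_risk(worlds):
    """worlds: list of (probability, loss) pairs, one per possible world.
    Returns risk as expected loss, i.e. probability-weighted disutility."""
    return sum(p * loss for p, loss in worlds)

# Hypothetical scenario: a service call that mostly succeeds but can fail.
scenarios = [(0.90, 0.0),    # works as expected
             (0.05, 10.0),   # degraded performance
             (0.05, 100.0)]  # total failure

risk_value = expected_risk(scenarios)
```

The qualitative level of the paper's definition would then compare such values across alternatives rather than read them as absolute quantities.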
Rudduck, SG, Williams, M & Stoianoff, NP 2011, 'Visualizing the Shape of Quality: An application in the context of Intellectual Property', SHAPES 1.0 The Shape Of Things 2011, Proceedings of the First Interdisciplinary Workshop on SHAPES, Interdisciplinary Workshop on SHAPES, CEUR Workshop Proceedings, Karlsruhe, Germany, pp. 1-10.
The aim of this work is to explore how the concept of shape can be applied in the context of Intellectual Property Law (IPL). Despite the global nature of IPL, the system is plagued with considerable uncertainty, especially in the specific instrument of patents. We believe the shape concept can strike a balance between inventive ideas, patent claims and objects in the world. The outcomes can then be measured as a time-dependent expectancy that an invention will conform to legal rules when under examination by officials. Specifically, we establish an empirically based benchmark that can be used to test whether shape (via visual figures) is useful in reducing the uncertainty (measured via the number of examination actions) an applicant might face in patenting technological ideas.
Al-Sharawneh, JA & Williams, M 2010, 'Credibility-aware Web-based Social Network Recommender: Follow the Leader', Proceedings of the 2nd ACM RecSys 10 Workshop on Recommender Systems and the Social Web, ACM Recommender Systems, WARWICK, Barcelona, Spain, pp. 1-8.
In web-based social networks, social trust relationships between users indicate the similarity of their needs and opinions. Trust in this sense can be used to make recommendations on the web, because trust information enables the clustering of users based on their credibility, which is an aggregation of expertise and trustworthiness. In this paper, we propose a new approach to making recommendations based on leaders' credibility in the 'Follow the Leader' model as Top-N recommenders, by incorporating social network information into user-based collaborative filtering. To demonstrate the feasibility and effectiveness of 'Follow the Leader' as a new approach to making recommendations, we first developed a Social Network Analysis Studio (SNAS) that captures real data from the Epinions dataset, and then used it to verify the proposed model. The empirical results reported in this paper demonstrate that our approach is a significantly innovative way of making effective CF-based recommendations, especially for cold-start users.
Al-Sharawneh, JA & Williams, M 2010, 'Credibility-based Social Network Recommendation: Follow the Leader', ACIS 2010 Proceedings - 21st Australasian Conference on Information Systems - Information Systems: Defining and Establishing a High Impact Discipline, Australasian Conference on Information Systems, AIS Library, Queensland University of Technology (QUT), Brisbane, QLD, Australia, pp. 1-11.
In Web-based social networks (WBSN), social trust relationships between users indicate the similarity of their needs and opinions. Trust can be used to make recommendations on the web because trust information enables the clustering of users based on their credibility, which is an aggregation of expertise and trustworthiness. In this paper, we propose a new approach to making recommendations based on leaders' credibility in the 'Follow the Leader' model as Top-N recommenders, by incorporating social network information into user-based collaborative filtering. To demonstrate the feasibility and effectiveness of 'Follow the Leader' as a new approach to making recommendations, we first develop a new analytical tool, the Social Network Analysis Studio (SNAS), which captures real data, and use it to verify the proposed model on the Epinions dataset. The empirical results demonstrate that our approach is a significantly innovative way of making effective collaborative-filtering-based recommendations, especially for cold-start users.
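The core idea, ranking items for a cold-start user by the credibility-weighted ratings of more credible 'leaders', can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, data layout and example data are assumptions:

```python
def follow_leader_top_n(target, ratings, credibility, n=3):
    """Rank items the target user has not yet rated by the
    credibility-weighted ratings of the other users, so that
    high-credibility 'leaders' dominate the recommendation.
    ratings: user -> {item: rating}; credibility: user -> score in [0, 1]."""
    seen = set(ratings.get(target, {}))
    scores = {}
    for user, items in ratings.items():
        if user == target:
            continue
        for item, rating in items.items():
            if item in seen:
                continue
            scores[item] = scores.get(item, 0.0) + credibility[user] * rating
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Hypothetical data: 'new' is a cold-start user with no ratings of their own.
ratings = {'u1': {'a': 5, 'b': 3}, 'u2': {'b': 5}, 'new': {}}
credibility = {'u1': 0.9, 'u2': 0.2, 'new': 0.0}  # u1 is the leader

top_items = follow_leader_top_n('new', ratings, credibility, n=2)
```

Because the ranking relies on leader credibility rather than on the target user's own rating history, it still produces recommendations when that history is empty, which is the cold-start case the abstract highlights.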
Al-Sharawneh, JA, Williams, M & Goldbaum, D 2010, 'Web Service Reputation Prediction based on Customer Feedback Forecasting Model', Enterprise Distributed Object Computing Conference Workshops (EDOCW), 2010 14th IEEE International, Enterprise Distributed Object Computing Conference, IEEE Computer Society, Brazil, pp. 33-40.
In the Service Web, customers' feedback constitutes a substantial component of a Web Service's reputation and trustworthiness, which in turn affects uptake of the service by future consumers. This paper presents an approach to predicting reputation in service-oriented environments. To assess a Web Service's reputation, we define reputation key metrics that aggregate feedback on different aspects of the ratings. For situations where rating feedback is not available, we propose a Feedback Forecasting Model (FFM), based on Expectation Disconfirmation Theory (EDT), to predict the reputation of a web service in dynamic settings. We then introduce the concept of a 'Reputation Aspect' and show how to compute it efficiently. Finally, we show how to integrate the Feedback Forecasting Model into Aspect-Based Reputation Computation. To demonstrate the feasibility and effectiveness of our approach, we test the proposed model using our Service Selection Simulation Studio (4S). The simulation results included in this paper show the applicability and performance of the proposed reputation prediction based on the Customer Feedback Forecasting Model, and demonstrate that the model is efficient, particularly in dynamic environments.
Chen, S & Williams, M 2010, 'Modeling Privacy Requirements for Quality Manipulation of Information on Social Networking Sites', 2010 AAAI Spring Symposium on Intelligent Information Privacy Management, National Conference of the American Association for Artificial Intelligence, AAAI Press, Stanford University, pp. 42-47.
The volume and diversity of information shared and exchanged within and across social networking sites is increasing. As a result new and challenging requirements are needed for quality manipulation of the information. An important requirement is information usability with privacy dimensions. Existing social networking sites do not provide adequate functionalities to fulfill privacy requirements of information use. This is largely due to the lack of a privacy-by-design approach that conducts an effective privacy requirements analysis as a means to develop suitable models for social networking that protect privacy. To bridge this gap, this paper analyses and models privacy requirements for a recommendation service in social networking sites.
Chen, S & Williams, M 2010, 'Privacy: An Ontological Problem', The Pacific Asia Conference on Information Systems, The Pacific Asia Conference on Information Systems, PACIS, Taipei, Taiwan, pp. 1402-1413.
Approaches to addressing privacy issues tend to assume privacy is well understood and typically approach the problem from a security perspective. However, security is more concerned with safety than with privacy. Given the lack of satisfaction with advanced privacy-enhancing technologies, we argue that an ontological framework is fundamental to advancing the capabilities of technology-enabled solutions. In recognition that privacy is a right to control information about oneself, this paper develops a new ontological foundation for privacy - an initial and important step towards modeling privacy as a means to improving the privacy-protection effectiveness of information systems.
Chen, S & Williams, M 2010, 'Towards A Comprehensive Requirements Architecture For Privacy-Aware Social Recommender Systems', The Seventh Asia-Pacific Conferences on Conceptual Modelling, Asia-Pacific Conferences on Conceptual Modelling, Australian Computer Society, Inc, Brisbane, Australia, pp. 33-42.
Social recommendations have been rapidly adopted as important components of social network sites. However, they assume a cooperative relationship between the parties involved. This assumption can lead to privacy issues and new opportunities for privacy infringement. Traditional recommendation techniques fail to address these issues, and as a consequence the development of privacy-aware cooperative social recommender systems constitutes an important research gap. In this paper we identify key problems that arise from the privacy dimension of social recommendations and propose a comprehensive requirements architecture for building privacy-aware cooperative social recommender systems.
Elliot, S & Williams, M 2010, 'World-Class IS-Enabled Business Innovation: A Case Study of IS Leadership, Strategy and Governance', Pacific Asia Conference on Information Systems (PACIS 2010), DBLP Computer Science Bibliography, Taipei, Taiwan, pp. 1814-1821.
While many global corporations acknowledge they lack corporate capabilities for successful technology-enabled business innovation, an Australian financial services provider has been ranked by an international ratings agency in its highest categories due to the capabilities of its Information Systems. Its Loan Processing system has been commended by the ratings agency as the principal reason for its high ranking and for the organization's inclusion on a global list of selected service providers. This paper presents a longitudinal case study of how an organization with 750 employees located in rural Australia came to develop world-class strategic Information Systems. From its first system nearly 30 years ago, this paper shows how the organization has grown in-house capabilities to devise, develop, implement and manage applications of technology from operational systems that automate specific functions to systems that inform and enable enterprise strategy. The implications for theory and practice are discussed.
Genesereth, M, Vogl, R & Williams, MA 2010, 'AAAI Spring Symposium - Technical Report: Preface', AAAI Spring Symposium - Technical Report.
Rudduck, SG & Williams, M 2010, 'Conceptual Ternary Diagrams for Shape Perception: A Preliminary Step', 2010 AAAI Spring Symposium Series: Cognitive Shape Processing, National Conference of the American Association for Artificial Intelligence, AAAI, Stanford University, USA, pp. 34-38.
This work-in-progress provides a preliminary cognitive investigation of how the external visualization of the Ternary diagram (TD) might be used as an underlying model for exploring the representation of simple 3D cuboids according to the theory of Conceptual Spaces. Gärdenfors introduced geometrical entities, known as conceptual spaces, for modeling concepts. He considered multidimensional spaces equipped with a range of similarity measures (such as metrics) and guided by criteria and mechanisms as a geometrical model for concept formation and management. Our work is inspired by the conceptual spaces approach and takes ternary diagrams as its underlying conceptual model. The main motivation for our work is twofold. First, Ternary Diagrams are powerful conceptual representations that have a solid historical and mathematical foundation. Second, the notion of overlaying an Information-Entropy function on a ternary diagram can lead to new insights into applications of reasoning about shape and other cognitive processes.
Wang, W, Elliot, S & Williams, M 2010, 'An IS contribution to the UN Millennium Development Goals: Next Generation Vaccination Management in the Developing World', ACIS 2010 Proceedings, Australasian Conference on Information Systems, AIS Electronic Library (AISeL), Brisbane, Australia, pp. 1-10.
More than 9.5 million people in the developing world die unnecessarily each year, not because of a lack of medicine but because of poor information management. It is proposed that the IS Discipline could contribute to resolving this and similar global challenges by making a greater contribution to high-impact and high-visibility global issues such as the UN's Millennium Development Goals. In this paper we illustrate the potential for the IS Discipline to take a leading role in high-impact issues by presenting an innovative design for a mainstream IS solution to an illustrative global healthcare issue, through appropriate applications of mobile technologies, cloud computing, social networking and geolocation services.
Wang, X & Williams, M 2010, 'A Graphical Model for Risk Analysis and Management', Lecture Notes in Artificial Intelligence 6291 - Knowledge Science, Engineering and Management, Knowledge Science, Engineering and Management, Springer-Verlag Berlin Heidelberg, Belfast, Northern Ireland, pp. 256-269.
Risk analysis and management are important capabilities in intelligent information and knowledge systems. We present a new approach using directed-graph-based models for risk analysis and management. Our modelling approach is inspired by, and builds on, the two-level approach of the Transferable Belief Model: a credal level, used for risk analysis and model construction, which captures beliefs in causal inference relations among the variables within a domain, and a pignistic (betting) level used for decision making. The risk model at the credal level can be transformed into a probabilistic model through a pignistic transformation function. This paper focuses on model construction at the credal level. Our modelling approach captures expert knowledge in a formal and iterative fashion based on the Open World Assumption (OWA), in contrast to Bayesian Network based approaches to managing the uncertainty associated with risks, which assume all the domain knowledge and data have been captured beforehand. As a result, our approach does not require complete knowledge and is well suited to modelling risk in dynamic, changing environments where information and knowledge are gathered over time as decisions need to be taken. Its performance is related to the quality of the knowledge at hand at any given time.
Wang, X & Williams, M 2010, 'A Practical Risk Management Framework for Intelligent Information Systems', PACIS 2010 Proceedings, Pacific Asia Conference on Information Systems (PACIS 2010), AIS Electronic Library (AISeL), Taipei, Taiwan.
This paper reports progress towards the development of a practical risk analysis and management framework for intelligent information systems based on state-of-the-art techniques in uncertainty management. We provide an analysis of the challenges raised by the need to manage risk and identify a set of key requirements for a practical framework that can support risk management in real environments that are open, complex and dynamic. We assess a number of relevant theories, approaches and techniques for their suitability in addressing the risk management challenges. Finally, we present our current multi-level risk analysis and modelling framework, and use benchmark problems in two entirely different domains to illustrate the breadth of the framework's applicability.
Williams, M 2010, 'Autonomy: Life and Being', Lecture Notes in Artificial Intelligence 6291 - Knowledge Science, Engineering and Management, Knowledge Science, Engineering and Management, Springer-Verlag Berlin Heidelberg, Belfast, Northern Ireland, pp. 137-147.
This paper uses robot experience to explore the key concepts of autonomy, life and being. Unfortunately, there are no widely accepted definitions of autonomy, life or being. Using a new cognitive agent architecture, we argue that autonomy is a key ingredient of both life and being, and set about exploring autonomy as a concept and a capability. Some schools of thought regard autonomy as the key characteristic that distinguishes a system from an agent: agents are systems with autonomy, but rarely is a definition of autonomy provided. Living entities are autonomous systems, and autonomy is vital to life. Intelligence presupposes autonomy too; what would it mean for a system to be intelligent but not exhibit any form of genuine autonomy? Our philosophical, scientific and legal understanding of autonomy and its implications is immature, and as a result progress towards designing, building, managing, exploiting and regulating autonomous systems is held back. In response, we put forward a framework for exploring autonomy as a concept and capability based on a new cognitive architecture. Using this architecture, tools and benchmarks can be developed to analyze and study autonomy in its own right, as a means to further our understanding of autonomous systems, life and being. This endeavor would lead to important practical benefits for autonomous systems design and help determine the legal status of autonomous systems. It is only with a new, enabling understanding of autonomy that the dream of Artificial Intelligence and Artificial Life can be realized. We argue that designing systems with genuine autonomy capabilities can be achieved by focusing on agent experiences of being, rather than attempting to encode human experiences as symbolic knowledge and know-how in the artificial agents we build.
Novianto, R, Johnston, BG & Williams, M 2010, 'Attention in the ASMO cognitive architecture', Biologically Inspired Cognitive Architectures 2010 - Frontiers in Artificial Intelligence and Applications vol 221: Proceedings of the First Annual Meeting of the BICA Society, Annual Meeting of the BICA Society, IOS Press, Washington, USA, pp. 98-105.
The ASMO Cognitive Architecture has been developed to support key capabilities: attention, awareness and self-modification. In this paper we describe the underlying attention model in ASMO. The ASMO Cognitive Architecture is inspired by a biological attention theory, and offers a mechanism for directing and creating behaviours, beliefs, anticipation, discovery, expectations and changes in a complex system. Thus, our attention based architecture provides an elegant solution to the problem of behaviour development and behaviour selection particularly when the behaviours are mutually incompatible.
Williams, M, Gardenfors, P, Johnston, BG & Wightwick, GR 2010, 'Anticipation as a Strategy: A Design Paradigm for Robotics', Lecture Notes in Artificial Intelligence 6291 - Knowledge Science, Engineering and Management, Knowledge Science, Engineering and Management, Springer-Verlag Berlin Heidelberg, Belfast, Northern Ireland, pp. 341-353.
Anticipation plays a crucial role during any action, particularly in agents operating in open, complex and dynamic environments. In this paper we consider the role of anticipation as a strategy from a design perspective. Anticipation is a crucial skill in sporting games like soccer, tennis and cricket. We explore the role of anticipation in robot soccer matches in the context of reaching the RoboCup vision to develop a robot soccer team capable of defeating the FIFA World Champions in 2050. Anticipation in soccer can be planned or emergent but whether planned or emergent, anticipation can be designed. Two key obstacles stand in the way of developing more anticipatory robot systems: an impoverished understanding of the 'anticipation' process/capability and a lack of know-how in the design of anticipatory systems. Several teams at RoboCup have developed remarkable preemptive behaviors. The CMU Dive and UTS Dodge are two compelling examples. In this paper we take steps towards designing robots that can adopt anticipatory behaviors by proposing an innovative model of anticipation as a strategy that specifies the key characteristics of anticipation behaviors to be developed. The model can drive the design of autonomous systems by providing a means to explore and to represent anticipation requirements. Our approach is to analyze anticipation as a strategy and then to use the insights obtained to design a reference model that can be used to specify a set of anticipatory requirements for guiding an autonomous robot soccer system.
Al-Sharawneh, JA & Williams, M 2009, 'A Social Network Approach in Semantic Web Services Selection using Follow the Leader Behavior', Enterprise Distributed Object Computing Conference Workshops, 2009., Enterprise Distributed Object Computing Conference, IEEE, Auckland, New Zealand, pp. 310-319.
Automatic discovery of web services is a crucial task for e-Business communities. Locating and selecting 'the best' web service from a vast number of similar services that matches the user's requirements and preferences is a cognitive challenge and requires the use of an intelligent decision making framework. This paper develops a flexible ontological architecture and framework for Semantic Web Service Selection that exploits Goldbaum's innovative "Follow the Leader" model originally designed as an analytic tool for studying social network behavior and evolution. The framework proposes two new ontologies integrated in a recommender system, which guides a user to select the best service that matches their requirements and preferences. We test and evaluate several behaviors of market leader scenarios using a simulation agent.
Al-Sharawneh, JA & Williams, M 2009, 'ABMS: Agent-based Modeling and Simulation in Web Service Selection', International Conference on Engineering Management and Service Sciences (EMS 2009), International Conference on Engineering Management and Service Sciences, IEEE, Beijing, China, pp. 1-6.
Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous interacting agents, and it promises to play an important role in research and education. Some researchers have contended that ABMS 'is a third way of doing science'. ABMS has been applied to a wide range of research across a variety of complex domain problems. Social simulation is playing an increasingly important role in today's interconnected society. In this paper we apply agent-based modeling and simulation to investigate the impact of Goldbaum's innovative 'Follow the Leader' model in social networks on web service selection, using a recommender system that guides a user to select the best service matching their requirements and preferences. We test and evaluate several customer-behavior scenarios using our simulation tool, the 'SSSS: Service Selection Simulation Studio'.
Benferhat, S, Dubois, D, Prade, H & Williams, M 2009, 'A General Framework for Revising Belief Bases using Qualitative Jeffrey's Rule', Foundations of Intelligent Systems - 18th International Symposium, ISMIS 2009: Lecture Notes in Artificial Intelligence vol 5722, International Symposium on Foundations of Intelligent Systems, Springer, Prague, Czech Republic, pp. 612-621.
Intelligent agents require methods to revise their epistemic state as they acquire new information. Jeffrey's rule, which extends conditioning to uncertain inputs, is currently used for revising probabilistic epistemic states when new information is uncertain. This paper analyses the expressive power of two possibilistic counterparts of Jeffrey's rule for modeling belief revision in intelligent agents. We show that this rule can be used to recover most of the existing approaches proposed in knowledge base revision, such as adjustment, natural belief revision, drastic belief revision, and revision of one epistemic state by another epistemic state. In addition, we show that some recent forms of revision, namely improvement operators, can also be recovered in our framework.
Benferhat, S, Dubois, D, Prade, H & Williams, MA 2009, 'A general framework for revising belief bases using qualitative Jeffrey's rule', Commonsense 2009 - Proceedings of the 9th International Symposium on Logical Formalizations of Commonsense Reasoning, pp. 7-12.
Intelligent agents require methods to revise their epistemic state as they acquire new information. Jeffrey's rule, which extends conditioning to uncertain inputs, is used to revise probabilistic epistemic states when new information is uncertain. This paper analyses the expressive power of two possibilistic counterparts of Jeffrey's rule for modeling belief revision in intelligent agents. We show that this rule can be used to recover most of the existing approaches proposed in knowledge base revision, such as adjustment, natural belief revision, drastic belief revision, and revision of one epistemic state by another epistemic state. In addition, we show that some recent forms of revision, namely reinforcement operators, can also be recovered in our framework.
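The probabilistic Jeffrey's rule that these two papers take as their starting point revises a prior P by an uncertain input that fixes new weights q_i on a partition {E_i}: P'(A) = Σ_i P(A | E_i) · q_i. A minimal sketch of the classical rule (not of the possibilistic counterparts the papers develop; the function name and example numbers are illustrative):

```python
def jeffrey_revise(prior, partition, q):
    """prior: dict world -> probability; partition: list of disjoint sets
    of worlds covering the domain; q: new probability for each cell.
    Within each cell the prior is conditioned as usual, then the cell
    is reweighted to its new probability q_i."""
    posterior = {}
    for cell, qi in zip(partition, q):
        cell_mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = (prior[w] / cell_mass) * qi
    return posterior

# Illustrative example: uncertain evidence raises the weight of {a, b}
# from 0.5 to 0.9 while preserving the ratio of a to b inside the cell.
prior = {'a': 0.2, 'b': 0.3, 'c': 0.5}
post = jeffrey_revise(prior, [{'a', 'b'}, {'c'}], [0.9, 0.1])
```

Ordinary conditioning is the special case where one cell receives weight 1, which is why the rule is described as extending conditioning to uncertain inputs.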
Chen, S & Williams, M 2009, 'Privacy In Social Networks: A Comparative Study', Pacific Asia Conference on Information Systems (PACIS) 2009 Proceedings, Pacific Asia Conference on Information Systems, Association for Information Systems, Indian School of Business, Hyderabad, India, pp. 1-12.
Social networks provide unprecedented opportunity for individuals and organizations to share information. At the same time they present significant challenges to privacy that, left unaddressed, will stifle information sharing and innovation. In this paper we analyse four different prototypical existing social networks, and identify key problems that arise for a privacy-by-design approach to the development of a new breed of social networks.
Lakemeyer, G, Morgenstern, L & Williams, MA 2009, 'Preface', Commonsense 2009 - Proceedings of the 9th International Symposium on Logical Formalizations of Commonsense Reasoning.
Novianto, R & Williams, M 2009, 'The Role of Attention in Robot Self-Awareness', Robot and Human Interactive Communication, 2009. RO-MAN 2009. The 18th IEEE International Symposium on, IEEE International Symposium on Robot and Human Interactive Communication, IEEE, Toyama, pp. 1047-1053.
A robot may not be truly self-aware even though it can have some characteristics of self-awareness, such as having emotional states or the ability to recognize itself in a mirror. We define self-awareness in robots as being characterized by the capacity to direct attention toward their own mental state. This paper explores robot self-awareness and the role that attention plays in achieving it. We propose a new attention-based approach to self-awareness called ASMO and conduct a comparative analysis of approaches that highlights the innovation and benefits of ASMO. We then describe how our attention-based self-awareness can be designed and used to develop self-awareness in state-of-the-art humanoid robots.
Williams, M 2009, 'Evidence Transmutations: Gathering Admissible Evidence using Belief Revision', Proceedings of the 12th International Conference on Artificial Intelligence and Law, 12th International Conference on Artificial Intelligence and Law, ACM, Barcelona, Spain, pp. 216-217.
In this paper we explore the potential of using extended belief change operators for modeling the evolution of legal evidence. We introduce a new representation, an evidence structure, that can be used to support rich change operators called transmutations. Evidence structures and transmutations support the iterated nature of evidence gathering and acquisition. Importantly, reliability is maintained during transmutations and modified where necessary using the Principle of Minimal Change. We establish that this process possesses desirable properties; in particular we construct transmutations that do not assume logical omniscience and that satisfy the widely accepted AGM rationality postulates for revision and contraction, while at the same time preserving a key value of the evidence, its reliability.
Williams, M 2009, 'Privacy Management, the Law & Business Strategies: A Case for Privacy Driven Design', International Conference on Computational Science and Engineering, International Conference on Computational Science and Engineering, IEEE Press, Vancouver, Canada, pp. 60-68.
This paper explores the adage that good privacy is good business. Businesses, like social networks, often seek to create value from personal information and to monetize it. Unlocking and harvesting the value embedded in personal information can lead to the disclosure of private and sensitive information, and subsequent harm. Personal information management practices can be a means to competitive and strategic advantage; however, they are also subject to privacy law. We explore the underlying tension between transparency and disclosure in the privacy-versus-business-strategy arena, and argue that in order to achieve sustained innovation, next-generation applications and services will require a fresh, imaginative and strategic privacy-by-design approach. Personal information management is a complex task and cannot be adequately achieved without significant attention and commitment to privacy requirements in systems analysis and design. Due to the potential power, magnitude, complexity and scope of web technologies, there is a pressing need to understand privacy requirements better, and to invest in developing tools and techniques for modeling, analyzing, designing and building more effective personal information management systems that seek consent where appropriate and that offer users natural choices and sophisticated mechanisms for controlling their personal information.
Williams, M & Elliot, S 2009, 'Strategic Treasury Risk Management in Uncertain and Changing Environments', Strategic Management Society Conference, Strategic Management Society, Washington, USA.
Williams, MA 2009, 'Privacy management, the law and global business strategies: A case for privacy driven design', AAAI Spring Symposium - Technical Report, pp. 71-79.
This paper is based on the adage that 'good privacy is good business'. Personal information holds significant value and web-based businesses often seek to monetize that value. Unlocking personal information value in web-based businesses, like social networks, can lead to disclosure of private and sensitive information, and subsequent harm. Personal information management business practices are subject to privacy law, but perhaps more importantly, practices that protect personal information can be a means to competitive advantage, and as a result they can form the basis of effective business strategy. We explore the underlying tension between transparency and disclosure in the privacy versus business strategy arena, and argue that the next generation of web businesses based on powerful Web 3.0 applications and services will demand a privacy-by-design approach, rather than addressing privacy concerns as an afterthought. Due to the potential power, magnitude, complexity and scope of Web 3.0 there is a need to use sophisticated technology-enabled approaches to assist users to monitor and manage personal information and its usage in a more transparent, proactive fashion.
Johnston, BG & Williams, M 2009, 'A Formal Framework for the Symbol Grounding Problem', Proceedings of the Second Conference on Artificial General Intelligence, Conference on Artificial General Intelligence, Atlantis Press, Washington, USA, pp. 61-66.
A great deal of contention can be found within the published literature on grounding and the symbol grounding problem, much of it motivated by appeals to intuition and unfalsifiable claims. We seek to define a formal framework of representation grounding that is independent of any particular opinion, but that promotes classification and comparison. To this end, we identify a set of fundamental concepts and then formalize a hierarchy of six representational system classes that correspond to different perspectives on the representational requirements for intelligence, describing a spectrum of systems built on representations that range from symbolic through iconic to distributed and unconstrained. This framework offers utility not only in enriching our understanding of symbol grounding and the literature, but also in exposing crucial assumptions to be explored by the research community.
Johnston, BG & Williams, M 2009, 'Autonomous Learning of Commonsense Simulations', International Symposium on Logical Formalizations of Commonsense Reasoning, Symposium on Logical Formalizations of Commonsense Reasoning, UTSePress, Toronto, Canada, pp. 73-78.
Parameter-driven simulations are an effective and efficient method for reasoning about a wide range of commonsense scenarios that can complement the use of logical formalizations. The advantage of simulation is its simplified knowledge elicitation process: rather than building complex logical formulae, simulations are constructed by simply selecting numerical values and graphical structures. In this paper, we propose the application of machine learning techniques to allow an embodied autonomous agent to automatically construct appropriate simulations from its real-world experience. The automation of learning can dramatically reduce the cost of knowledge elicitation, and therefore result in models of commonsense with breadth and depth not possible with traditional engineering of logical formalizations.
Johnston, BG & Williams, M 2009, 'Conservative and Reward-driven Behavior Selection in a Commonsense Reasoning Framework', 2009 AAAI Symposium: Multirepresentational Architectures for Human-Level Intelligence, National Conference of the American Association for Artificial Intelligence, AAAI Press, Washington, USA, pp. 14-19.
Comirit is a framework for commonsense reasoning that combines simulation, logical deduction and passive machine learning. While a passive, observation-driven approach to learning is safe and highly conservative, it is limited to interaction only with those objects that it has previously observed. In this paper we describe a preliminary exploration of methods for extending Comirit to allow safe action selection in uncertain situations, and to allow reward-maximizing selection of behaviors.
Liu, W & Williams, M 2008, 'Strategies for Business in Virtual Worlds: Case Studies in Second Life', Pacific Asia Conference on Information Systems, PACIS 2008, Pacific Asia Conference on Information Systems, City University of Hong Kong Press, Suzhou, China, pp. 888-900.
In this paper, we use qualitative and quantitative methodologies to analyse and understand a range of business strategies in Second Life.
Williams, M 2008, 'Representation = Grounded Information', Lecture Notes in Artificial Intelligence Vol 5351: PRICAI 2008: Trends in Artificial Intelligence, Pacific Rim International Conference on Artificial Intelligence, Springer, Hanoi, Vietnam, pp. 473-484.
The grounding problem remains one of the most fundamental issues in the field of Artificial Intelligence. We argue that representations are grounded information and that an intelligent system should be able to make and manage its own representations.
Williams, M & Trieu, M 2007, 'Grounded representation driven robot motion design', 11th RoboCup International Symposium, RoboCup 2007, Robot Soccer World Cup, Springer Berlin / Heidelberg, Atlanta, GA, pp. 520-527.
Grounding robot representations is an important problem in Artificial Intelligence. In this paper we show how a new grounding framework guided the development of an improved locomotion engine for the AIBO. The improvements stemmed from higher quality representations that were grounded better than those in the previous system. Since the AIBO is more grounded under the new locomotion engine, it makes better decisions and achieves its design goals more efficiently. Furthermore, a well grounded robot offers significant software engineering benefits since its behaviours can be developed, debugged and tested more effectively.
Johnston, BG & Williams, M 2008, 'Comirit: Commonsense Reasoning by Integrating Simulation and Logic', Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Conference on Artificial General Intelligence, IOS Press, University of Memphis, TN, USA, pp. 200-211.
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behaviour with precision and performance that is difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics and so do not find natural application in the deductive machinery of commonsense or general-purpose reasoning systems. This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of 'molecular' models. This combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
Johnston, BG, Yang, F, Mendoza, R, Chen, X & Williams, M 2008, 'Ontology Based Object Categorization for Robots', Lecture Notes in Artificial Intelligence Vol 5345: Practical Aspects of Knowledge Management - Proceedings of the 7th International Conference, PAKM 2008, International Conference on Practical Aspects of Knowledge Management, Springer, Yokohama, Japan, pp. 219-231.
Meaningfully managing the relationship between representations and the entities they represent remains a challenge in robotics known as grounding. In this paper we use Semantic Web technologies to provide a powerful extension to existing proposals for grounding robotic systems, and have consequently developed OBOC, the first robotic software system with an ontology-based vision sub-system.
Anshar, M & Williams, M 2007, 'Evolutionary Robot Gaits', International Conference on Intelligent Unmanned Systems, International Conference on Intelligent Unmanned Systems, Bali, Indonesia, pp. 217-223.
Gardenfors, P & Williams, M-A 2007, 'Multi-agent communication, planning, and collaboration based on perceptions, conceptions, and simulations', Mental States, Vol 1: Evolution, Function, Nature, Conference on International Language and Cognition, John Benjamins B.V., Coffs Harbour, Australia, pp. 95-121.
Sims, F, Williams, M & Elliot, S 2007, 'Understanding the Mobile Experience Economy: A key to richer effective M-Business Models and Strategies', Proceedings of the International Conference on Mobile Business, International Conference on Mobile Business, IEEE Computer Society Press, Toronto, Canada, pp. 1-8.
A major challenge for firms today is how to differentiate themselves in a global market and build competitive advantage. Many firms have been able to move up the value chain from a services base to an experience base as a means of attaining high levels of customer satisfaction and profitability. Consequently, a better understanding of the experience economy will assist business managers and designers to develop effective strategies by focusing on the m-business experience, and on how this experience can underpin sustainable technology innovation, business models and strategies, and the design of products for mobile delivery that meet the market's needs. In this paper we describe several experience economy models, identify their weaknesses, and introduce a new cognition-based experience model that can be used to develop more effective m-business infrastructure and applications. It offers a new understanding of experiences that emphasizes cognition as a whole, including background knowledge, desires and intentions, rather than the sensory and perceptual aspects alone that are the focus of most traditional models. As a result the new model offers new predictive and explanatory power in understanding the m-business experience economy.
Johnston, BG & Williams, M 2007, 'A Generic Framework for Approximate Simulation in Commonsense Reasoning Systems', International Symposium on Logical Formalizations of Commonsense Reasoning, Symposium on Logical Formalizations of Commonsense Reasoning, AAAI Press, Stanford University, USA, pp. 71-76.
This paper introduces the Slick architecture and outlines how it may be applied to solve the well known Egg-Cracking Problem. In contrast to other solutions to this problem that are based on formal logics, the Slick architecture is based on general-purpose and low-resolution quantitative simulations. On this benchmark problem, the Slick architecture offers greater elaboration tolerance and allows for faster elicitation of more general axioms.
Mendoza, R, Johnston, BG, Yang, F, Huang, Z, Chen, X & Williams, M 2007, 'OBOC: Ontology Based Object Categorisation for Robots', The Fourth International Conference on Computational Intelligence, Robotics and Autonomous Systems, International Conference on Computational Intelligence, Robotics and Autonomous Systems, Massey University Press, Palmerston North, New Zealand, pp. 178-183.
Meaningfully managing the relationship between representations and the entities they represent remains a challenge in robotics known as grounding. Useful insights can be found by approaching robotic systems development specifically with the grounding and symbol grounding problem in mind. In particular, Semantic Web technologies turn out to be not merely applicable to web-based software agents, but can also provide a powerful extension to existing proposals for grounded robotic systems development. Given the interoperability and openness of the Semantic Web, such technologies can increase the ability for a robot to introspect, communicate and be inspected - benefits that ultimately lead to more grounded systems with open-ended intelligent behaviour.
Chen, X, Liu, W & Williams, MA 2006, 'ACM International Conference Proceeding Series: Preface', ACM International Conference Proceeding Series.
Elliot, S, Williams, M & Bjorn-Anderson, N 2005, 'Strategic Management of technology-enabled disruptive innovation: Next generation web technologies', Proceedings of CIMCA 2005 jointly with International Conference on Intelligent Agents, Web Technologies, and Internet Commerce 2005, International Conference on Intelligent Agents, Web Technologies, and Internet Commerce, IEEE, Vienna, Austria, pp. 113-120.
Technology-enabled business innovation presents the potential to structurally transform enterprise and industry practice, but uncertainty remains as to how such transformations might be managed. The search for higher returns from technology-enabled business innovation will inevitably lead to the adoption and exploitation of powerful, but disruptive technologies that bring with them higher levels of risk. Disruptive innovation is placed into context through the literature and through examples of past IT innovations with disruptive impact. This paper examines how organizations could obtain improved management of the adoption of potentially disruptive future generation Web technologies. The Business Innovation Technology Adoption Model (BITAM) is applied to emerging Web technologies to help identify the nature and extent of their potentially disruptive impact on business practice and management strategies so as to better enable mitigation of business risk.
Karol, A & Williams, M 2005, 'Distributed sensor fusion for object tracking', RoboCup 2005: Robot Soccer World Cup IX, Lecture Notes in Artificial Intelligence, Robot Soccer World Cup, Springer-Verlag Berlin, Osaka, Japan, pp. 504-511.
In a dynamic situation like robot soccer any individual player can only observe a limited portion of their environment at any given time. As such to develop strategies based upon planning and cooperation between different players it is imperative that th
Karol, A, Williams, M & Elliot, S 2006, 'The evolution of IS: Treasury decision support & management past, present & future', Past and Future of Information Systems: 1976-2006 and Beyond, World Computer Congress, Springer, Santiago, Chile, pp. 89-100.
This paper contributes to the discipline of Information Systems (IS) by illustrating the continuing evolution of IS applications to a single, core business function. Historical developments in IS and the major global treasury activity, foreign exchange t
Stanton, CJ & Williams, M 2005, 'A novel and practical approach towards color constancy for mobile robots using overlapping color space signatures', RoboCup 2005: Robot Soccer World Cup IX, Lecture Notes in Artificial Intelligence, Robot Soccer World Cup, Springer-Verlag Berlin, Osaka, Japan, pp. 444-451.
Color constancy is the ability to correctly perceive an object's color regardless of illumination. Within the controlled, color-coded environments in which many robots operate (such as RoboCup), engineers have been able to avoid the color constancy probl
Xu, K, Chen, X, Liu, W & Williams, M 2006, 'Legged Robot Gait Locus Generation Based on Genetic Algorithms', International Symposium on Practical Cognitive Agents and Robots (PCAR 2006) - Proceedings, International Symposium on Practical Cognitive Agents and Robots, ACM digital library, Perth, Australia, pp. 51-62.
Achieving an effective gait locus for legged robots is a challenging task. It is often done manually in a laborious way due to the lack of research in automatic gait locus planning. Bearing this problem in mind, this article presents a gait locus planning method using inverse kinematics while incorporating genetic algorithms. Using quadruped robots as a platform for evaluation, this method is shown to generate a good gait locus for legged robots.
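The approach summarised above (searching for gait-locus parameters with a genetic algorithm rather than tuning them by hand) can be illustrated with a minimal sketch. Note this is our own toy example, not code from the paper: the three parameters (step length, step height, period), their bounds, and the fitness function are stand-in assumptions.

```python
import random

# Illustrative sketch of GA-based gait-locus tuning. The gait parameters
# (step length, step height, period) and the fitness function below are
# hypothetical stand-ins, not the authors' model.

BOUNDS = [(1.0, 8.0), (0.5, 4.0), (0.5, 2.0)]  # step, height, period

def clamp(p):
    """Keep a candidate gait within mechanically feasible bounds."""
    return tuple(min(max(x, lo), hi) for x, (lo, hi) in zip(p, BOUNDS))

def fitness(p):
    step, height, period = p
    # Hypothetical objective: long, quick strides with step height near 2.0.
    return step / period - 0.5 * abs(height - 2.0)

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [clamp(tuple(rng.uniform(lo, hi) for lo, hi in BOUNDS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = tuple((x + y) / 2 + rng.gauss(0, 0.1)  # crossover + mutation
                          for x, y in zip(a, b))
            children.append(clamp(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

On a real robot the fitness evaluation would be a measured quantity (e.g. walking speed from odometry) rather than a closed-form function, which is what makes the automatic search valuable.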
Hecker, M, Karol, A, Stanton, C & Williams, MA 2005, 'Smart sensor networks: Communication, collaboration and business decision making in distributed complex environments', ICMB 2005: International Conference on Mobile Business, International Conference on Mobile Business, IEEE Computer Society, Sydney, Australia, pp. 242-248.
Hecker, M, Karol, A, Stanton, CJ & Williams, M 2005, 'Smart sensor networks: communication, collaboration and business decision making in distributed complex environments', Proceedings of the International Conference on Mobile Business (ICMB-05), International Conference on Mobile Business, IEEE, Sydney, Australia, pp. 1-7.
Smart Sensor Networks are one of the most exciting research areas in information technology today; their potential for business applications is vast, but yet to be realised. In this paper we argue that intelligent sensor networks can only reach their potential for business applications if the network is grounded so as to support meaningful information sharing and knowledge generation as the basis for an effective business decision model. Our work is based on the idea that sensors extract information from their environment which, typically, must be fused with other sensor data and network knowledge so that effective business decision making is supported.
Karol, A & Williams, M 2005, 'Understanding Human Strategies for Change - An Empirical Study', Theoretical Aspects of Rationality and Knowledge - Proceedings of the Tenth Conference, Theoretical Aspects of Rationality and Knowledge, ACM digital library, Singapore, pp. 137-149.
Stanton, CJ & Williams, M 2005, 'An innovative interactive web-enabled learning space for exploring intelligent mobile sensor networks and their business applications', Proceedings of International Conference on Mobile Business (ICMB'05), International Conference on Mobile Business, IEEE, Sydney, Australia, pp. 249-254.
In this paper we describe a physical and virtual space for conducting research into intelligent mobile sensor networks. Intelligent mobile sensor networks present a number of difficult theoretical and practical research challenges. We design an imaginati
Tran, Q, Low, GC & Williams, M 2004, 'A preliminary comparative feature analysis of multi-agent systems development methodologies', Lecture Notes in Computer Science - Agent-Oriented Information Systems II, Agent-Oriented Information Systems Workshop, Springer-Verlag Berlin, Riga, Latvia, pp. 157-168.
While there are a considerable number of software engineering methodologies for developing multi-agent systems, not much work has been reported on the evaluation and comparison of these methodologies. This paper presents a comparative analysis of five we
Karol, A, Gray, RW, Williams, M & Elliot, S 2004, 'Improved eBusiness Treasury Risk Management Using Intelligent Agents', Managing New Wave Information Systems: Enterprise, Government and Society - Proceedings of the 15th Australasian Conference on Information Systems (ACIS2004), Australasian Conference on Information Systems, University of Tasmania, Hobart, Australia, pp. 54-55.
Karol, A, Nebel, B, Stanton, CJ & Williams, M 2003, 'Case Based Game Play in the RoboCup Four-Legged League: Part I The Theoretical Model', RoboCup 2003: Robot Soccer World Cup VII, Robot Soccer World Cup, Springer Verlag, Padua, Italy, pp. 739-747.
Robot Soccer involves planning at many levels, and in this paper we develop high level planning strategies for robots playing in the RoboCup Four-Legged League using case based reasoning. We develop a framework for developing and choosing game plays. Game plays are widely used in many team sports e.g. soccer, hockey, polo, and rugby. One of the current challenges for robots playing in the RoboCup Four-Legged League is choosing the right behaviour in any game situation. We argue that a flexible theoretical model for using case based reasoning for game plays will prove useful in robot soccer. Our model supports game play selection in key game situations which should in turn significantly advantage the team.
Stanton, CJ & Williams, M 2003, 'Grounding Robot Sensory and Symbolic Information Using the Semantic Web', RoboCup 2003: Robot Soccer World Cup VII, Robot Soccer World Cup, Springer Verlag, Padua, Italy, pp. 757-764.
Robots interacting with other agents in dynamic environments require robust knowledge management capabilities if they are to communicate, learn and exhibit intelligent behaviour. Symbol grounding involves creating, and maintaining, the linkages between internal symbols used for decision making with the real world phenomena to which those symbols refer. We implement grounding using ontologies designed for the Semantic Web. We use SONY AIBO robots and the robot soccer domain to illustrate our approach. Ontologies can provide an important bridge between the perceptual level and the symbolic level and in so doing they can be used to ground sensory information. A major advantage of using ontologies to ground sensory and symbolic information is that they enhance interoperability, knowledge sharing, knowledge reuse and communication between agents. Once objects are grounded in ontologies, Semantic Web technologies can be used to access, build, derive, and manage robot knowledge.
Williams, M & Elliot, S 2004, 'Corporate Control of Rogue Traders', Multidisciplinary Solutions to Industry and Government's e-Business Challenges, Proceedings of the IFIP WG8.4 Working Conference on E-business, IFIP WG8.4 Working Conference on eBusiness, Trauner Verlag, Salzburg, Austria, pp. 123-141.
Abecker, A, Lipson, H, Antonsson, EK, Callaway, CB, Dignum, V, Doherty, P, Van Elst, L, Freed, M, Freedman, R, Guesgen, H, Jones, G, Koza, J, Kortenkamp, D, Maybury, M, McCarthy, J, Mitra, D, Renz, J, Schreckenghost, D & Williams, MA 2003, '2003 AAAI Spring Symposium Series', AI Magazine, pp. 131-139.
The American Association for Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2003 Spring Symposium Series. The titles of the symposia include agent-mediated knowledge management, computational synthesis, foundations and applications of spatiotemporal reasoning, and natural language generation in spoken and written dialogue. It was suggested that commonsense reasoning was required in a wide variety of systems, from autonomous lawn mowers to deep-space probes.
Karol, A & Williams, M 2003, 'Understanding Human Strategies for Change - An Empirical Study', IJCAI-03 Workshop on Nonmonotonic Reasoning, Action and Change (NRAC'03) Working Notes, International Joint Conference on Artificial Intelligence, Unknown, Acapulco, Mexico, pp. 118-123.
Karol, A, Nebel, B, Stanton, CJ & Williams, M 2003, 'Case Based Game Play in the RoboCup Four Legged League Part I The Theoretical Model', RoboCup2003. Proceedings of the International Symposium with Team Description Papers, Robot Soccer World Cup, University of Padova, Padua, Italy, pp. 1-8.
Lee, I & Williams, M 2003, 'Multi-level Clustering and Reasoning about its Clusters Using Region Connection Calculus', Advances in Knowledge Discovery and Data Mining. 7th Pacific-Asia Conference, PAKDD 2003 Proceedings, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer-Verlag Berlin Heidelberg, Seoul, Korea, pp. 283-294.
Tran, Q, Low, GC & Williams, M 2003, 'A Feature Analysis Framework for Evaluating Multi-Agent System Development Methodologies', Foundations of Intelligent Systems. 14th International Symposium, ISMIS 2003 Proceedings, International Symposium on Foundations of Intelligent Systems, Springer-Verlag Berlin Heidelberg, Maebashi City, Japan, pp. 1-5.
This paper proposes a comprehensive and multi-dimensional feature analysis framework for evaluating and comparing methodologies for developing multi-agent systems (MAS). Developed from a synthesis of various existing evaluation frameworks, the novelty of our framework lies in the high degree of its completeness and the relevance of its evaluation criteria. The paper also presents a pioneering effort in identifying the standard steps and concepts to be supported by a MAS-development process and models.
Tran, QNN, Low, G & Williams, MA 2003, 'A feature analysis framework for evaluating multi-agent system development methodologies', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 613-617.
This paper proposes a comprehensive and multi-dimensional feature analysis framework for evaluating and comparing methodologies for developing multi-agent systems (MAS). Developed from a synthesis of various existing evaluation frameworks, the novelty of our framework lies in the high degree of its completeness and the relevance of its evaluation criteria. The paper also presents a pioneering effort in identifying the standard steps and concepts to be supported by a MAS-development process and models.
Benferhat, S, Kaci, S, Le Berre, D & Williams, MA 2001, 'Weakening conflicting information for iterated revision and knowledge integration', IJCAI International Joint Conference on Artificial Intelligence, pp. 109-115.
The ability to handle exceptions, to perform iterated belief revision and to integrate information from multiple sources are essential skills for an intelligent agent. These important skills are related in the sense that they all rely on resolving inconsistent information. We develop a novel and useful strategy for conflict resolution, and compare and contrast it with existing strategies. Ideally the process of conflict resolution should conform with the principle of Minimal Change and should result in the minimal loss of information. Our approach to minimizing the loss of information is to weaken information involved in conflicts rather than completely removing it. We implemented and tested the relative performance of our new strategy in three different ways. We show that it retains more information than the existing Maxi-Adjustment strategy at no extra computational cost. Surprisingly, we are able to demonstrate that it provides a computationally effective compilation of the lexicographical strategy, a strategy which is known to have desirable theoretical properties.
Lang, J, Marquis, P & Williams, MA 2001, 'Updating epistemic states', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 297-308.
© Springer-Verlag Berlin Heidelberg 2001. Belief update is usually defined by means of operators acting on belief sets. We propose here belief update operators acting on epistemic states which convey much more information than belief sets since they express the relative plausibilities of the pieces of information believed by the agent. In the following, epistemic states are encoded as rankings on worlds. We extend a class of update operators (dependency- based updates) to epistemic states, by defining an operation playing the same role as knowledge transmutations  do for belief revision.
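The idea of an epistemic state as a ranking on worlds can be made concrete with a small sketch. This is our own toy example in the style of Spohn's ordinal conditional functions, not code from the paper; the two-variable vocabulary and the particular ranks are illustrative assumptions.

```python
from itertools import product

# An epistemic state encoded as a ranking (OCF) over worlds: lower rank
# means more plausible, and at least one world has rank 0.
worlds = list(product([True, False], repeat=2))  # world = (bird, flies)

kappa = {
    (True, True): 0,    # a flying bird: fully plausible
    (True, False): 2,   # a non-flying bird: quite surprising
    (False, True): 1,
    (False, False): 1,
}

def rank(prop, k):
    """Rank of a proposition: the rank of its most plausible world."""
    return min(k[w] for w in worlds if prop(w))

def condition(k, prop):
    """Condition the ranking on evidence `prop`: shift the surviving
    worlds so the best one has rank 0; worlds refuting the evidence
    become inaccessible (None)."""
    shift = rank(prop, k)
    return {w: (k[w] - shift if prop(w) else None) for w in worlds}

is_bird = lambda w: w[0]
after = condition(kappa, is_bird)   # the agent learns "it is a bird"
```

Because the whole ranking survives the change (not just the belief set), iterated changes remain well defined: the result of one conditioning is itself an epistemic state that the next operation can act on.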
Sinnappan, S, Williams, MA & Muthaly, S 2001, 'Agent based architecture for internet marketing', Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), pp. 158-169.
In the current highly competitive era of e-commerce, many firms are focusing their attention on forging and nurturing customer relationships. Businesses not only need to discover and capitalize on actionable marketing intelligence, but they also need to manipulate this valuable information to improve their relationship with their customer base. In this paper we describe an agent architecture paired with web mining techniques that provides personalized web sessions to nurture customer relationships by merging information from heterogeneous sources, dealing with change in a dynamic environment, and handling decisions that need to be made with information that is incomplete and/or uncertain. Our agents are based on the BDI framework, implemented using JACK and Java Server Pages. We envisage that our intelligent software agents inhabit a market space where they work for the benefit of businesses engaged in customer-to-business and business-to-business electronic commerce. The key contribution and focus of this paper is the development of an agent framework that ensures successful customer relationship management.
Williams, M & Gardenfors, P 2001, 'Reasoning about Categories in Conceptual Spaces', Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence, Morgan Kaufmann, Seattle Washington, pp. 385-392.
IJCAI is the most prestigious international conference in AI. This paper provided the first computational model for conceptual spaces which can be applied in cognitive agents.
Benferhat, S, Dubois, D, Prade, H & Williams, MA 1999, 'A practical approach to fusing prioritized knowledge bases', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 222-236.
© Springer-Verlag Berlin Heidelberg 1999. This paper investigates simple well behaved syntactic methods to fuse prioritized knowledge bases which are semantically meaningful in the frameworks of possibility theory and of Spohn's ordinal conditional functions. Different types of scales for priorities are discussed: finite vs. infinite, numerical vs. ordinal. Syntactic fusion is envisaged here as a process which combines prioritized knowledge bases into a new prioritized knowledge base, and thus allows for subsequent iteration. Several fusion operations are proposed, according to whether or not the sources are dependent, or conflicting, or sharing the same scale.
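For illustration, the conjunctive (min-based) fusion discussed above can be sketched at the semantic level, on possibility distributions over a toy set of worlds. The world names, values, and function name below are invented for the example; the paper itself works syntactically on prioritized knowledge bases:

```python
# Min-based (conjunctive) fusion of two possibility distributions.
# Illustrative sketch only; worlds and values are invented.

def conjunctive_fusion(pi1, pi2):
    """Combine two sources by taking the minimum possibility per world.
    Returns the fused distribution and h, the degree to which the two
    sources are mutually consistent (h < 1 signals partial conflict)."""
    fused = {w: min(pi1[w], pi2[w]) for w in pi1}
    h = max(fused.values())
    return fused, h

# Two sources ranking three worlds.
pi1 = {"w1": 1.0, "w2": 0.5, "w3": 0.2}
pi2 = {"w1": 0.4, "w2": 1.0, "w3": 0.2}
fused, h = conjunctive_fusion(pi1, pi2)
# fused == {"w1": 0.4, "w2": 0.5, "w3": 0.2}; h == 0.5
```

Here each source vetoes what the other finds implausible; the consistency degree h would feed the conflict-handling variants of fusion the paper proposes.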
Benferhat, S, Dubois, D, Prade, H & Williams, MA 1999, 'Practical approach to revising prioritized knowledge bases', International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, pp. 170-173.
This paper investigates simple syntactic methods to revise prioritized belief bases, that are semantically meaningful in the frameworks of possibility theory and of Spohn's ordinal conditional functions. Here, revising prioritized belief bases amounts to conditioning a distribution function on interpretations. Different types of scales for priorities are discussed: finite vs. infinite, numerical vs. ordinal. Syntactic revision is envisaged here as a process which transforms prioritized belief bases into a new prioritized belief base, and thus allows for the subsequent iteration.
Liu, W & Williams, MA 1999, 'A framework for multi-agent belief revision part I: The role of ontology', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 168-179.
© Springer-Verlag Berlin Heidelberg 1999. In this paper, we identify that failure to cater for the various forms of heterogeneity is one of the major drawbacks of previous research on multi-agent belief revision (MABR). Three major categories of heterogeneity, namely social, semantic and syntactic heterogeneity, are clarified. Several issues posed by such heterogeneities are addressed in the context of BR. The use of ontology is proposed as a powerful tool to tackle the heterogeneity issues so as to achieve the reliable communication and system interoperability required by MABR. The question of what kind of ontology would be suitable to support MABR in a heterogeneous setting is answered in Part I. In its sequel, Part II, a general framework for MABR is presented based on a shared knowledge structure which serves as the theoretical basis for ontology design.
Antoniou, G & Williams, MA 1998, 'Revising default theories', Proceedings of the International Conference on Tools with Artificial Intelligence, pp. 423-430.
Default logic is a prominent rigorous method of reasoning with incomplete information based on assumptions. It is a static reasoning approach, in the sense that it doesn't reason about changes and their consequences. On the other hand, its nonmonotonic behaviour appears when a change to a default theory is made. This paper studies the dynamic behaviour of default logic in the face of changes, a concept that we motivate by a reference to requirements engineering. The paper defines a contraction and a revision operator, and studies their properties. This work is part of an ongoing project whose aim is to build an integrated, domain-independent toolkit of logical methods for reasoning with changing and incomplete information. The techniques described in this paper will be implemented as part of the toolkit.
Antoniou, G & Williams, MA 1998, 'Some approaches to reasoning with incomplete and changing information', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 9-44.
Macnish, CK & Williams, MA 1998, 'From belief revision to design revision: Applying theory change to changing requirements', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 206-220.
© Springer-Verlag Berlin Heidelberg 1998. The ability to correctly analyse the impact of changes to system designs is an important goal in software engineering. A framework for addressing this problem has been proposed in which logical descriptions are developed alongside traditional representations. While changes to the resulting design have been considered, no formal framework for design change has been offered. This paper proposes such a framework using techniques from the field of belief revision. It is shown that under a particular strategy for belief revision, called a maxi-adjustment, design revisions can be modelled using standard revision operators. As such, the paper also offers a new area of application for belief revision. Previous attempts to apply belief revision theory have suffered from the criticism that deduced information is held on to more strongly than the facts from which it is derived. This criticism does not apply to the present application because we are concerned with goal decomposition rather than reasoning from facts, and it makes sense that goals should be held onto more strongly than the decompositions designed to achieve them.
Williams, MA 1998, 'Applications of belief revision', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 287-316.
Williams, MA 1997, 'Anytime belief revision', IJCAI International Joint Conference on Artificial Intelligence, pp. 74-79.
Belief Revision is a ubiquitous process underlying many forms of intelligent behaviour. The AGM paradigm is a powerful framework for modeling and implementing belief revision systems based on the principle of Minimal Change; it provides a rich and rigorous foundation for computer-based belief revision architectures. Maxi-adjustment is a belief revision strategy for theory bases that can be implemented using a standard theorem prover, and one that has been used successfully for several applications. In this paper we provide an anytime decision procedure for maxi-adjustments, and study its complexity. Furthermore, we outline a set of guidelines that serve as a protomethodology for building belief revision systems employing a maxi-adjustment. The algorithm is under development in the belief revision module of the CIN Project.
© 1997 IEEE. Belief revision is a fundamental process that underlies numerous forms of intelligent behaviour. Intelligent information systems must be adept at modifying their beliefs in a rational, coherent and consistent fashion. We show that iterated belief revision strategies can be implemented using standard relational database technology. We illustrate the main ideas using Spohn's (1988) notion of conditionalization. For the purpose of motivation we focus our attention on a problem in market research, namely modeling changes to consumer preferences. The key idea is that possible product profiles can be represented as tuples in a relational database, and the consumer's preference for products is captured using an ordinal ranking over the tuples. Using this representation iterated belief revision strategies can be implemented simply and efficiently. In particular, belief revision is performed using database transactions that modify this ranking.
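As a rough sketch of the idea in this abstract, Spohn-style conditionalization over ranked product profiles can be expressed in a few lines. The profile data and function names here are invented for illustration; the paper itself realises the ranking as tuples in a relational database and performs revision via database transactions:

```python
# Hedged sketch: iterated belief revision over ranked "product profiles"
# (Spohn-style ordinal conditionalization). Rank 0 = most plausible.

def conditionalize(ranking, prop, strength):
    """Revise by proposition `prop` with firmness `strength`: worlds
    satisfying `prop` are shifted down to start at rank 0, and the
    complement is pushed up to start at rank `strength`."""
    sat = {w: r for w, r in ranking.items() if prop(w)}
    unsat = {w: r for w, r in ranking.items() if not prop(w)}
    k_a = min(sat.values())      # rank of the proposition
    k_na = min(unsat.values())   # rank of its negation
    revised = {w: r - k_a for w, r in sat.items()}
    revised.update({w: strength + (r - k_na) for w, r in unsat.items()})
    return revised

# Product profiles as (colour, size) tuples, ranked by consumer preference.
profiles = {("red", "small"): 0, ("red", "large"): 1,
            ("blue", "small"): 2, ("blue", "large"): 3}

# Revise: the consumer now prefers blue, with firmness 2.
revised = conditionalize(profiles, lambda w: w[0] == "blue", 2)
# revised: blue-small -> 0, blue-large -> 1, red-small -> 2, red-large -> 3
```

In the database setting, each `revised[w] = ...` assignment would be an `UPDATE` on the rank column inside a single transaction, which is what makes iteration cheap.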
Antoniou, G & Williams, MA 1996, 'Default reasoning and belief revision in the CIN project', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 691-693.
Antoniou, G, Courtney, AP, Ernsttand, J & Williams, MA 1996, 'A system for computing constrained default logic extensions', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 237-250.
© Springer-Verlag Berlin Heidelberg 1996. The aim of this paper is to describe the algorithmic foundations of the part of the program Exten responsible for the computation of extensions in Constrained Default Logic. Exten is a system that computes extensions for various default logics. The efficiency of the system is increased by pruning techniques for the search tree. We motivate and present these techniques, and demonstrate that they can cut down the size of the search tree significantly. Quite importantly, they complement well the recently developed stratification method. This technique has to be modified to work properly with Constrained Default Logic, and we show how this can be done. Exten supports experimentation with default logic, allowing the user to set various parameters. It has also been designed to be open to future enhancements, which are supported by its object-oriented design. Exten is part of our long-term effort to develop an integrated toolkit for intelligent information management based on nonmonotonic reasoning and belief revision methods.
Williams, MA 1995, 'Conditionalizing expectations', ANZIIS 1995 - Proceedings of the 3rd Australian and New Zealand Conference on Intelligent Information Systems, pp. 111-116.
© 1995 IEEE. All rights reserved. An information system characterizes a view of the world. Typically this view is incomplete and subject to change; as a consequence, such systems use nonmonotonic reasoning to form expectations about the world, and they modify their expectations as new information is acquired. For instance, database systems use the closed world assumption, a very naive variety of nonmonotonic reasoning, and their expectations are updated when new information becomes available in the form of transactions. Gardenfors and Makinson have shown that nonmonotonic inferences can be constructed from a preference ordering of expectations. In this paper we adapt the process of conditionalization and techniques developed in the area of belief revision to handle changes in the nonmonotonic information encapsulated in an expectation ordering. This provides a mechanism for modeling the removal of old expectations, the incorporation of new expectations, as well as the raising and lowering of existing expectations. Changes to the expectation ordering using conditionalization are based on a relative measure of minimal change.
Williams, MA 1994, 'On the logic of theory base change', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 86-105.
© 1994, Springer Verlag. All rights reserved. Recently there has been considerable interest in change operators for theory bases, rather than entire theories, especially since such operators could support computer-based implementations of revision systems. However, a perceived problem associated with theory base change operators is their sensitivity to the syntax of the theory base used, although it has been argued that this sensitivity should reflect a higher level of commitment to formulae in the theory base than to formulae derivable from the theory base. In this paper we develop a logic of theory base change, using constructions based on ensconcements. We show that whenever two theory bases have equivalent ensconcements, the logical closures of their theory base revisions are identical. Moreover, we give explicit relationships associating theory base revision and theory base contraction, and provide explicit relationships between theory base change and theory change operations. We claim that these relationships show that our theory base revision and theory base contraction operators exhibit desirable behaviour.
Johnston, B, Shuard, C, Parajuli, P & Williams, M 2011, 'UTS Song and Dance', University of Technology Sydney.
Johnston, B & Williams, M 2010, 'Human-Robot Interactive Dance', Melbourne Exhibition Centre.
Williams, M & Johnston, B 2010, 'UTS Human-Robot Dance', University of Technology Sydney.
Williams, M 2009, 'Robots: Future Challenges', Powerhouse Museum.
Public Robot Performance
Instone, L, Mee, K, Palmer, JM, Williams, M & Vaughan, N National Climate Change Adaptation Research Facility 2013, Climate Change Adaptation and the Rental Sector, Gold Coast, Australia.
The research employed an asset-based approach to understanding the capacities, assets and skills which tenants, landlords and housing managers bring to climate change adaptation. The project also took a pro-poor approach focusing on the adaptive capacity of low-income renters in the public and private sectors, addressing the equity dimensions of vulnerability and adaptation. In addition to analysing a range of secondary sources such as media articles, 'green guides' and policy documents, the research analysed primary data from interviews and focus groups, focusing on: the assets of the rental sector in adaptation; barriers which limit the capacity of individuals and organisations to exercise these assets; and the relationships between the stakeholders (tenants, landlords and property managers) which underlie both assets and barriers to adaptation. We found that the tenants we interviewed were motivated by concern about the impact of human activity on the environment, and exercised this concern through everyday sustainable household practices, as well as through engagement with community or political organisations. They believed, however, that their capacity to act in the home was inhibited by a lack of care from some landlords and property managers about the sustainability of rental housing. Public housing managers who were interviewed positioned the public housing sector as policy leaders in sustainability and adaptation, but as constrained by a lack of resources (human and financial) and the busy reactive nature of their work. Busyness and lack of resources were also seen as a constraint on private property managers' capacity to advocate or arrange for sustainability modifications to the properties they managed. Property managers emerged as crucial 'knowledge brokers' mediating between landlords and tenants, but expressed a need for more information and training. Both tenants and property managers acknowledged that the current shortage of rental housing in ...
Cloud Robotics (CR) is an emerging and successful approach to robotics. The number of robots and other IoT devices may increase drastically in the future, which could demand enormous bandwidth and raise security concerns. If robots in CR are not secured, they can even be turned into surveillance bots by hackers. Moreover, if the internet connection is lost due to network hitches, a robot may be unavailable at a crucial moment to complete its given task. For example, a robot assisting a person could stop working unexpectedly, or act on instructions from a hacker. To address such problems, this paper proposes a new approach to robotics, Fog Robotics (FR), so that a network of robots can be used more securely and efficiently than in CR.
- United Nations
- Stanford University
- Commonwealth Bank of Australia
- Visual Risk