Dr William L Raffe is a Senior Lecturer in the School of Computer Science, Co-Director of the Games Studio research lab, Program Coordinator of the Bachelor of Science in Game Development, and a member of the Faculty Board for the Faculty of Engineering and IT. He specializes in Computational Intelligence in Games research and is more broadly interested in a range of topics from the research fields of Game Design (for Health, Environment, and Education), Machine Learning, Evolutionary Computing, Mixed Reality, and Human-Computer Interaction. William created and co-organizes the UTS Autumn Games Showcase and the UTS Student Games Jam to highlight the strengths of UTS games students and to engage the local game development industry. He also coordinates and lectures game design and programming subjects as part of the Bachelor of Science in Game Development.
William was awarded his PhD in Computer Science in 2014 and his Bachelor of Computer Science (Honours) in 2009, both from RMIT University in Melbourne, Australia. He publishes widely in international peer-reviewed venues, serves on the program committees of many of these, and is a Co-Chair of the Interactive Entertainment conference. William has been an invited panellist at the CeBIT Business and Technology Conference and often speaks at UTS STEM outreach events, open days, and research seminars.
William has worked closely with executives and theme park managers of Village Roadshow Ltd. while working as a Research Fellow on an ARC Linkage Grant (LP120100743). He is a former member of the RMIT Centre for Game Design Research, the RMIT Evolutionary Computing and Machine Learning Group, and the Exertion Games Lab. He has also been a member of the Golden Key International Honour Society since 2006.
Can supervise: YES
William's primary research focus surrounds the application of the data sciences (machine learning, metaheuristic optimisation, and data analytics) to game design, player modelling, artificial game-playing agents, and game authoring tools. In recent years, this has also incorporated human-computer interaction principles, mixed reality technologies (augmented and virtual), and game design for applications in health, education, and environmental modelling.
William supervises honours, masters, and PhD research projects in these fields and more broadly in the topics of games, machine learning, computer graphics, or human-computer interaction.
William is currently the lecturer and subject coordinator for:
- 31262 / 32003 - Computer Game Design
- 31263 / 32004 - Computer Game Programming
- 31264 / 32501 - Introduction to Computer Graphics
- 31248 - Games and Graphics Project
He also supervises students in minor research subjects and student work placements.
William is also the Program Coordinator of the Bachelor of Science in Game Development degree.
Tamassia, M, Zambetta, F, Raffe, WL, Mueller, FF & Li, X 2018, 'Learning Options From Demonstrations: A Pac-Man Case Study', IEEE Transactions on Games, vol. 10, no. 1, pp. 91-96.
Raffe, WL, Zambetta, F, Li, X & Stanley, KO 2015, 'Integrated Approach to Personalized Procedural Map Generation Using Evolutionary Algorithms', IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 2, pp. 139-155.
In digital games, the map (sometimes referred to as the level) is the virtual environment that outlines the boundaries of play, aids in establishing rule systems, and supports the narrative. It also directly influences the challenges that a player will experience and the pace of gameplay, a property that has previously been linked to a player's enjoyment of a game. In most industry-leading games, creating maps is a lengthy manual process conducted by highly trained teams of designers. However, for many decades procedural content generation (PCG) techniques have been proposed as an alternative, offering players a larger range of experiences than would normally be possible. In recent years, PCG has even been proposed as a means of tailoring game content to meet the preferences and skills of a specific player, in what has been termed Experience-driven PCG (EDPCG).
Taghikhah, F, Raffe, WL, Mitri, G, Du Toit, S, Voinov, A & Garcia, JA 2019, 'Last Island: Exploring Transitions to Sustainable Futures through Play', Proceedings of the Australasian Computer Science Week Multiconference (ACSW 2019), ACM, Sydney, Australia.
Demediuk, S, Tamassia, M, Li, X & Raffe, WL 2019, 'Challenging AI: Evaluating the Effect of MCTS-Driven Dynamic Difficulty Adjustment on Player Enjoyment', Proceedings of the Australasian Computer Science Week Multiconference (ACSW 2019), ACM, Sydney, Australia.
Demediuk, S, Tamassia, M, Raffe, WL, Zambetta, F, Mueller, FF & Li, X 2018, 'Measuring player skill using dynamic difficulty adjustment', Proceedings of the Australasian Computer Science Week Multiconference, Brisbane, Queensland.
© 2018 ACM. Video games have a long history of use for educational and training purposes, as they provide increased motivation and learning for players. One limitation of using video games in this manner is that players still need to be tested outside of the game environment to assess their learning outcomes. Traditionally, determining a player's skill level in a competitive game requires players to compete directly with each other. Through the application of the Adaptive Training Framework, this work presents a novel method for determining the skill level of a player after each interaction with the video game. This is done by measuring the effort of a Dynamic Difficulty Adjustment agent, without the need for direct competition between players. The experiments conducted in this research show that by measuring the player's Heuristic Value Average, we can obtain the same ranking of players as state-of-the-art ranking systems, without the need for direct competition.
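The skill measure described above can be caricatured in a few lines: average the per-turn heuristic evaluations recorded from the adaptive agent, then rank players by that average. This is a minimal sketch under that reading of the abstract; the player names and session values are invented, not data from the paper.

```python
def heuristic_value_average(values):
    """Mean of the DDA agent's per-turn heuristic evaluations for one session.

    Intuition (per the abstract): the harder the adaptive agent has to work
    to keep the match balanced, the stronger the player.
    """
    return sum(values) / len(values)

def rank_players(sessions):
    """Rank players by Heuristic Value Average, strongest first."""
    return sorted(sessions,
                  key=lambda p: heuristic_value_average(sessions[p]),
                  reverse=True)

# Hypothetical per-turn heuristic values recorded against each player.
sessions = {"novice": [0.1, 0.2, 0.15], "veteran": [0.7, 0.9, 0.8]}
ranking = rank_players(sessions)
```

The point of the ranking is that it needs no head-to-head matches: each player only ever plays against the adaptive agent.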
Garcia, JA, Raffe, WL & Navarro, KF 2018, 'Assessing user engagement with a fall prevention game as an unsupervised exercise program for older people', Proceedings of the Australasian Computer Science Week Multiconference (ACSW '18), ACM, Brisbane, Queensland, Australia, pp. 1-8.
© 2018 ACM. Falling is, unfortunately, a leading cause of injury and death in the global elderly population. However, it has previously been shown that increased physical and cognitive activity can decrease the occurrence of falls in the elderly. This paper investigates the potential for a long-term, unsupervised fall prevention training tool in the form of the StepKinnection game, which was designed to exercise both reflex times and movement speed while also providing entertainment. Specifically, this game was used in a three-month user study consisting of 10 participants over the age of 65. Adherence to the training program, enjoyment of the game, and ease of use of the game were investigated using a custom usability questionnaire, four established usability scales, heuristic evaluation of gameplay data, and semi-structured interviews. Results show that participants generally had positive attitudes towards the game, felt that they would engage with this training program more than with their current exercises, and found the game easy to use without guidance or supervision beyond the initial set-up support and instructions provided at the start of the experiment period.
Demediuk, S, Murrin, A, Bulger, D, Hitchens, M, Drachen, A, Raffe, WL & Tamassia, M 2018, 'Player retention in League of Legends: A study using survival analysis', Proceedings of the Australasian Computer Science Week Multiconference, ACM, Brisbane, Queensland, Australia.
© 2018 ACM. Multi-player online esports games are designed for extended durations of play and require substantial experience to master. Furthermore, esports game revenues are increasingly driven by in-game purchases. For esports companies, the trends in players leaving their games therefore not only provide information about potential problems in the user experience, but also impact revenue. Being able to predict when players are about to leave the game (churn prediction) is therefore an important capability for companies in the rapidly growing esports sector, as it allows them to take action to remedy churn problems. The objective of the work presented here is to understand the impact of specific behavioral characteristics on the likelihood of a player continuing to play the esports title League of Legends. A solution is presented based on the application of survival analysis, using Mixed Effects Cox Regression, to predict player churn. Survival analysis forms a useful approach to the churn prediction problem as it provides churn rates as well as an assessment of the characteristics of players who are at risk of leaving the game. Hazard rates are also presented for the leading indicators, with results showing that the duration between matches played is a strong indicator of potential churn.
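The paper's model is a Mixed Effects Cox Regression; as a simpler illustration of the survival-analysis framing it builds on, a Kaplan-Meier retention curve can be estimated directly from play histories. This is a textbook sketch with made-up data, not the authors' pipeline: `durations` and `churned` are hypothetical per-player records.

```python
def kaplan_meier(durations, churned):
    """Kaplan-Meier survival curve: P(player still active after time t).

    durations: observation time per player (e.g. days of play history)
    churned:   True if the player left (event), False if censored (still active)
    """
    curve = []
    survival = 1.0
    for t in sorted({d for d, e in zip(durations, churned) if e}):
        at_risk = sum(1 for d in durations if d >= t)          # still in the game at t
        events = sum(1 for d, e in zip(durations, churned) if e and d == t)
        survival *= 1.0 - events / at_risk                     # KM product-limit step
        curve.append((t, survival))
    return curve

# Four hypothetical players: three churned at days 1, 2, 3; one censored at day 2.
curve = kaplan_meier([1, 2, 2, 3], [True, True, False, True])
```

A Cox model goes further by relating each player's hazard rate to behavioral covariates (such as time between matches), which is what makes the at-risk assessment in the abstract possible.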
Raffe, WL & Garcia, JA 2018, 'Combining skeletal tracking and virtual reality for game-based fall prevention training for the elderly', 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH 2018), IEEE, Vienna, Austria, pp. 1-7.
© 2018 IEEE. This paper provides a preliminary appraisal of combining commercial skeletal tracking and virtual reality technologies for the purposes of innovative gameplay interfaces in fall prevention exergames for the elderly. This work uses the previously published StepKinnection game, which used skeletal tracking with a flat screen monitor, as a primary point of comparison for the proposed combination of these interaction modalities. Here, a Microsoft Kinect is used to track the player's skeleton and represent it as an avatar in the virtual environment while the HTC Vive is used for head tracking and virtual reality visualization. Multiple avatar positioning modes are trialled and discussed via a small self-reflective study (with the authors as participants) to examine their ability to allow accurate stepping motions, maintain physical comfort, and encourage self-identification or empathy with the avatar. While this is just an initial study, it highlights promising opportunities for designing engaging step training games with this integrated interface, but also highlights its limitations, especially in the context of an unsupervised exercise program for older people in independent living situations.
Tamassia, M, Raffe, W, Sifa, R, Drachen, A, Zambetta, F & Hitchens, M 2016, 'Predicting player churn in Destiny: A Hidden Markov models approach to predicting player departure in a major online game', IEEE Conference on Computational Intelligence and Games (CIG), Santorini, Greece.
© 2016 IEEE. Destiny is, to date, the most expensive digital game ever released, with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and a massively multi-player online game, and has attracted tens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate the area under the ROC curve (AUC) of behavioral features, and use Hidden Markov Models to develop a churn prediction model for the game.
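To give a flavour of the HMM machinery involved, the forward algorithm below scores an observation sequence under a toy two-state "engaged"/"at-risk" model. All state names, probabilities, and observations here are invented for illustration; the paper's actual model and behavioral features are far richer.

```python
def forward_probability(obs, start, trans, emit):
    """Forward algorithm: likelihood of an observation sequence under an HMM."""
    states = list(start)
    # Initialise with the first observation.
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    # Fold in each subsequent observation.
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical model: hidden engagement state, observed daily play behaviour.
start = {"engaged": 0.8, "at_risk": 0.2}
trans = {"engaged": {"engaged": 0.9, "at_risk": 0.1},
         "at_risk": {"engaged": 0.2, "at_risk": 0.8}}
emit = {"engaged": {"play": 0.9, "skip": 0.1},
        "at_risk": {"play": 0.3, "skip": 0.7}}

p = forward_probability(["play", "skip"], start, trans, emit)
```

In a churn setting, the appeal of an HMM is exactly this latent-state view: observed play sessions are noisy evidence about an unobserved engagement state that drifts over time.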
Demediuk, S, Tamassia, M, Raffe, WL, Zambetta, F, Li, X & Mueller, F 2017, 'Monte Carlo tree search based algorithms for dynamic difficulty adjustment', 2017 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, New York, NY, USA.
Maintaining player immersion is a crucial step in making an enjoyable video game. One aspect of player immersion is the level of challenge the game presents to the player. To avoid a mismatch between a player's skill and the challenge of a game, which can result from traditional manual difficulty selection mechanisms (e.g. easy, medium, hard), Dynamic Difficulty Adjustment (DDA) has previously been proposed as a means of automatically detecting a player's skill and adjusting the level of challenge the game presents accordingly. This work contributes to the field of DDA by proposing a novel approach to artificially intelligent agents for opponent control. Specifically, we propose four new DDA Artificially Intelligent (AI) agents: Reactive Outcome Sensitive Action Selection (Reactive OSAS), Proactive OSAS, and their "True" variants. These agents provide the player with a level of difficulty tailored to their skill in real time by altering the action selection policy and the heuristic playout evaluation of Monte Carlo Tree Search. The DDA AI agents are tested within the FightingICE engine, which has been used in the past as an environment for AI agent competitions. The results of experiments against other AI agents and human players show that these novel DDA AI agents can adjust the level of difficulty in real time by targeting a zero health difference as the outcome of the fighting game. This work also demonstrates the trade-off between targeting the outcome exactly (Reactive OSAS) and introducing proactive behaviour (i.e., the DDA AI agent fights even when the health difference is zero) to increase the agent's believability (Proactive OSAS).
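The outcome-sensitive idea can be reduced to a couple of lines: among the candidate actions MCTS has evaluated, prefer the one whose predicted end-of-round health difference is closest to zero. This is a deliberately tiny sketch of that selection rule, not the FightingICE agents themselves; the action names and predicted values are hypothetical.

```python
def osas_select(predicted_health_diff):
    """Outcome-sensitive selection (sketch): pick the action whose predicted
    (agent HP - player HP) at the end of the round is nearest to zero.

    predicted_health_diff: action -> predicted health difference, e.g. as
    estimated by MCTS playouts. A value near zero means an even match.
    """
    return min(predicted_health_diff,
               key=lambda a: abs(predicted_health_diff[a]))

# Hypothetical MCTS estimates for three candidate actions.
choice = osas_select({"heavy_attack": -35, "block": 5, "retreat": 40})
```

The Proactive variants in the paper exist precisely because always steering toward zero can look passive; biasing toward aggressive actions when the match is already even keeps the agent believable.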
Demediuk, S, Raffe, WL & Li, X 2016, 'An adaptive training framework for increasing player proficiency in games and simulations', Proceedings of the Annual Symposium on Computer-Human Interaction in Play Companion, ACM, Austin, USA, pp. 125-131.
To improve a player's proficiency at a particular video game, the player must be presented with an appropriate level of challenge. This level of challenge must remain relative to the player as their proficiency changes. Current fixed difficulty settings (e.g. easy, medium, or hard) provide a limited range of difficulty for the player. This work aims to address this problem by developing an adaptive training framework that utilises existing work in Dynamic Difficulty Adjustment to construct an adaptive AI opponent. The framework also provides a way to measure the player's proficiency, by analysing the level of challenge the adaptive AI opponent provides for the player. This work tests part of the proposed adaptive training framework through a pilot study that uses a real-time fighting game. Copyright is held by the owner/author(s).
Tamassia, M, Zambetta, F, Raffe, WL, Mueller, FF & Li, X 2016, 'Dynamic choice of state abstraction in Q-learning', Frontiers in Artificial Intelligence and Applications, European Conference on Artificial Intelligence, IOS Press, The Hague, Netherlands, pp. 46-54.
© 2016 The Authors and IOS Press. Q-learning associates states and actions of a Markov Decision Process with expected future reward through online learning. In practice, however, when the state space is large and experience is still limited, the algorithm will not find a match between the current state and experience unless some details describing states are ignored. On the other hand, reducing state information affects long-term performance because decisions will need to be made on less informative inputs. We propose a variation of Q-learning that gradually enriches state descriptions after enough experience is accumulated. This is coupled with an ad-hoc exploration strategy that aims at collecting key information, allowing the algorithm to enrich state descriptions earlier. Experimental results obtained by applying our algorithm to the arcade game Pac-Man show that our approach significantly outperforms Q-learning during the learning process while not penalizing long-term performance.
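The tabular Q-learning update at the heart of this approach, together with a toy "state abstraction" that keeps only the first k features of a state, might look like the following. This is a generic sketch, not the authors' Pac-Man implementation; the feature names and the truncation-based abstraction are illustrative stand-ins.

```python
def abstract(state, detail):
    """Coarser state description: keep only the first `detail` features.

    The paper's idea is to start with a coarse description (small `detail`)
    and enrich it once enough experience has accumulated.
    """
    return state[:detail]

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning update on a dict-backed table (default value 0)."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
# Hypothetical 3-feature Pac-Man state, abstracted down to its first 2 features.
s = abstract(("ghost_near", "pellet_left", "corner"), 2)
q_update(Q, s, "flee", 1.0, s, ["flee", "eat"])
```

Because the table is keyed on the abstracted state, many raw states share experience early on; enriching the description later trades that generalization for more informative decisions.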
Raffe, WL, Tamassia, M, Zambetta, F, Li, X & Mueller, FF 2015, 'Enhancing theme park experiences through adaptive cyber-physical play', 2015 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, Tainan, Taiwan, pp. 503-510.
© 2015 IEEE. In this vision paper we explore the potential for enhancing theme parks through the introduction of adaptive cyber-physical attractions: physical attractions controlled by a digital system that takes participants' actions as input and, in turn, alters the participants' experiences. This paper is divided into three main parts: 1) a look at the types of attractions that a typical theme park may offer and, from this, the identification of a gap in an agency-versus-structure spectrum that recent research and industry developments are starting to fill; 2) a discussion of the advantages that cyber-physical play has in filling this gap and a few examples of envisioned future attractions; and 3) how such cyber-physical play can uniquely allow for adaptive attractions, whereby the physical attraction is personalized to suit the capabilities or preferences of the current attraction participants, as well as some foreseeable design considerations and challenges in doing so. Through the combination of these three parts, we hope to promote further research into augmenting theme parks with adaptive cyber-physical play attractions.
Ivanovo, J, Raffe, WL, Zambetta, F & Li, X 2015, 'Combining Monte Carlo tree search and apprenticeship learning for capture the flag', 2015 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, Tainan, Taiwan, pp. 154-161.
© 2015 IEEE. In this paper we introduce a novel approach to agent control in competitive video games which combines Monte Carlo Tree Search (MCTS) and Apprenticeship Learning (AL). More specifically, an opponent model created through AL is used during the expansion phase of the Upper Confidence Bounds for Trees (UCT) variant of MCTS. We show how this approach can be applied to a game of Capture the Flag (CTF), an environment which is both non-deterministic and partially observable. The performance gain of a controller utilizing an opponent model learned via AL when compared to a controller using just UCT is shown both with win/loss ratios and True Skill rankings. Additionally, we build on previous findings by providing evidence of a bias towards a particular style of play in the AI Sandbox CTF environment. We believe that the approach highlighted here can be extended to a wider range of games other than just CTF.
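For readers unfamiliar with UCT, the selection step it builds on is UCB1: each child node is scored by its mean reward plus an exploration bonus that shrinks with visits, and the paper's contribution is to bias the expansion phase with the learned opponent model. Below is a minimal, generic UCB1 sketch, not the CTF controller; the node names and statistics are invented.

```python
import math

def ucb1_select(children, c=1.41):
    """UCB1 selection over child nodes of an MCTS tree.

    children: node -> (total_reward, visit_count)
    Returns the child balancing exploitation (mean reward) against
    exploration (rarely visited nodes get a larger bonus).
    """
    total_visits = sum(v for _, v in children.values())
    def score(node):
        reward, visits = children[node]
        return reward / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=score)

# A well-explored mediocre child vs. a barely-explored promising one:
# the exploration term steers search toward the under-visited node.
pick = ucb1_select({"camp_flag": (2.0, 20), "rush_base": (1.5, 3)})
```

In the paper's setting, the apprenticeship-learned opponent model then shapes which actions are considered when a selected node is expanded, rather than replacing this selection rule.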
Raffe, WL, Tamassia, M, Zambetta, F, Li, X, Pell, SJ & Mueller, FF 2015, 'Player-computer interaction features for designing digital play experiences across six degrees of water contact', Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play, ACM, London, United Kingdom, pp. 295-306.
© 2015 ACM. Physical games that involve water or are played in a water environment can be found in many cultures throughout history. However, these experiences have yet to see much benefit from advancements in digital technology. With advances in waterproof interactive technology, we see great potential for digital water play. This paper provides a guide for commencing projects that aim to design and develop digital water-play experiences. A series of interaction features are provided as a result of reflecting on prior work as well as our own practice in designing playful experiences for water environments. These features are examined in terms of the effect that water has on them in relation to a taxonomy of six degrees of water contact, ranging from the player being in the vicinity of water to them being completely underwater. The intent of this paper is to prompt forward thinking in the prototype design phase of digital water-play experiences, allowing designers to learn and gain inspiration from similar past projects before development begins.
Tamassia, M, Zambetta, F, Raffe, W & Li, X 2015, 'Learning options for an MDP from demonstrations', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Conference on Artificial Life and Computational Intelligence, Springer, Newcastle, NSW, Australia, pp. 226-242.
© Springer International Publishing Switzerland 2015. The options framework provides a foundation for using hierarchical actions in reinforcement learning. An agent using options can, at any point in time, choose to perform a macro-action composed of many primitive actions rather than a single primitive action. Such macro-actions can be hand-crafted or learned, and there has been previous work on learning them by exploring the environment. Here we take a different perspective and present an approach to learning options from a set of expert demonstrations. Empirical results are also presented in a setting similar to the one used in other works in this area.
Raffe, WL, Zambetta, F & Li, X 2013, 'Neuroevolution of content layout in the PCG: Angry Bots video game', 2013 IEEE Congress on Evolutionary Computation (CEC 2013), IEEE, Cancun, Mexico, pp. 673-680.
This paper demonstrates an approach to arranging content within maps of an action-shooter game. Content here refers to any virtual entity that a player will interact with during gameplay, including enemies and pick-ups. The content layout for a map is indirectly represented by a Compositional Pattern-Producing Network (CPPN), which is evolved through the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. This representation is utilized within a complete procedural map generation system in the game PCG: Angry Bots. In this game, after a player has experienced a map, a recommender system is used to capture their feedback and construct a player model to evaluate future generations of CPPNs. The result is a content layout scheme that is optimized to the preferences and skill of an individual player. We provide a series of case studies that demonstrate the system as it is being used by various types of players. © 2013 IEEE.
Raffe, WL, Zambetta, F & Li, X 2012, 'A Survey of Procedural Terrain Generation Techniques using Evolutionary Algorithms', 2012 IEEE Congress on Evolutionary Computation (CEC), IEEE, Brisbane, Australia.
Raffe, WL, Zambetta, F & Li, X 2011, 'Evolving patch-based terrains for use in video games', GECCO '11: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, ACM, Dublin, Ireland, pp. 363-370.
Procedurally generating content for video games is gaining interest as an approach to mitigate rising development costs and meet users' expectations for a broader range of experiences. This paper explores the use of evolutionary algorithms to aid in the content generation process, especially the creation of three-dimensional terrain. We outline a prototype for the generation of in-game terrain by compiling smaller height-map patches that have been extracted from sample maps. Evolutionary algorithms are applied to this generation process by using crossover and mutation to evolve the layout of the patches. This paper demonstrates the benefits of an interactive two-level parent selection mechanism as well as how to seamlessly stitch patches of terrain together. This unique patch-based terrain model enhances control over the evolution process, allowing for terrain to be refined more intuitively to meet the user's expectations. Copyright 2011 ACM.
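The evolutionary operators described above amount to recombining and perturbing grids of patch IDs. The sketch below shows generic one-point crossover and per-gene mutation over a flattened patch layout; the patch names and pool are invented for illustration, and the paper's actual two-level parent selection and patch stitching are not modelled here.

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover over a flattened grid of terrain-patch IDs."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(layout, patch_pool, rate=0.1):
    """Replace each patch with a random one from the pool with probability `rate`."""
    return [random.choice(patch_pool) if random.random() < rate else p
            for p in layout]

random.seed(7)
parent_a = ["hill"] * 8   # one parent layout of 8 height-map patches
parent_b = ["lake"] * 8   # a second parent layout
child = mutate(crossover(parent_a, parent_b), ["hill", "lake", "cliff"])
```

In the paper, fitness for such children comes interactively from the user, which is what lets the terrain be refined toward the user's expectations rather than a fixed objective.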
Raffe, W, Hu, J, Zambetta, F & Xi, K 2010, 'A dual-layer clustering scheme for real-time identification of plagiarized massive multiplayer games (MMG) assets', Proceedings of the 2010 5th IEEE Conference on Industrial Electronics and Applications (ICIEA 2010), pp. 307-312.
Theft of virtual assets in massive multiplayer games (MMG) is a significant issue. Conventional image-based pattern and object recognition techniques are becoming more effective at identifying copied objects, but few results are available for effectively identifying plagiarized objects that may have been modified from the originals, especially in real-time environments where large samples of objects are present. In this paper we present a dual-layer clustering algorithm for efficient identification of plagiarized MMG objects in an environment where real-time conditions, modified objects, and large samples of objects are present. The proposed scheme utilizes a concept of effective pixel banding for the first clustering pass and then uses a Hausdorff Distance mechanism for further clustering. The experimental results demonstrate that our method drastically reduces execution time while achieving a good identification rate, with a genuine acceptance rate of 88%. © 2010 IEEE.
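The second clustering pass relies on the Hausdorff distance between shapes. In its basic form over 2-D point sets it can be computed as below; this is a textbook sketch, not the paper's optimized implementation, and the example point sets are invented.

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets.

    For each point in one set, find its nearest neighbour in the other;
    the Hausdorff distance is the worst such nearest-neighbour distance,
    taken in both directions. Small values mean the shapes nearly overlap.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        return max(min(dist(x, y) for y in ys) for x in xs)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# Two outlines that are identical except for one displaced vertex.
d = hausdorff([(0, 0), (1, 0)], [(0, 0), (3, 0)])
```

Because a slightly modified asset keeps most of its outline points near the original's, its Hausdorff distance to the original stays small, which is what makes the measure useful for catching modified plagiarized objects; the pixel-banding first pass exists to avoid paying this pairwise cost over the whole asset sample.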