Dr William L. Raffe is a Senior Lecturer in the School of Software, Co-Director of the Games Studio research lab, Program Coordinator of the Bachelor of Science in Game Development, and a member of the Faculty Board for the Faculty of Engineering and IT. He specializes in Computational Intelligence in Games research and is more generally interested in a range of topics spanning Game Design (for Health, Environment, and Education), Machine Learning, Evolutionary Computing, Mixed Reality, and Human-Computer Interaction. William created and co-organizes the UTS Autumn Games Showcase and the UTS Student Games Jam to highlight the strengths of UTS games students and to engage the local game development industry. William also coordinates and lectures game design and programming subjects as part of the Bachelor of Science in Game Development.
William was awarded his PhD in Computer Science in 2014 and his Bachelor of Computer Science (Honours) in 2009, both from RMIT University in Melbourne, Australia. He publishes widely in international peer-reviewed venues, serves on the Program Committee for many of these, and is a Co-Chair of the Interactive Entertainment conference. William has been an invited panellist at the CeBIT Business and Technology Conference and often speaks at UTS STEM outreach events, open days, and research seminars.
William has worked closely with executives and theme park managers of Village Roadshow Ltd. while serving as a Research Fellow on an ARC Linkage Grant (LP120100743). He is a former member of the RMIT Centre for Game Design Research, the RMIT Evolutionary Computing and Machine Learning Group, and the Exertion Games Lab. He has also been a member of the Golden Key International Honour Society since 2006.
William's primary research focus is the application of the data sciences (machine learning, metaheuristic optimisation, and data analytics) to game design, player modelling, artificial game-playing agents, and game authoring tools. In recent years, this has also incorporated human-computer interaction principles, mixed reality technologies (augmented and virtual reality), and game design for applications in health, education, and environmental modelling.
William supervises honours, masters, and PhD research projects in these fields and more broadly in the topics of games, machine learning, computer graphics, or human-computer interaction.
William is currently the lecturer and subject coordinator for:
- 31262 / 32003 - Computer Game Design
- 31263 / 32004 - Computer Game Programming
- 31264 / 32501 - Introduction to Computer Graphics
- 31248 - Games and Graphics Project
He also supervises students in minor research subjects and student work placements.
William is also the Program Coordinator of the Bachelor of Science in Game Development degree.
Tamassia, M., Zambetta, F., Raffe, W., Mueller, F. & Li, X. 2017, 'Learning Options from Demonstrations: A Pac-Man Case Study', IEEE Transactions on Computational Intelligence and AI in Games, pp. 1-1.
Raffe, W.L., Zambetta, F., Li, X. & Stanley, K.O. 2015, 'Integrated Approach to Personalized Procedural Map Generation Using Evolutionary Algorithms', IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 2, pp. 139-155.
© 2015 IEEE. In this paper, we propose the strategy of integrating multiple evolutionary processes for personalized procedural content generation (PCG). In this vein, we provide a concrete solution that personalizes game maps in a top-down action-shooter game to suit an individual player's preferences. The need for personalized PCG is steadily growing as the player market diversifies, making it more difficult to design a game that will accommodate a broad range of preferences and skills. In the solution presented here, the geometry of the map and the density of content within that geometry are represented and generated in distinct evolutionary processes, with the player's preferences being captured and utilized through a combination of interactive evolution and a player model formulated as a recommender system. All these components were implemented into a test bed game and experimented on through an unsupervised public experiment. The solution is examined against a plausible random baseline that is comparable to random map generators that have been implemented by independent game developers. Results indicate that the system as a whole is receiving better ratings, that the geometry and content evolutionary processes are exploring more of the solution space, and that the mean prediction accuracy of the player preference models is equivalent to that of existing recommender system literature. Furthermore, we discuss how each of the individual solutions can be used with other game genres and content types.
In digital games, the map (sometimes referred to as the level) is the virtual environment that outlines the boundaries of play, aids in establishing rule systems, and supports the narrative. It also directly influences the challenges that a player will experience and the pace of gameplay, a property that has previously been linked to a player's enjoyment of a game. In most industry-leading games, creating maps is a lengthy manual process conducted by highly trained teams of designers. However, for many decades procedural content generation (PCG) techniques have offered an alternative, providing players with a larger range of experiences than would normally be possible. In recent years, PCG has even been proposed as a means of tailoring game content to meet the preferences and skills of a specific player, in what has been termed Experience-Driven PCG (EDPCG).
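The interactive-evolution loop at the heart of this personalized map generation work can be illustrated with a minimal sketch. Everything here is hypothetical: the genome is a simple list of content densities, and `predict_rating` is a toy stand-in for the recommender-system player model described in the paper.

```python
import random

# Toy "player model": predicts a rating for a candidate map from its feature
# vector, here just closeness to the player's inferred preferred features.
# (Hypothetical stand-in for the recommender system used in the paper.)
def predict_rating(features, preferred):
    return -sum((f - p) ** 2 for f, p in zip(features, preferred))

def mutate(genome, rate=0.3):
    # Nudge each encoded content density up or down with some probability.
    return [max(0, g + random.choice((-1, 0, 1))) if random.random() < rate else g
            for g in genome]

def evolve_maps(preferred, pop_size=20, generations=50, seed=0):
    random.seed(seed)
    # Each genome encodes content densities for regions of a candidate map.
    population = [[random.randint(0, 10) for _ in preferred] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by the player model's predicted rating, keep the best half,
        # and refill the population with mutated copies of the survivors.
        population.sort(key=lambda g: predict_rating(g, preferred), reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=lambda g: predict_rating(g, preferred))

best = evolve_maps(preferred=[3, 7, 1])
```

In the paper the map geometry and the content density are evolved in two separate processes and the player model is learned from ratings; this sketch collapses all of that into one loop to show the general shape of model-guided evolution.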
Demediuk, S., Tamassia, M., Raffe, W.L., Zambetta, F., Mueller, F.F. & Li, X. 2018, 'Measuring player skill using dynamic difficulty adjustment', ACM International Conference Proceeding Series.
© 2018 ACM. Video games have a long history of use for educational and training purposes, as they provide increased motivation and learning for players. One limitation of using video games in this manner is that players still need to be tested outside of the game environment to assess their learning outcomes. Traditionally, determining a player's skill level in a competitive game requires players to compete directly with each other. Through the application of the Adaptive Training Framework, this work presents a novel method to determine the skill level of the player after each interaction with the video game. This is done by measuring the effort of a Dynamic Difficulty Adjustment agent, without the need for direct competition between players. The experiments conducted in this research show that by measuring the player's Heuristic Value Average, we can obtain the same ranking of players as state-of-the-art ranking systems, without the need for direct competition.
Demediuk, S., Tamassia, M., Raffe, W.L., Zambetta, F., Li, X. & Mueller, F. 2017, 'Monte Carlo tree search based algorithms for dynamic difficulty adjustment', Computational Intelligence and Games (CIG), 2017 IEEE Conference on, IEEE, New York, NY, USA.
Maintaining player immersion is a crucial step in making an enjoyable video game. One aspect of player immersion is the level of challenge the game presents to the player. To avoid a mismatch between a player's skill and the challenge of a game, which can result from traditional manual difficulty selection mechanisms (e.g. easy, medium, hard), Dynamic Difficulty Adjustment (DDA) has previously been proposed as a means of automatically detecting a player's skill and adjusting the level of challenge the game presents accordingly. This work contributes to the field of DDA by proposing a novel approach to artificially intelligent agents for opponent control. Specifically, we propose four new DDA Artificially Intelligent (AI) agents: Reactive Outcome Sensitive Action Selection (Reactive OSAS), Proactive OSAS, and their "True" variants. These agents provide the player with a level of difficulty tailored to their skill in real time by altering the action selection policy and the heuristic playout evaluation of Monte Carlo Tree Search. The DDA AI agents are tested within the FightingICE engine, which has been used in the past as an environment for AI agent competitions. The results of the experiments against other AI agents and human players show that these novel DDA AI agents can adjust the level of difficulty in real time, by targeting a zero health difference as the outcome of the fighting game. This work also demonstrates the trade-off between targeting the outcome exactly (Reactive OSAS) and introducing proactive behaviour (i.e., the DDA AI agent fights even if the health difference is zero) to increase the agent's believability (Proactive OSAS).
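The "outcome sensitive" idea of targeting a zero health difference can be shown with a toy one-step lookahead. The action names and their health effects below are hypothetical illustrations, not the MCTS-based FightingICE agents from the paper.

```python
# Hedged sketch: the agent scores each candidate action by how close the
# resulting (agent HP, player HP) difference is to zero, rather than by how
# decisively it wins. Action effects are invented for illustration.
def simulate(action, agent_hp, player_hp):
    # Hypothetical effect model: (change to agent HP, change to player HP).
    effects = {"attack": (-2, -6),   # trade blows, player loses more
               "defend": (-1, 0),    # chip damage to the agent only
               "idle": (-3, 0)}      # agent absorbs hits without answering
    d_agent, d_player = effects[action]
    return agent_hp + d_agent, player_hp + d_player

def dda_select(agent_hp, player_hp, actions=("attack", "defend", "idle")):
    def score(action):
        a, p = simulate(action, agent_hp, player_hp)
        return abs(a - p)  # target a zero health difference
    return min(actions, key=score)
```

When the agent is far behind (`dda_select(20, 50)`) it chooses to attack, and when it is far ahead (`dda_select(50, 20)`) it eases off; the paper achieves the same targeting inside the tree search itself rather than with one-step lookahead.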
Tamassia, M., Raffe, W., Sifa, R., Drachen, A., Zambetta, F. & Hitchens, M. 2017, 'Predicting player churn in destiny: A Hidden Markov models approach to predicting player departure in a major online game', IEEE Conference on Computatonal Intelligence and Games, CIG.
© 2016 IEEE. Destiny is, to date, the most expensive digital game ever released, with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and a massively multiplayer online game, and has attracted tens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate the area under the ROC curve of behavioral features, and use Hidden Markov Models to develop a churn prediction model for the game.
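The Hidden Markov Model component can be illustrated with a minimal sketch: score an observation sequence under two competing HMMs and pick the likelier one. The two-state models, the daily-activity observations, and their hand-set parameters below are all hypothetical; the paper fits models to Destiny's behavioural features.

```python
def forward_likelihood(obs, start, trans, emit):
    """Probability of a discrete observation sequence under an HMM,
    computed with the standard forward algorithm."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
                 for j in range(len(start))]
    return sum(alpha)

# Hypothetical two-state models over daily activity observations
# (0 = no session that day, 1 = at least one session).
churner = dict(start=[0.5, 0.5],
               trans=[[0.9, 0.1], [0.5, 0.5]],
               emit=[[0.9, 0.1], [0.4, 0.6]])   # disengaged state dominates
retained = dict(start=[0.5, 0.5],
                trans=[[0.5, 0.5], [0.1, 0.9]],
                emit=[[0.6, 0.4], [0.1, 0.9]])  # engaged state plays most days

def predict_churn(obs):
    # Classify by comparing likelihoods under the two candidate models.
    return forward_likelihood(obs, **churner) > forward_likelihood(obs, **retained)
```

A run of inactive days is scored as more likely under the churner model, while steady activity favours the retained model; a practical system would learn both models from labelled player histories.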
Demediuk, S., Raffe, W.L. & Li, X. 2016, 'An adaptive training framework for increasing player proficiency in games and simulations', Proceedings of the Annual Symposium on Computer-Human Interaction in Play Companion, Annual Symposium on Computer-Human Interaction in Play, ACM, Austin, USA, pp. 125-131.
To improve a player's proficiency at a particular video game, the player must be presented with an appropriate level of challenge. This level of challenge must remain relative to the player as their proficiency changes. Current fixed difficulty settings (e.g. easy, medium, or hard) provide a limited range of difficulty for the player. This work aims to address this problem by developing an adaptive training framework that utilises existing work in Dynamic Difficulty Adjustment to construct an adaptive AI opponent. The framework also provides a way to measure the player's proficiency, by analysing the level of challenge the adaptive AI opponent provides for the player. This work tests part of the proposed adaptive training framework through a pilot study that uses a real-time fighting game. Copyright is held by the owner/author(s).
Tamassia, M., Zambetta, F., Raffe, W.L., Mueller, F.F. & Li, X. 2016, 'Dynamic choice of state abstraction in Q-learning', Frontiers in Artificial Intelligence and Applications, European Conference on Artificial Intelligence, IOS Press, The Hague, The Netherlands, pp. 46-54.
© 2016 The Authors and IOS Press. Q-learning associates states and actions of a Markov Decision Process to expected future reward through online learning. In practice, however, when the state space is large and experience is still limited, the algorithm will not find a match between current state and experience unless some details describing states are ignored. On the other hand, reducing state information affects long term performance because decisions will need to be made on less informative inputs. We propose a variation of Q-learning that gradually enriches state descriptions, after enough experience is accumulated. This is coupled with an ad-hoc exploration strategy that aims at collecting key information that allows the algorithm to enrich state descriptions earlier. Experimental results obtained by applying our algorithm to the arcade game Pac-Man show that our approach significantly outperforms Q-learning during the learning process while not penalizing long-term performance.
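The core mechanism can be sketched as a Q-learner whose state-abstraction function coarsens states early on and switches to the full description once enough experience has accumulated. The step-count switching criterion and the feature layout below are simplifications invented for illustration; the paper's criterion, transfer of learned values, and exploration strategy are more involved.

```python
import random
from collections import defaultdict

class AdaptiveQLearner:
    """Hedged sketch: start with a coarse state abstraction and switch to a
    richer one after a fixed amount of experience. A real implementation
    would also seed the enriched entries from the coarse Q-values."""

    def __init__(self, actions, enrich_after=500, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)       # (abstract state, action) -> value
        self.actions = actions
        self.steps = 0
        self.enrich_after = enrich_after
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def abstract(self, state):
        # state is a tuple of features; drop the detailed tail while coarse.
        return state[:1] if self.steps < self.enrich_after else state

    def act(self, state):
        # Epsilon-greedy action selection over the abstracted state.
        s = self.abstract(state)
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update applied to the abstracted states.
        s, s2 = self.abstract(state), self.abstract(next_state)
        best_next = max(self.q[(s2, a)] for a in self.actions)
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)])
        self.steps += 1
```

Early on, many concrete states collapse onto the same coarse key, so experience generalises quickly; after the switch, decisions are made on the full, more informative state.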
Ivanovo, J., Raffe, W.L., Zambetta, F. & Li, X. 2015, 'Combining Monte Carlo tree search and apprenticeship learning for capture the flag', 2015 IEEE Conference on Computational Intelligence and Games, IEEE Symposium on Computational Intelligence and Games, IEEE, Tainan, Taiwan, pp. 154-161.
© 2015 IEEE. In this paper we introduce a novel approach to agent control in competitive video games which combines Monte Carlo Tree Search (MCTS) and Apprenticeship Learning (AL). More specifically, an opponent model created through AL is used during the expansion phase of the Upper Confidence Bounds for Trees (UCT) variant of MCTS. We show how this approach can be applied to a game of Capture the Flag (CTF), an environment which is both non-deterministic and partially observable. The performance gain of a controller utilizing an opponent model learned via AL when compared to a controller using just UCT is shown both with win/loss ratios and TrueSkill rankings. Additionally, we build on previous findings by providing evidence of a bias towards a particular style of play in the AI Sandbox CTF environment. We believe that the approach highlighted here can be extended to a wider range of games other than just CTF.
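The two ingredients can be sketched separately: the standard UCT selection score, and an expansion step at opponent nodes that is biased by a learned opponent model. The `opponent_model` dictionary here is a hypothetical stand-in for the model learned via apprenticeship learning in the paper.

```python
import math

def ucb1(node_value, node_visits, parent_visits, c=1.4):
    """Standard UCT (UCB1) selection score: exploitation plus an
    exploration bonus that shrinks as a node is visited more."""
    if node_visits == 0:
        return float("inf")  # unvisited children are tried first
    return node_value / node_visits + c * math.sqrt(
        math.log(parent_visits) / node_visits)

def expand_order(untried_actions, opponent_model):
    """During expansion at an opponent node, try the actions the learned
    model considers most likely first, instead of an arbitrary order.
    `opponent_model` maps action -> estimated probability (hypothetical)."""
    return sorted(untried_actions, key=lambda a: -opponent_model.get(a, 0.0))
```

Biasing expansion this way spends the simulation budget on opponent moves that are plausible under observed play, rather than spreading it uniformly over all legal moves.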
Raffe, W.L., Tamassia, M., Zambetta, F., Li, X. & Mueller, F.F. 2015, 'Enhancing theme park experiences through adaptive cyber-physical play', 2015 IEEE Conference on Computational Intelligence and Games (CIG), IEEE Symposium on Computational Intelligence and Games, IEEE, Tainan, Taiwan, pp. 503-510.
© 2015 IEEE. In this vision paper we explore the potential for enhancing theme parks through the introduction of adaptive cyber-physical attractions. That is, some physical attraction that is controlled by a digital system, which takes participants' actions as input and, in turn, alters the participants' experiences. This paper is thus divided into three main parts: 1) a look at the types of attractions that a typical theme park may offer and, from this, the identification of a gap in an agency versus structure spectrum that recent research and industry developments are starting to fill; 2) a discussion of the advantages that cyber-physical play has in filling this gap and a few examples of envisioned future attractions; and 3) how such cyber-physical play can uniquely allow for adaptive attractions, whereby the physical attraction is personalized to suit the capabilities or preferences of the current attraction participants, as well as some foreseeable design considerations and challenges in doing so. Through the combination of these three parts, we hope to promote further research into augmenting theme parks with adaptive cyber-physical play attractions.
Raffe, W.L., Tamassia, M., Zambetta, F., Li, X., Pell, S.J. & Mueller, F.F. 2015, 'Player-computer interaction features for designing digital play experiences across six degrees of water contact', Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play, Annual Symposium on Computer-Human Interaction in Play, ACM, London, United Kingdom, pp. 295-306.
© 2015 ACM. Physical games involving the use of water or that are played in a water environment can be found in many cultures throughout history. However, these experiences have yet to see much benefit from advancements in digital technology. With advances in interactive technology that is waterproof, we see a great potential for digital water play. This paper provides a guide for commencing projects that aim to design and develop digital water-play experiences. A series of interaction features are provided as a result of reflecting on prior work as well as our own practice in designing playful experiences for water environments. These features are examined in terms of the effect that water has on them in relation to a taxonomy of six degrees of water contact, ranging from the player being in the vicinity of water to them being completely underwater. The intent of this paper is to prompt forward thinking in the prototype design phase of digital water-play experiences, allowing designers to learn and gain inspiration from similar past projects before development begins.
Tamassia, M., Zambetta, F., Raffe, W. & Li, X. 2015, 'Learning options for an MDP from demonstrations', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Conference on Artificial Life and Computational Intelligence, Springer, Newcastle, NSW, Australia, pp. 226-242.
© Springer International Publishing Switzerland 2015. The options framework provides a foundation for using hierarchical actions in reinforcement learning. At any point in time, an agent using options can decide to perform a macro-action made up of many primitive actions rather than a single primitive action. Such macro-actions can be hand-crafted or learned. There has been previous work on learning them by exploring the environment. Here we take a different perspective and present an approach to learning options from a set of expert demonstrations. Empirical results are also presented in a setting similar to the one used in other works in this area.
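An option, in the standard formulation the paper builds on, is a triple of an initiation set, an intra-option policy, and a termination condition. The sketch below shows that structure on a toy one-dimensional corridor; the corridor, the `step` function, and the `go_to_5` option are invented for illustration and say nothing about how the paper learns options from demonstrations.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """An option: where it may start, how it acts, and when it stops."""
    initiation: Set[int]               # states where the option may start
    policy: Callable[[int], str]       # state -> primitive action
    terminates: Callable[[int], bool]  # state -> should the option stop?

def run_option(option, state, step):
    """Execute an option until it terminates; `step` applies one
    primitive action and returns the next state."""
    assert state in option.initiation
    while not option.terminates(state):
        state = step(state, option.policy(state))
    return state

# Toy 1-D corridor with states 0..5; moving "right" increments the state.
def step(s, a):
    return s + 1 if a == "right" else s - 1

# Hand-crafted macro-action: "from anywhere left of 5, walk to state 5".
go_to_5 = Option(initiation=set(range(5)),
                 policy=lambda s: "right",
                 terminates=lambda s: s == 5)
```

Learning options, as in the paper, amounts to inferring these three components (most importantly the policy and termination condition) from demonstration trajectories instead of writing them by hand.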
Raffe, W.L., Zambetta, F. & Li, X. 2013, 'Neuroevolution of content layout in the PCG: Angry bots video game', 2013 IEEE Congress on Evolutionary Computation, CEC 2013, IEEE Congress on Evolutionary Computation, IEEE, Cancun, Mexico, pp. 673-680.
This paper demonstrates an approach to arranging content within maps of an action-shooter game. Content here refers to any virtual entity that a player will interact with during gameplay, including enemies and pick-ups. The content layout for a map is indirectly represented by a Compositional Pattern-Producing Network (CPPN), which is evolved through the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. This representation is utilized within a complete procedural map generation system in the game PCG: Angry Bots. In this game, after a player has experienced a map, a recommender system is used to capture their feedback and construct a player model to evaluate future generations of CPPNs. The result is a content layout scheme that is optimized to the preferences and skill of an individual player. We provide a series of case studies that demonstrate the system as it is being used by various types of players. © 2013 IEEE.
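The idea of an indirect, queryable representation can be shown with a minimal CPPN-style function: content density at a map coordinate is produced by composing activation functions, and the whole layout falls out of sampling that function over a grid. The topology and weights below are hand-picked for illustration; in the paper NEAT evolves them.

```python
import math

# Minimal fixed CPPN-style network: density at (x, y) comes from composing a
# Gaussian node and periodic nodes into a weighted output. Hand-picked here;
# NEAT would evolve the topology and weights.
def content_density(x, y):
    radial = math.exp(-(x * x + y * y))              # Gaussian node
    ripple = math.sin(3.0 * x) * math.cos(3.0 * y)   # periodic nodes
    out = 0.6 * radial + 0.4 * ripple                # weighted output node
    return max(0.0, min(1.0, 0.5 + 0.5 * out))       # squash to [0, 1]

# Sampling the network over a grid yields a spatial density field for enemies
# or pick-ups, rather than a per-cell list of explicit placements.
layout = [[content_density(x / 5 - 1, y / 5 - 1) for x in range(11)]
          for y in range(11)]
```

Because the genome encodes a function rather than the layout itself, small genetic changes produce coherent, pattern-level changes across the whole map.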
Raffe, W.L., Zambetta, F. & Li, X. 2012, 'A survey of procedural terrain generation techniques using evolutionary algorithms', 2012 IEEE Congress on Evolutionary Computation, CEC 2012.
This paper provides a review of existing approaches to using evolutionary algorithms (EA) during procedural terrain generation (PTG) processes in video games. A reliable PTG algorithm would allow game maps to be created partially or completely autonomously, reducing the development cost of a game and providing players with more content. Specifically, the use of EA raises possibilities of more control over the terrain generation process, as well as the ability to tailor maps for individual users. In this paper we outline the prominent algorithms that use EA in terrain generation, describing their individual advantages and disadvantages. This is followed by a comparison of the core features of these approaches and an analysis of their appropriateness for generating game terrain. This survey concludes with open challenges for future research. © 2012 IEEE.
Raffe, W.L., Zambetta, F. & Li, X. 2011, 'Evolving patch-based terrains for use in video games', GECCO '11 Proceedings of the 13th annual conference on Genetic and evolutionary computation, The Genetic and Evolutionary Computation Conference, ACM, Dublin, Ireland, pp. 363-370.
Procedurally generating content for video games is gaining interest as an approach to mitigate rising development costs and meet users' expectations for a broader range of experiences. This paper explores the use of evolutionary algorithms to aid in the content generation process, especially the creation of three-dimensional terrain. We outline a prototype for the generation of in-game terrain by compiling smaller height-map patches that have been extracted from sample maps. Evolutionary algorithms are applied to this generation process by using crossover and mutation to evolve the layout of the patches. This paper demonstrates the benefits of an interactive two-level parent selection mechanism as well as how to seamlessly stitch patches of terrain together. This unique patch-based terrain model enhances control over the evolution process, allowing for terrain to be refined more intuitively to meet the user's expectations. Copyright 2011 ACM.
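The patch-based representation can be sketched as a grid of indices into a library of height-map patches, with crossover swapping grid regions between parents and mutation substituting individual patches. This is a hedged illustration of the representation only; the paper's interactive two-level parent selection and seam stitching are omitted.

```python
import random

# A terrain individual is a 2-D grid of patch indices into a patch library.
def crossover(parent_a, parent_b):
    # Single-point row crossover: take the top rows from one parent and the
    # bottom rows from the other.
    rows = len(parent_a)
    cut = random.randrange(1, rows)
    return [row[:] for row in parent_a[:cut]] + [row[:] for row in parent_b[cut:]]

def mutate(layout, n_patches, rate=0.1):
    # Replace each cell's patch index with a random one at the given rate.
    return [[random.randrange(n_patches) if random.random() < rate else cell
             for cell in row] for row in layout]

random.seed(1)
a = [[0] * 4 for _ in range(4)]   # parent built entirely from patch 0
b = [[1] * 4 for _ in range(4)]   # parent built entirely from patch 1
child = mutate(crossover(a, b), n_patches=8, rate=0.0)
```

Because the genome is a layout of reusable patches rather than raw height values, crossover recombines recognisable terrain features instead of blending height-maps into noise.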
Raffe, W., Hu, J., Zambetta, F. & Xi, K. 2010, 'A dual-layer clustering scheme for real-time identification of plagiarized massive multiplayer games (MMG) assets', Proceedings of the 2010 5th IEEE Conference on Industrial Electronics and Applications, ICIEA 2010, pp. 307-312.
Theft of virtual assets in massive multiplayer games (MMG) is a significant issue. Conventional image-based pattern and object recognition techniques are becoming more effective at identifying copied objects, but few results are available for identifying plagiarized objects that have been modified from the originals, especially in real-time environments where a large sample of objects is present. In this paper we present a dual-layer clustering algorithm for efficient identification of plagiarized MMG objects in an environment where real-time conditions, modified objects, and large samples of objects are present. The proposed scheme uses a concept of effective pixel banding for the first-pass clustering and then a Hausdorff Distance mechanism for further clustering. The experimental results demonstrate that our method drastically reduces execution time while achieving a good identification rate, with a genuine acceptance rate of 88%. © 2010 IEEE.
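The dual-layer idea can be sketched as a cheap first pass that prunes the object library, followed by the more expensive Hausdorff distance computed only for the survivors. The `band_of` signature function and the threshold below are hypothetical stand-ins for the paper's pixel-banding scheme and tuned parameters.

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    def directed(ps, qs):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in qs) for px, py in ps)
    return max(directed(points_a, points_b), directed(points_b, points_a))

def two_pass_match(query, library, band_of, threshold):
    """First pass: keep only objects whose cheap signature matches the
    query's (standing in for pixel banding). Second pass: Hausdorff
    distance on the survivors only."""
    candidates = [obj for obj in library if band_of(obj) == band_of(query)]
    return [obj for obj in candidates
            if hausdorff(query["points"], obj["points"]) <= threshold]

# Tiny illustrative library: each object has a band signature and a shape.
library = [
    {"band": 1, "points": [(0, 0), (1, 1.1)]},  # slightly modified copy
    {"band": 2, "points": [(0, 0), (1, 1)]},    # pruned by the first pass
    {"band": 1, "points": [(5, 5), (6, 6)]},    # same band, different shape
]
query = {"band": 1, "points": [(0, 0), (1, 1)]}
matches = two_pass_match(query, library, band_of=lambda o: o["band"], threshold=0.5)
```

The first pass keeps the comparison count low under real-time constraints, while the Hausdorff second pass tolerates small modifications to the original object, which exact matching would miss.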