Lasitha received his B.Sc.Eng. (Hons.) degree in electrical engineering in 2004, his M.Phil. from the University of Moratuwa in 2011, and his Ph.D. in robotics from the University of Technology Sydney in 2016.
He has over seven years of research and academic experience and has contributed to several successful industry and research projects.
He has over six years of teaching experience. Beginning in the autumn semester of 2012, Lasitha worked for seven semesters as a casual academic tutor/lecturer in the course Mechatronics 1 (48622) at UTS. Before joining UTS, he worked as a lecturer in mechatronics engineering at Uva Wellassa University (UWU), Sri Lanka, for more than three years, where he pioneered the development of the curriculum for the mechatronics degree program.
Lasitha has years of industry experience and has worked on several university-industry collaboration projects. Currently, he is a senior mechatronic engineer at the University of Technology Sydney (UTS), where he leads an engineering team developing a sewer inspection robot. This industry project is funded by Sydney Water, with research funding worth over AUD 1.3 million. Previously, he worked as a mechatronics engineer at Swarm Farm Robotics, a robotics company based in Queensland that develops swarms of lightweight agricultural robots for farm operations.
Machine Learning, Robotics
Piyathilaka, L & Kodagoda, S 2015, 'Learning Hidden Human Context in 3D Office Scenes by Mapping Affordances Through Virtual Humans', Unmanned Systems, vol. 03, no. 04, pp. 299-310.
The ability to learn human context in an environment could be one of the most desired fundamental abilities a robot should have when sharing a workspace with human co-workers. Arguably, a robot with appropriate human-context awareness could lead to better human–robot interaction. In this paper, we address the problem of learning human context in an office environment using only 3D point cloud data. Our approach is based on the concept of the affordance-map, which maps latent human actions in a given environment by looking at its geometric features. This enables us to learn the human context of the environment without observing real human behaviours, which are themselves nontrivial to detect. Once learned, the affordance-map allows us to assign an affordance cost value to each grid location of the map. These cost maps are later used to develop an active object search strategy and a context-aware global path planning strategy.
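As a rough illustration of the cost-map idea (a sketch under assumptions, not the paper's implementation), the snippet below turns hypothetical per-cell affordance fit scores into a normalised cost grid; the grid size, scores and function name are invented for the example.

```python
# A minimal sketch (assumed, not the authors' code) of the affordance-map
# representation: each grid cell stores a cost derived from how well a
# virtual human model "fits" that location for some action. The fit scores
# below are hypothetical stand-ins for the paper's geometric tests.
import numpy as np

def affordance_cost_map(fit_scores):
    """Normalise per-cell affordance fit scores in [0, 1] into costs.

    High fit (a human is likely to act there) means high cost for a
    robot that should avoid disturbing people.
    """
    scores = np.clip(fit_scores, 0.0, 1.0)
    peak = scores.max()
    return scores / peak if peak > 0 else scores

# Example: a 4x4 office patch where two cells afford "sitting".
fits = np.zeros((4, 4))
fits[1, 2] = 0.9   # chair in front of a desk
fits[3, 0] = 0.4   # bench near the door
print(affordance_cost_map(fits))
```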
Piyathilaka, JM & Kodagoda, S 2015, 'Human Activity Recognition for Domestic Robots', in Mejias, L, Corke, P & Roberts, J (eds), Field and Service Robotics: Results of the 9th International Conference, Springer International Publishing, pp. 395-408.
Piyathilaka, L & Kodagoda, S 2015, 'Affordance-map: Mapping human context in 3D scenes using cost-sensitive SVM and virtual human models', 2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015, IEEE International Conference on Robotics and Biomimetics, IEEE, Zhuhai, pp. 2035-2040.
Robots are often required to operate in environments where humans are not present, yet still require human context information for better human–robot interaction. Even when humans are present, detecting them in cluttered environments can be challenging. As a solution to this problem, this paper presents the concept of the affordance-map, which learns human context by looking at geometric features of the environment. Instead of observing real humans to learn human context, it uses virtual human models and their relationships with the environment to map hidden human affordances in 3D scenes. The affordance-map learning problem is formulated as a multi-label classification problem that can be learned using a cost-sensitive SVM. Experiments carried out on a real 3D scene dataset recorded promising results and demonstrated the applicability of the affordance-map for mapping human context.
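One simple way to realise a cost-sensitive, multi-label SVM of the kind described above, assuming scikit-learn, is a one-vs-rest ensemble with class weighting. The features and labels below are synthetic stand-ins for the paper's geometric features and per-cell affordance labels.

```python
# A minimal sketch, not the paper's code: multi-label classification via
# one-vs-rest SVMs, made cost-sensitive with class_weight='balanced' so
# mistakes on the rare positive (affordance-present) class cost more.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # geometric features per cell
Y = (rng.random((200, 3)) > 0.8).astype(int)   # sparse labels: sit/stand/work

clf = OneVsRestClassifier(SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X, Y)
print(clf.predict(X[:5]))   # per-cell multi-label affordance predictions
```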
Piyathilaka, JM & Kodagoda, S 2014, 'Affordance-Map: A Map for Context-Aware Path Planning', Proceedings of the Australasian Conference on Robotics and Automation 2014, Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association Inc, Melbourne, pp. 1-8.
Context-awareness could be one of the most desired fundamental abilities a robot should have when sharing a workspace with human co-workers. Arguably, a robot with appropriate context-awareness could lead to better human-robot interaction. In this paper, we address the problem of combining context-awareness with robotic path planning. Our approach is based on the affordance-map, which maps latent human actions in a given environment by looking at its geometric features. This enables us to learn human context in a given environment without observing real human behaviours, which are themselves nontrivial to detect. Once learned, the affordance-map allows us to assign an affordance cost value to each grid location of the map. These cost maps are later used to develop a context-aware global path planning strategy using the well-known A* algorithm. The proposed method was tested in a real office environment and showed that our algorithm is capable of moving a robot along a path that minimises distractions to human co-workers.
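To make the planning step concrete, here is a minimal grid A* sketch in which each move pays the usual step cost plus the affordance cost of the cell it enters, so planned paths bend away from likely human activity. This illustrates the general idea, not the paper's implementation; the map and costs are invented.

```python
# A* over a 2D grid whose per-cell values are affordance costs (assumed
# example). Step cost = 1.0 movement + affordance cost of the entered cell.
import heapq
import numpy as np

def a_star(cost_map, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0.0, start, None)]
    came_from, g = {}, {start: 0.0}
    while open_set:
        _, g_cur, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                       # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:                    # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < cost_map.shape[0] and 0 <= nxt[1] < cost_map.shape[1]:
                ng = g_cur + 1.0 + cost_map[nxt]   # movement + affordance cost
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))

cost = np.zeros((5, 5))
cost[2, 1:4] = 5.0                         # a "busy" desk row the path should avoid
print(a_star(cost, (0, 0), (4, 4)))
```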
Piyathilaka, JM & Kodagoda, S 2014, 'Active Visual Object Search Using Affordance-Map in Real World: A Human-Centric Approach', Proceedings of the 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), International Conference on Control, Automation, Robotics and Vision, IEEE, Singapore, pp. 1427-1432.
Human context is the most natural explanation of why objects are placed and arranged in a particular order in an indoor environment. Usually, humans arrange objects to support their intended activities in a given environment. However, most common approaches to robotic object search involve modelling object-object relationships. In this paper, we hypothesize that such relationships are centred around humans, and we bring human context to object search by modelling human-object relationships through the affordance-map, which identifies locations in a 3D map that support a particular affordance using virtual human models. Therefore, our approach does not require observing real humans in the scene. The affordance-map and the object-human-robot relationship are then used to infer the object search strategy. We tested our algorithm using a mobile robot that actively searched for computer monitors in an office environment, with promising results.
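A toy sketch of one possible search strategy in this spirit: candidate locations are visited greedily by the ratio of an affordance-derived detection probability to travel cost. The location names, probabilities and costs below are hypothetical, not values from the paper.

```python
# Greedy active search (illustrative assumption): visit the location with
# the best probability-to-cost ratio first, removing each once searched.
candidates = [
    {"name": "desk_A", "p": 0.92, "travel_cost": 6.0},
    {"name": "desk_B", "p": 0.78, "travel_cost": 2.0},
    {"name": "shelf",  "p": 0.15, "travel_cost": 1.0},
]

while candidates:
    best = max(candidates, key=lambda c: c["p"] / (1.0 + c["travel_cost"]))
    print(f"search {best['name']} (p={best['p']:.2f}, cost={best['travel_cost']:.1f})")
    candidates.remove(best)
```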
Piyathilaka, JM & Kodagoda, S 2013, 'Gaussian Mixture Based HMM for Human Daily Activity Recognition Using 3D Skeleton Features', IEEE Conference on Industrial Electronics and Applications, IEEE, Melbourne, VIC, Australia, pp. 567-572.
The ability to recognize human activities will enhance the capabilities of a robot that interacts with humans. However, automatic detection of human activities can be challenging due to the individual nature of the activities. In this paper, we present a human activity detection model that uses only 3D skeleton features generated from an RGB-D sensor (Microsoft Kinect). To infer human activities, we implemented a Gaussian Mixture Model (GMM) based Hidden Markov Model (HMM). The Gaussian mixture outputs of the HMM effectively captured the multimodal nature of the 3D positions of each skeleton joint. We tested our model on a publicly available dataset consisting of twelve different daily activities performed by four different people. The proposed model recorded a recognition recall accuracy of 84% with previously seen people and 78% with previously unseen people.
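A minimal sketch of this classification scheme, assuming the hmmlearn library: one GMM-HMM is trained per activity on sequences of skeleton-joint features, and a test sequence is labelled by the model with the highest log-likelihood. The synthetic data below stands in for real Kinect skeletons; dimensions and hyperparameters are illustrative.

```python
# One GMM-HMM per activity; classify by maximum log-likelihood (sketch).
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(1)

def train_activity_model(sequences):
    X = np.vstack(sequences)                 # stack all frames of all sequences
    lengths = [len(s) for s in sequences]    # per-sequence frame counts
    model = GMMHMM(n_components=4, n_mix=2, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

# Fake training data: 45-dim features (15 joints x 3D), two activities.
walk = [rng.normal(0.0, 1.0, size=(60, 45)) for _ in range(5)]
sit  = [rng.normal(3.0, 1.0, size=(60, 45)) for _ in range(5)]
models = {"walking": train_activity_model(walk),
          "sitting": train_activity_model(sit)}

test = rng.normal(3.0, 1.0, size=(60, 45))   # should look like "sitting"
print(max(models, key=lambda a: models[a].score(test)))
```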
Piyathilaka, L & Munasinghe, R 2011, 'Vision-only outdoor localization of two-wheel tractor for autonomous operation in agricultural fields', 2011 6th International Conference on Industrial and Information Systems, ICIIS 2011 - Conference Proceedings, pp. 358-363.
This paper addresses the problem of outdoor autonomous robot localization for agricultural operations. A vision-based outdoor localization system is developed entirely from off-the-shelf components that can accurately guide a field robot in small agricultural fields. Visual odometry using a downward-facing camera is proposed as a high-resolution relative localization system. A stereo-vision-based range measurement system is also developed and field tested as an absolute localization system that can bound the incremental error caused by the visual odometry system. An extended Kalman filter with measurement gating is implemented using both visual odometry and stereo range measurement data. Field tests demonstrated that the proposed system can localize a two-wheel tractor at very low cost with acceptable accuracy in small agricultural fields. © 2011 IEEE.
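A hedged sketch of the measurement-gating step mentioned above: a measurement is fused only if its innovation passes a chi-square (Mahalanobis) gate, which rejects outliers before the Kalman update. The state, matrices and threshold below are illustrative, not the paper's actual models.

```python
# Gated Kalman/EKF update (illustrative): reject measurements whose
# squared Mahalanobis distance exceeds a chi-square gate.
import numpy as np

def gated_update(x, P, z, H, R, gate=9.21):    # 9.21 ~ chi-square 99%, 2 DOF
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    d2 = float(y.T @ np.linalg.solve(S, y))    # Mahalanobis distance squared
    if d2 > gate:
        return x, P, False                     # outlier: skip the update
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P, True

x = np.array([0.0, 0.0]); P = np.eye(2) * 0.5  # toy [x, y] position estimate
H = np.eye(2); R = np.eye(2) * 0.1
print(gated_update(x, P, np.array([0.3, -0.2]), H, R))  # accepted
print(gated_update(x, P, np.array([9.0, 9.0]), H, R))   # gated out
```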
Piyathilaka, L & Munasinghe, R 2010, 'An experimental study on using visual odometry for short-run self localization of field robot', Proceedings of the 2010 5th International Conference on Information and Automation for Sustainability, ICIAfS 2010, pp. 150-155.
One of the most challenging problems for field robots is self-localization, which involves incrementally updating position while in motion. Though wheel-based odometry is cheap to implement, its accuracy degrades when the wheels slip. In this paper, the performance of a low-cost visual odometry approach is evaluated as a feasibility test for field robot localization. We used a downward-facing camera and tested localization error with respect to various parameters such as frame size and frame rate. An FFT-based image registration technique was used to determine the precise translation distances and heading between consecutive frames captured from the ground surface. Basic navigation experiments, including a loop-closing test and error propagation, were conducted, and interesting numerical results are reported. © 2010 IEEE.
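The core of FFT-based registration is phase correlation, sketched below with only NumPy: the normalised cross-power spectrum of two frames has an inverse transform that peaks at their relative shift. The test image is synthetic; this is a generic illustration of the technique, not the authors' code.

```python
# Phase correlation: recover the integer translation between two frames.
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) shift that maps image b onto image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                  # normalised cross-power spectrum
    corr = np.abs(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))

rng = np.random.default_rng(2)
img = rng.random((64, 64))                  # stand-in for a ground-surface frame
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))      # -> (5, -3)
```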
Piyathilaka, L & Munasinghe, R 2010, 'Multi-camera visual odometry for skid steered field robot', Proceedings of the 2010 5th International Conference on Information and Automation for Sustainability, ICIAfS 2010, pp. 189-194.
Position estimation using wheel odometry tends to give rather poor performance for an outdoor four-wheel skid-steered mobile robot. Therefore, autonomous control of these vehicles is extremely challenging in outdoor environments. This paper describes an outdoor localization system based on visual odometry for a skid-steered vehicle using a forward-facing camera and a downward-facing camera. Optical flow field data are statistically analyzed to correctly estimate the position of the robot. Kalman filtering is used to fuse data from the two cameras for optimum performance. Real-time Instantaneous Center of Rotation (ICR) detection using optical flow field data is also proposed to calculate the heading angle. Two consumer-grade cameras were used, and the algorithm was tested using open-source image processing libraries. The proposed system yielded acceptable positioning accuracy on short runs in typical outdoor terrains. © 2010 IEEE.
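A hedged sketch of the two-camera fusion step: two independent position estimates are combined with a standard inverse-covariance (Kalman-style) weighting, so the less noisy camera dominates. The estimates and covariances below are illustrative; the paper's full filter also handles heading via the ICR.

```python
# Optimal linear fusion of two independent estimates of the same state.
import numpy as np

def fuse(x1, P1, x2, P2):
    K = P1 @ np.linalg.inv(P1 + P2)          # weight on the second estimate
    return x1 + K @ (x2 - x1), (np.eye(len(x1)) - K) @ P1

forward  = (np.array([1.02, 0.48]), np.eye(2) * 0.20)  # forward-facing camera
downward = (np.array([0.98, 0.52]), np.eye(2) * 0.05)  # downward: less noisy
x, P = fuse(*forward, *downward)
print(x, np.diag(P))   # fused estimate leans toward the downward camera
```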
Piyathilaka, L & Munasinghe, R 2010, 'Modeling and simulation of power tiller for autonomous operation in agricultural fields', 2010 The 2nd International Conference on Computer and Automation Engineering, ICCAE 2010, pp. 743-748.
This paper details the development of a model for a walking tractor, also known as a power tiller. A complete model of the power tiller is developed for autonomous operation, considering kinematics, dynamics and engine dynamics. Inputs to the simulation model are the left and right clutch control signals. Data from an Inertial Measurement Unit (IMU) attached to the power tiller enable validation of the model using analogue matching and integral least squares. The validation shows that the model is a suitable representation of the power tiller. The developed model is intended for use in developing a controller for remote and autonomous operation in agricultural fields. © 2010 IEEE.
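A minimal sketch of the clutch-steered kinematics only (an assumption for illustration, far simpler than the paper's validated dynamic model): disengaging the left or right clutch stops the corresponding wheel, turning the tiller about that side. All speeds and dimensions are invented.

```python
# Toy clutch-steered kinematic update; left/right are clutch signals in
# {0, 1} (1 = engaged). Not the authors' model, just the steering idea.
import numpy as np

def step(x, y, th, left, right, v=0.8, track=0.6, dt=0.05):
    vl, vr = v * left, v * right                # wheel speeds from clutch state
    vc, w = (vl + vr) / 2.0, (vr - vl) / track  # body speed and yaw rate
    return (x + vc * np.cos(th) * dt,
            y + vc * np.sin(th) * dt,
            th + w * dt)

x = y = th = 0.0
for t in range(100):
    left, right = (1, 1) if t < 60 else (0, 1)  # drive straight, then turn left
    x, y, th = step(x, y, th, left, right)
print(f"pose: x={x:.2f} m, y={y:.2f} m, heading={np.degrees(th):.1f} deg")
```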
Nanayakkara, T, Piyathilaka, JMLC, Siriwardana, AP, Subasinghe, SAAM & Jamshidi, M 2007, 'Orchestration of advanced motor skills in a group of humans through an elitist visual feedback mechanism', 2007 IEEE International Conference on System of Systems Engineering, SOSE.
A group of humans with diverse body dynamics and training backgrounds working on machines with different dynamics can be considered a system of live systems. This paper presents a method that can be adopted to automate the evolution of an elite skill in a factory of workers operating a given type of machine to produce a given product, through successive induction of an evolved elite skill in other workers. It also proposes a simple model that can be used to explain complex phenomena that cannot be explained by conventional learning schemes. A wireless data exchange system was adopted to transmit the machine speed profiles to a central data server. A technical expert selected the current elite speed profile from the database of profiles registered by each worker on the server and broadcast the selected profile to the data terminals of all other workers. Every worker then attempted to match the given elite profile. The crossover between the elite target profile and the natural skills of each worker successively generated machine speed profiles that could beat the given elite profile. This process was repeated until the average efficiency of the whole group converged to an optimum. The proposed method was implemented at a leading garment exporter in Sri Lanka. The results showed that the overall efficiency of the factory improved from 45% to 74%. © 2007 IEEE.
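The feedback loop above can be caricatured in a few lines: each round, the best-performing speed profile is selected as the elite target, every worker's next attempt drifts toward it, and the process repeats until efficiency plateaus. Everything below (profiles, the toy efficiency score, the blending weights) is a hypothetical illustration, not data or logic from the study.

```python
# Toy simulation of the elitist-feedback rounds described in the abstract.
import numpy as np

rng = np.random.default_rng(3)
profiles = rng.random((8, 50)) * 0.5 + 0.4       # 8 workers' speed profiles
efficiency = lambda p: p.mean() - 0.5 * p.std()  # invented efficiency score

for round_ in range(10):
    elite = profiles[np.argmax([efficiency(p) for p in profiles])]
    # Each worker blends their natural profile with the broadcast elite one,
    # plus small attempt-to-attempt variation.
    profiles = 0.7 * profiles + 0.3 * elite + rng.normal(0, 0.01, profiles.shape)

print(f"mean efficiency: {np.mean([efficiency(p) for p in profiles]):.3f}")
```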