Alen Alempijevic received his B.Eng. in Computer Systems Engineering with First Class Honours and his PhD from the University of Technology Sydney (Australia) in 2004 and 2009, respectively.
He joined the UTS node of the ARC Centre of Excellence in Autonomous Systems as a Research Fellow in 2009 and is currently a Senior Lecturer with the Centre for Autonomous Systems, School of Electrical, Mechanical and Mechatronic Systems at UTS.
Alen was part of the Australian research team that collaborated with the University of California, Berkeley in 2007 on enabling vehicle autonomy for the Department of Defense-sponsored DARPA Urban Challenge.
Alen is currently a Chief Investigator on several industry-driven projects investigating challenges in perception and the application of machine learning techniques to infrastructure maintenance, underground coal mining, human-robot interaction and the estimation of biological systems.
Alen is a committee member at various robotics conferences (e.g. the Australasian Conference on Robotics and Automation; Robotics: Science and Systems (RSS), 2014) and a conference symposium organizer (e.g. the International Conference on Intelligent Sensors, Sensor Networks and Information Processing).
Alen is an active reviewer for leading robotics journals, including the Journal of Field Robotics (JFR), the International Journal of Robotics Research (IJRR), and IEEE Transactions on Robotics.
Can supervise: YES
Robotic systems, autonomous vehicles, sensor registration and data fusion, perception systems, computer vision, machine learning, software integration.
Rahman, S, Quin, P, Walsh, T, Vidal-Calleja, T, McPhee, MJ, Toohey, E & Alempijevic, A 2018, 'Preliminary estimation of fat depth in the lamb short loin using a hyperspectral camera', Animal Production Science, vol. 58, no. 8, pp. 1488-1496.
© 2018 CSIRO. The objectives of the present study were to describe the approach used for classifying surface tissue and for estimating fat depth in lamb short loins, and to validate the approach. Fat versus non-fat pixels were classified and then used to estimate the fat depth for each pixel in the hyperspectral image. Estimated reflectance, instead of image intensity or radiance, was used as the input feature for classification. The relationship between reflectance and the fat/non-fat classification label was learnt using support vector machines. Gaussian processes were used to learn a regression for fat depth as a function of reflectance. Data to train and test the machine learning algorithms were collected by scanning 16 short loins. The near-infrared hyperspectral camera captured lines of data from the side of the short loin (i.e. with the subcutaneous fat facing the camera). An advanced single-lens reflex camera took photos of the same cuts from above, such that a ground truth of fat depth could be semi-automatically extracted and associated with the hyperspectral data. A subset of the data was used to train the machine learning model, and another to test it. Classification of pixels as either fat or non-fat achieved 96% accuracy. Fat depths of up to 12 mm were estimated, with an R² of 0.59, a mean absolute bias of 1.72 mm and a root mean square error of 2.34 mm. The techniques developed and validated in the present study will be used to estimate fat coverage to predict total fat and, subsequently, lean meat yield in the carcass.
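The two-stage pipeline described in this abstract (an SVM separating fat from non-fat pixels by reflectance, then a Gaussian process regressing fat depth) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the band count, kernels, and the synthetic reflectance model are invented assumptions.

```python
# Hypothetical sketch: SVM fat/non-fat classification + GP depth regression.
# Synthetic reflectance data stands in for real hyperspectral scans.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n_bands = 25  # assumed number of hyperspectral bands

def make_pixels(n, fat):
    """Synthetic spectra: fat pixels reflect more in the later bands (toy model)."""
    base = 0.3 + 0.1 * rng.standard_normal((n, n_bands))
    if fat:
        base[:, n_bands // 2:] += 0.4
    return base

# Stage 1: classify pixels as fat (1) or non-fat (0) from reflectance.
X = np.vstack([make_pixels(200, True), make_pixels(200, False)])
y = np.array([1] * 200 + [0] * 200)
clf = SVC(kernel="rbf").fit(X, y)

# Stage 2: GP regression of fat depth (mm) on the fat pixels' spectra;
# here depth is loosely tied to mean reflectance in the later bands.
X_fat = make_pixels(100, True)
depth = 12.0 * (X_fat[:, n_bands // 2:].mean(axis=1) - 0.3) + rng.normal(0, 0.2, 100)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1).fit(X_fat, depth)

# Held-out evaluation of both stages.
acc = clf.score(np.vstack([make_pixels(50, True), make_pixels(50, False)]),
                [1] * 50 + [0] * 50)
pred, std = gp.predict(make_pixels(5, True), return_std=True)
```

The GP's predictive standard deviation (`std`) gives a per-pixel uncertainty on the depth estimate, which point estimators such as a plain regression forest would not.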
Ulapane, N, Alempijevic, A, Miro, JV & Vidal-Calleja, T 2018, 'Non-destructive evaluation of ferromagnetic material thickness using Pulsed Eddy Current sensor detector coil voltage decay rate', NDT & E International, vol. 100, pp. 108-114.
Ulapane, N, Alempijevic, A, Calleja, TV & Miro, JV 2017, 'Pulsed Eddy Current Sensing for Critical Pipe Condition Assessment', Sensors, vol. 17, no. 10.
Skinner, B, McPhee, MJ, Walmsley, BJ, Littler, B, Siddell, J, M.Cafe, L, Wilkins, JF, Oddy, VH & Alempijevic, A 2017, 'Live animal assessments of rump fat and muscle score in Angus cows and steers using 3-dimensional imaging', Journal of Animal Science, vol. 95, no. 4, pp. 1847-1857.
The objective of this study was to develop a proof of concept for using off-the-shelf Red Green Blue-Depth (RGB-D) Microsoft Kinect cameras to objectively assess P8 rump fat (P8 fat; mm) and muscle score (MS) traits in Angus cows and steers. Data from low and high muscled cattle (156 cows and 79 steers) were collected at multiple locations and time points. The following steps were required for the 3-dimensional (3D) image data and subsequent machine learning techniques to learn the traits: 1) reduce the high dimensionality of the point cloud data by extracting features from the input signals to produce a compact and representative feature vector, 2) perform global optimization of the signatures using machine learning algorithms and a parallel genetic algorithm, and 3) train a sensor model using regression-supervised learning techniques on the ultrasound P8 fat and classification learning techniques on the assessed MS for each animal in the data set. The correlation between visually measured hip height (cm) and that assessed from 3D RGB-D camera data was 0.75 for cows and 0.90 for steers. The supervised machine learning and global optimization approach correctly classified MS (mean [SD]) for 80% [4.7%] and 83% [6.6%] of cows and steers, respectively. Kappa tests of MS were 0.74 and 0.79 in cows and steers, respectively, indicating substantial agreement between visual assessment and the learning approaches of RGB-D camera images. A stratified 10-fold cross-validation for P8 fat did not find any differences in the mean bias ( = 0.62 and = 0.42 for cows and steers, respectively). The root mean square error of P8 fat was 1.54 and 1.00 mm for cows and steers, respectively. Additional data is required to strengthen the capacity of machine learning to estimate measured P8 fat and assessed MS. Data sets for and continental cattle are also required to broaden the use of 3D cameras to assess cattle. The results demonstrate the importance of capturing curv...
The industrial revolution undoubtedly defined the role of machines in our society, and it directly shaped the paradigm for human-machine interaction - a paradigm which was inherited by the field of Human Robot Interaction (HRI) as the machines became robots. This paper argues that, for a foreseeable set of interactions, reshaping this paradigm would result in more effective and more frequently successful interactions. This paper presents our Robot Centric paradigm for HRI. Evidence in the form of summaries of relevant literature and our past efforts in developing social-robotics enabling technology is presented to support our paradigm. A definition and a set of recommendations for designing the key enabling component of our paradigm, sociocontextual cues, are presented. Finally, empirical evidence generated through a number of experiments and field studies (N = 456 and N = 320) demonstrates that our paradigm is both feasible to incorporate into HRI and, moreover, yields significant contributions to the success of a set of HRIs.
Upcroft, B, Makarenko, A, Moser, M, Alempijevic, A, Donikian, A, Uther, W & Fitch, R 2007, 'Empirical Evaluation of an Autonomous Vehicle in an Urban Environment', Journal of Aerospace Computing, Information, and Communication, vol. 4, no. 12, pp. 1086-1107.
Operation in urban environments creates unique challenges for research in autonomous ground vehicles. In this paper, we describe a novel autonomous platform developed by the Sydney-Berkeley Driving Team for entry into the 2007 DARPA Urban Challenge competition. We report empirical results analyzing the performance of the vehicle while navigating a 560-meter test loop multiple times in an actual urban setting with severe GPS outage. We show that our system is robust against failure of global position estimates and can reliably traverse standard two-lane road networks using vision for localization.
Fitch, R, Alempijevic, A & Peynot, T 2016, 'Global reconfiguration of a team of networked mobile robots among obstacles', Springer, Berlin, Germany, pp. 639-656.
This paper presents a full system demonstration of dynamic sensor-based reconfiguration of a networked robot team. Robots sense obstacles in their environment locally and dynamically adapt their global geometric configuration to conform to an abstract goal shape. We present a novel two-layer planning and control algorithm for team reconfiguration that is decentralized and assumes local (neighbour-to-neighbour) communication only. The approach is designed to be resource-efficient and we show experiments using a team of nine mobile robots with modest computation, communication, and sensing. The robots use acoustic beacons for localisation and can sense obstacles in their local neighbourhood using IR sensors. Our results demonstrate globally-specified reconfiguration from local information in a real robot network, and highlight limitations of standard mesh networks in implementing decentralised algorithms.
Peynot, T, Fitch, R, McAllister, R & Alempijevic, A 2013, 'Resilient Navigation through Probabilistic Modality Reconfiguration', Springer Verlag, Berlin, Germany, pp. 75-88.
This paper proposes an approach to achieve resilient navigation for indoor mobile robots. Resilient navigation seeks to mitigate the impact of control, localisation, or map errors on the safety of the platform while preserving the robot's ability to achieve its goal. We show that resilience to unpredictable errors can be achieved by combining the benefits of independent and complementary algorithmic approaches to navigation, or modalities, each tuned to a particular type of environment or situation. In this paper, the modalities comprise a path planning method and a reactive motion strategy. While the robot navigates, a Hidden Markov Model continually estimates the most appropriate modality based on two types of information: context (information known a priori) and monitoring (evaluating unpredictable aspects of the current situation). The robot then uses the recommended modality, switching between one and another dynamically. Experimental validation with a Segway-RMP-based platform in an office environment shows that our approach enables failure mitigation while maintaining the safety of the platform. The robot is shown to reach its goal in the presence of: 1) unpredicted control errors, 2) unexpected map errors and 3) a large injected localisation fault.
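The HMM-based modality selection described in this abstract can be illustrated with simple forward filtering over two modalities. This is a toy sketch under invented assumptions - the transition matrix, observation model, and binary "monitoring alarm" observation are all made up for illustration, not taken from the paper:

```python
# Toy HMM forward filtering over two navigation modalities.
import numpy as np

states = ["planner", "reactive"]
T = np.array([[0.9, 0.1],   # modalities are assumed persistent
              [0.1, 0.9]])
# Observation likelihood P(obs | state) for obs in {0: ok, 1: alarm};
# the reactive modality is assumed more appropriate when monitoring flags trouble.
B = np.array([[0.8, 0.2],   # planner: alarm unlikely
              [0.3, 0.7]])  # reactive: alarm likely

def filter_modality(observations, prior=np.array([0.5, 0.5])):
    """Return the most likely modality after each observation (0=ok, 1=alarm)."""
    belief = prior.copy()
    picks = []
    for obs in observations:
        belief = T.T @ belief          # predict: propagate through transitions
        belief = belief * B[:, obs]    # update: weight by observation likelihood
        belief = belief / belief.sum() # normalise
        picks.append(states[int(np.argmax(belief))])
    return picks

picks = filter_modality([0, 0, 1, 1, 1, 0])
```

The persistence terms on the diagonal of `T` are what prevent the robot from thrashing between modalities on a single noisy observation, which matches the paper's motivation for estimating the modality probabilistically rather than switching reactively.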
Upcroft, B, Makarenko, A, Brooks, A, Moser, M, Alempijevic, A, Donikian, A, Sprinkle, J, Uther, W & Fitch, R 2012, 'Empirical Evaluation of an Autonomous Vehicle in an Urban Environment' in Experience from the DARPA Urban Challenge, Springer-Verlag Berlin, Berlin, pp. 273-301.
Operation in urban environments creates unique challenges for research in autonomous ground vehicles. Due to the presence of tall trees and buildings in close proximity to traversable areas, GPS outage is likely to be frequent and physical hazards pose r
Maleki, B, Alempijevic, A & Vidal Calleja, T 2018, 'Continuous Optimization Framework for Depth Sensor Viewpoint Selection', Workshop on the Algorithmic Foundations of Robotics, Merida, Mexico.
Distinguishing differences between areas represented with point cloud data is generally approached by choosing an optimal viewpoint. The most informative view of a scene ultimately provides optimal coverage of distinct points both locally and globally while accounting for the distance to the foci of attention. Measures of surface saliency, related to curvature inconsistency, accentuate differences in shape and are coupled with viewpoint selection approaches. As there is no analytical solution for optimal viewpoint selection, candidate viewpoints are generally discretely sampled and evaluated for information, requiring (near) exhaustive combinatorial searches. We present a consolidated optimization framework for optimal viewpoint selection with a continuous cost function and an analytically derived Jacobian that incorporates view angle, vertex normals and measures of task-related surface information relative to viewpoint. We provide a mechanism in the cost function to incorporate sensor attributes such as operating range, field of view and angular resolution. The framework is shown to compete favorably with state-of-the-art approaches to viewpoint selection while significantly reducing the number of viewpoints to be evaluated in the process.
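The contrast between discrete viewpoint sampling and a continuous cost can be sketched with a toy differentiable cost that rewards viewing surface points along their normals from a preferred range, minimised by gradient descent. Everything here is an invented assumption for illustration - the cost terms, weights, and the numerical gradient stand in for the paper's analytic Jacobian:

```python
# Toy continuous viewpoint optimisation by gradient descent.
import numpy as np

points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]] * 3)    # flat patch facing +z
saliency = np.array([1.0, 1.0, 2.0])          # task-related surface weights (assumed)
preferred_range = 2.0                          # assumed sensor operating range

def cost(view):
    rays = view - points                                  # point-to-viewpoint vectors
    d = np.linalg.norm(rays, axis=1)
    alignment = np.sum(rays / d[:, None] * normals, axis=1)  # cos of view angle
    # Penalise distance from the preferred range; reward normal alignment.
    return float(np.sum(saliency * ((d - preferred_range) ** 2 - alignment)))

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient (in place of an analytic Jacobian)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return g

view = np.array([3.0, 3.0, 0.5])
for _ in range(500):
    view = view - 0.01 * numerical_grad(cost, view)

final_cost = cost(view)
```

Because the cost is continuous in the viewpoint, a local optimiser replaces the (near) exhaustive evaluation of discretely sampled candidate viewpoints that the abstract describes.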
Virgona, A, Alempijevic, A & Vidal-Calleja, T 2018, 'Socially constrained tracking in crowded environments using shoulder pose estimates', Proceedings - IEEE International Conference on Robotics and Automation, IEEE International Conference on Robotics and Automation, IEEE, Brisbane, QLD, Australia, pp. 4555-4562.
© 2018 IEEE. Detecting and tracking people is a key requirement in the development of robotic technologies intended to operate in human environments. In crowded environments such as train stations this task is particularly challenging due to the high numbers of targets and frequent occlusions. In this paper we present a framework for detecting and tracking humans in such crowded environments in terms of 2D pose (x, y, θ). The main contributions are a method for extracting pose from the most visible parts of the body in a crowd, the head and shoulders, and a tracker which leverages social constraints regarding people's orientation, movement and proximity to one another to improve robustness in this challenging environment. The framework is evaluated on two datasets: one captured in a lab environment with ground truth obtained using a motion capture system, and the other captured in a busy inner-city train station. Pose errors are reported against the ground truth and the tracking results are then compared with a state-of-the-art person tracking framework.
Ulapane, N, Nguyen, LV, Valls Miro, J, Alempijevic, A & Dissanayake, G 2017, 'Designing A Pulsed Eddy Current Sensing Set-up for Cast Iron Thickness Assessment', Proceedings of the 12th IEEE Conference on Industrial Electronics and Applications, IEEE Conference on Industrial Electronics and Applications, IEEE, Siem Reap, Cambodia, pp. 901-906.
Pulsed Eddy Current (PEC) sensors possess proven functionality in measuring ferromagnetic material thickness. However, most commercial PEC service providers as well as researchers have investigated and claim functionality of sensors on homogeneous structural steels (steel grade Q235, for example). In this paper, we present design steps for a PEC sensing set-up to measure the thickness of cast iron, which, unlike steel, is a highly inhomogeneous and non-linear ferromagnetic material. The set-up includes a PEC sensor, sensor excitation and reception circuits, and a unique signal processing method. The signal processing method yields a signal feature which behaves as a function of thickness. The signal feature has the desirable characteristic of being only weakly influenced by lift-off. Experimental results show that the set-up is usable for Non-destructive Evaluation (NDE) applications such as cast iron water pipe assessment.
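The idea of a decay-rate feature that depends on thickness but not on lift-off (the theme of this and the two PEC journal papers above) can be illustrated with a simplified exponential model of the late-time detector voltage. The model and its constants are assumptions for illustration, not the paper's signal processing method:

```python
# Toy PEC decay-rate feature: late-time voltage modelled as A * exp(-beta * t),
# where beta depends on wall thickness and the amplitude A on lift-off.
import numpy as np

def decay_voltage(t, thickness_mm, liftoff_mm):
    beta = 50.0 / thickness_mm ** 2        # thinner wall -> faster decay (toy model)
    amplitude = 1.0 / (1.0 + liftoff_mm)   # lift-off scales amplitude only
    return amplitude * np.exp(-beta * t)

def decay_rate_feature(t, v):
    """Least-squares slope of log(v) vs t, i.e. the estimated -beta."""
    slope, _intercept = np.polyfit(t, np.log(v), 1)
    return slope

t = np.linspace(0.01, 0.2, 50)
f_thin = decay_rate_feature(t, decay_voltage(t, 5.0, 1.0))
f_thick = decay_rate_feature(t, decay_voltage(t, 10.0, 1.0))
f_thick_lifted = decay_rate_feature(t, decay_voltage(t, 10.0, 5.0))
```

In this model, changing lift-off shifts the intercept of log|V| but not its slope, so the fitted slope is a lift-off-invariant proxy for thickness - the desirable characteristic the abstract attributes to the real signal feature.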
Collart, J, Fitch, R & Alempijevic, A 2017, 'Motion States Inference through 3D Shoulder Gait Analysis and Hierarchical Hidden Markov Models', Australasian Conference on Robotics and Automation 2017, Australasian Conference on Robotics and Automation, ARAA, Sydney, Australia, pp. 1-8.
Automatically inferring human intention from walking movements is an important research concern in robotics and other fields of study. It is generally derived from temporal motion of limb position relative to the body. These changes can also be reflected in the change of stance and gait. Conventional systems relying on gait are usually based on tracking the lower body motion (hip, foot) and are extracted from monocular camera data. However, such data can be inaccessible in crowded environments where occlusions of the lower body are prevalent. This paper proposes a novel approach to utilize upper body 3D-motion and Hierarchical Hidden Markov Models to estimate human ambulatory states, such as quietly standing, starting to walk (gait initiation), walking (gait cycle), or stopping (gait termination). Methods have been tested on real data acquired through a motion capture system where foot measurements (heels and toes) were used as ground truth data for labeling the states to train and test the models. Current results demonstrate the feasibility of using such a system to infer lower-body motion states and sub-states through observations of 3D shoulder motion online. Our results enable applications in situations where only upper body motion is readily
Quin, PD, Paul, G, Alempijevic, A & Liu, D 2016, 'Exploring in 3D with a Climbing Robot: Selecting the Next Best Base Position on Arbitrarily-Oriented Surfaces', Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Daejeon, Korea, pp. 5770-5775.
This paper presents an approach for selecting the next best base position for a climbing robot so as to observe the highest information gain about the environment. The robot is capable of adhering to, moving along, and transitioning between surfaces with arbitrary orientations. This approach samples known surfaces, and takes into account the robot kinematics, to generate a graph of valid attachment points from which the robot can either move to other positions or make observations of the environment. The information value of nodes in this graph is estimated and a variant of A* is used to traverse the graph and discover the most worthwhile node that is reachable by the robot. This approach is demonstrated in simulation and shown to allow a 7 degree-of-freedom inchworm-inspired climbing robot to move to positions in the environment from which new information can be gathered.
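The graph-search idea in this abstract - score reachable attachment points by estimated information value against travel cost - can be sketched on a toy graph. The utility function and the use of plain Dijkstra (rather than the paper's A* variant) are illustrative assumptions:

```python
# Toy next-best-base selection: Dijkstra over an attachment-point graph,
# then pick the node with the best information-per-travel-cost trade-off.
import heapq

def next_best_base(graph, info, start):
    """graph: node -> {neighbour: travel_cost}; info: node -> estimated gain."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    # Highest estimated information gain, discounted by travel cost (assumed form).
    return max(dist, key=lambda n: info[n] / (1.0 + dist[n]))

graph = {"a": {"b": 1.0, "c": 4.0},
         "b": {"a": 1.0, "c": 1.0},
         "c": {"a": 4.0, "b": 1.0}}
info = {"a": 0.1, "b": 0.5, "c": 2.0}
best = next_best_base(graph, info, "a")
```

Here node "c" wins despite being farther: going via "b" keeps its travel cost low enough that its larger estimated gain dominates.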
Colborne-Veel, P, Kirchner, N & Alempijevic, A 2015, 'Towards more train paths through early passenger intention inference', 2015 ATRF Conference Proceeding, Australasian Transport Research Forum, ATRF, Sydney, Australia, pp. 1-14.
In public train stations, the designed wayfinding tends to induce individuals to conform to specific egress patterns. Whilst this is desirable for a number of reasons, it can accumulate into congestion at specific points in the station, which, in turn, can increase dwell time; for example, loading and unloading time increases with concentrations of people trying to load/unload onto the same carriage. Clearly, an influencing strategy that is more responsive to the current station situation could have advantages.
Our prior research studies in Perth Station demonstrated the feasibility of reliably and predictably influencing passengers' egress patterns in real time during operations. This capability suggests the possibility of actively counterbalancing the egress-alternatives while maintaining wayfinding. However, the prerequisite for such a capability is knowledge of passengers' intentions at a point in their journey where viable egress-alternatives to their destination exist.
This work details an approach towards an early (in the passenger journey) passenger intention inference system necessary to enable active egress-alternative influencing. Our contextually grounded approach infers intention through reasoning upon observed system and passenger cues in conjunction with a priori knowledge of how train stations are used. The empirical validation of our intention inference system, which was conducted with data acquired during operations on a platform in Brisbane's Central train station in Queensland, is presented and discussed. The findings are then employed to argue the feasibility of an influencing system to reduce passenger congestion and the potential service impacts.
Virgona, A, Kirchner, N & Alempijevic, A 2015, 'Sensing and Perception Technology to Enable Real Time Monitoring of Passenger Movement Behaviours Through Congested Rail Stations', 2015 ATRF Conference Proceeding, Australasian Transport Research Forum, ATRF, Sydney.
The real time monitoring of passenger movement and behaviour through public transport environments including precincts, concourses, platforms and train vestibules would enable operators to more effectively manage congestion at a whole-of-station level. While existing crowd monitoring technologies allow operators to monitor crowd densities at critical locations and react to overcrowding incidents, they do not necessarily provide an understanding of the cause of such issues. Congestion is a complex phenomenon involving the movements of many people through a set of spaces, and monitoring these spaces requires tracking large numbers of individuals. To do this, traditional surveillance technologies might be used, but at the expense of introducing privacy concerns. Scalability is also a problem, as complete sensor coverage of entire rail station precinct, concourse and platform areas potentially requires a high number of sensors, increasing costs. In light of this, there is a need for sensing technology that collects data from a set of 'sparse sensors', each with a limited field of view, but which is capable of forming a network that can track the movement and behaviour of high numbers of associated individuals in a privacy-sensitive manner. This paper presents work towards the core crowd sensing and perception technology needed to enable such a capability. Building on previous research using three-dimensional (3D) depth camera data for person detection, a privacy-friendly approach to tracking and recognising individuals is discussed. The use of a head-to-shoulder signature is proposed to enable association between sensors. Our efforts to improve the reliability of this measure for this task are outlined and validated using data captured at Brisbane Central rail station.
Collart, J, Alempijevic, A, Kirchner, N & Zeibots, M 2015, 'Foundation technology for developing an autonomous Complex Dwell-time Diagnostics (CDD) Tool', Australasian Transport Research Forum 2015 Proceedings, Australasian Transport Research Forum, ATRF, Sydney, Australia, pp. 1-13.
As the demand for rail services grows, intense pressure is placed on stations at the centre of rail networks where large crowds of rail passengers alight and board trains during peak periods. The time it takes for this to occur — the dwell-time — can become extended when high numbers of people congest and cross paths. Where a track section is operating at short headways, extended dwell-times can cause delays to scheduled services that can in turn cause a cascade of delays that eventually affect entire networks. Where networks are operating at close to their ceiling capacity, dwell-time management is essential and in most cases requires the introduction of special operating procedures.
This paper details our work towards developing an autonomous Complex Dwell-time Diagnostics (CDD) Tool — a low cost technology, capable of providing information on multiple dwell events in real time. At present, rail operators are not able to access reliable and detailed enough data on train dwell operations and passenger behaviour. This is because much of the necessary data has to be collected manually. The lack of rich data means train crews and platform staff are not empowered to do all they could to potentially stabilise and reduce dwell-times. By better supporting service providers with high quality data analysis, the number of viable train paths can be increased, potentially delaying the need to invest in high cost hard infrastructures such as additional tracks.
The foundation technology needed to create the CDD discussed in this paper comprises a 3D image data based autonomous system capable of detecting dwell events during operations and then creating business information that can be accessed by service providers in real time during rail operations. Initial tests of the technology have been carried out at Brisbane Central rail station. A discussion of the results to date is provided, along with their implications for next steps.
Ulapane, N, Alempijevic, A, Vidal-Calleja, T, Valls Miro, J, Rudd, J & Roubal, M 2014, 'Gaussian process for interpreting pulsed eddy current signals for ferromagnetic pipe profiling', Industrial Electronics and Applications (ICIEA), 2014 IEEE 9th Conference on, IEEE Conference on Industrial Electronics and Applications, IEEE, Hangzhou, China, pp. 1762-1767.
Quin, PD, Alempijevic, A, Paul, G & Liu, D 2014, 'Expanding Wavefront Frontier Detection: An Approach for Efficiently Detecting Frontier Cells', https://ssl.linklings.net/conferences/acra/acra2014_proceedings/views/b…, Australasian Conference on Robotics and Automation, Australasian Robotics and Automation Association, Melbourne, pp. 1-10.
Frontier detection is a key step in many robot exploration algorithms. The more quickly frontiers can be detected, the more efficiently and rapidly exploration can be completed. This paper proposes a new frontier detection algorithm called Expanding Wavefront Frontier Detection (EWFD), which uses the frontier cells from the previous timestep as a starting point for detecting the frontiers in the current timestep. As an alternative to simply comparing against the naive frontier detection approach of evaluating all cells in a map, a new benchmark algorithm for frontier detection is also presented, called Naive Active Area frontier detection, which operates in bounded constant time. EWFD and NaiveAA are evaluated in simulations and the results compared against existing state-of-the-art frontier detection algorithms, such as Wavefront Frontier Detection and Incremental-Wavefront Frontier Detection.
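To make the notion of a "frontier cell" concrete, here is the naive baseline that the abstract contrasts against - a sweep over every cell of a 2D occupancy grid, flagging free cells with at least one unknown neighbour. The grid encoding (0 = free, 1 = occupied, -1 = unknown) is an assumption; EWFD's contribution is precisely avoiding this full-map sweep by seeding from the previous timestep's frontiers:

```python
# Naive frontier detection on a 2D occupancy grid:
# a frontier cell is a free cell with at least one unknown 4-neighbour.
import numpy as np

def frontier_cells(grid):
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break                # one unknown neighbour suffices
    return frontiers

grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])
cells = frontier_cells(grid)
```

On this grid only (0, 1) and (2, 2) border unknown space, so they are the frontiers an exploration planner would drive toward.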
Kirchner, N, Alempijevic, A, Virgona, A, Dai, X, Ploger, PG & Venkat, RK 2014, 'A robust people detection, tracking, and counting system', Proceedings of the Australasian Conference on Robotics and Automation - A robust people detection, tracking, and counting system, Australasian Conference on Robotics and Automation, Australasian Robotics and Automation Association, Melbourne, Australia, pp. 1-8.
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique people is needed to accurately count people in the real world. To accomplish the people counting task, a system robust in people detection, tracking and identification is needed.
This paper presents our approach for robust real-world people detection, tracking and counting using a PrimeSense RGB-D camera. Our past research, upon which we built, is highlighted, and novel methods are presented to solve the problems of sensor self-localization, false negatives due to people physically interacting with the environment, and track misassociation due to crowdedness.
An empirical evaluation of our approach in a major Sydney public train station (N = 420) was conducted, and results demonstrating our methods in the complexities of this challenging environment are presented.
Quin, PD, Paul, G, Alempijevic, A, Liu, D & Dissanayake, G 2013, 'Efficient Neighbourhood-Based Information Gain Approach for Exploration of Complex 3D Environments', 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE International Conference on Robotics and Automation, IEEE, Karlsruhe, Germany, pp. 1343-1348.
This paper presents an approach for exploring a complex 3D environment with a sensor mounted on the end effector of a robot manipulator. In contrast to many current approaches which plan as far ahead as possible using as much environment information as is available, our approach considers only a small set of poses (vector of joint angles) neighbouring the robot's current pose in configuration space. Our approach is compared to an existing exploration strategy for a similar robot. Our results demonstrate a significant decrease in the number of information gain estimation calculations that need to be performed, while still gathering an equivalent or increased amount of information about the environment.
Quin, PD, Paul, G, Liu, D & Alempijevic, A 2013, 'Nearest Neighbour Exploration with Backtracking for Robotic Exploration of Complex 3D Environments', Proceedings of Australasian Conference on Robotics and Automation, Australasian Conference on Robotics and Automation, Australian Robotics & Automation Association, Sydney, Australia, pp. 1-8.
Kodagoda, S, Alempijevic, A, Huang, S, De La Villefromoy, MJ, Diponio, M & Cogar, LJ 2013, 'Moving Away from Simulations: Innovative Assessment of Mechatronic Subjects Using Remote Laboratories', 2013 International Conference on Information Technology Based Higher Education and Training, International Conference on Information Technology Based Higher Education and Training, IEEE, Antalya, Turkey, pp. 1-5.
In response to the rapid growth of online teaching and learning, the University of Technology, Sydney (UTS) has been developing a number of remotely accessible laboratories. In this paper, we present our newly developed remote lab robotic rig that uniquely addresses challenges in Mechatronics courses. The rig contains a mobile robotic platform equipped with various sensory modules placed in a maze, with a pantograph power system enabling continuous use of the platform. The software architecture employed allows users to develop their simulations using the Player/Stage simulator and subsequently upload the code to the robotic rig for real-time testing. This paper presents the motivation, design concepts and analysis of students' feedback on their use of the remote lab robotics rig. Survey results from a pilot study show participants strongly agreeing that the remote lab contributes to a deeper understanding of the subject matter and a flexible learning process, and inspires research in robotics.
Alempijevic, A, Fitch, R & Kirchner, NG 2013, 'Bootstrapping Navigation and Path Planning Using Human Positional Traces', IEEE International Conference on Robotics and Automation, IEEE International Conference on Robotics and Automation, IEEE, Karlsruhe, Germany, pp. 1234-1239.View/Download from: UTS OPUS or Publisher's site
Navigating and path planning in environments with limited a priori knowledge is a fundamental challenge for mobile robots. Robots operating in human-occupied environments must also respect sociocontextual boundaries such as personal workspaces. There is a need for robots to be able to navigate in such environments without having to explore and build an intricate representation of the world. In this paper, a method for supplementing directly observed environmental information with indirect observations of occupied space is presented. The proposed approach enables the online inclusion of novel human positional traces and environment information into a probabilistic framework for path planning. Encapsulation of sociocontextual information, such as identifying areas that people tend to use to move through the environment, is inherently achieved without supervised learning or labelling. Our method bootstraps navigation with indirectly observed sensor data, and leverages the flexibility of the Gaussian process (GP) for producing a navigational map that sampling-based path planners such as Probabilistic Roadmaps (PRM) can effectively utilise. Empirical results on a mobile platform demonstrate that a robot can efficiently reach a desired goal in a socially appropriate manner by exploiting the navigational map in our Bayesian statistical framework.
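The core idea above, regressing a smooth navigational preference from human positional traces with a Gaussian process and handing the result to a sampling-based planner, can be sketched minimally. This is not the paper's implementation: the squared-exponential kernel, the 1/0 labels, and the corner "unvisited" points are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of 2-D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_traversability(traces, negatives, queries, noise=1e-2):
    """GP regression over human positional traces (label 1) and assumed
    unvisited points (label 0), giving a smooth traversability estimate
    that a sampling-based planner such as a PRM could query."""
    X = np.vstack([traces, negatives])
    y = np.hstack([np.ones(len(traces)), np.zeros(len(negatives))])
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)          # GP posterior mean weights
    return rbf_kernel(queries, X) @ alpha

traces = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0], [3.0, -0.1]])
negatives = np.array([[-4.0, -4.0], [4.0, 4.0], [-4.0, 4.0], [4.0, -4.0]])
scores = gp_traversability(traces, negatives,
                           np.array([[1.5, 0.0], [1.5, 4.0]]))
# The query lying on the observed trace scores higher than the far-away one.
```

A PRM would then bias its samples, or weight its edges, by this score, so paths follow corridors people actually use.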
Himstedt, M, Alempijevic, A, Zhao, L, Huang, S & Boehme, H 2012, 'Towards robust vision-based self-localization of vehicles in dense urban environments', 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Algarve, Portugal, pp. 3152-3157.View/Download from: UTS OPUS or Publisher's site
Self-localization of ground vehicles in densely populated urban environments poses a significant challenge. The presence of tall buildings in close proximity to traversable areas limits the use of GPS-based positioning techniques in such environments. This paper presents an approach to global localization on a hybrid metric-topological map using a monocular camera and wheel odometry. The global topology is built upon spatially separated reference places represented by local image features. In contrast to other approaches, we employ a feature selection scheme ensuring a more discriminative representation of reference places while simultaneously rejecting a multitude of features caused by dynamic objects. Through fusion with additional local cues, the reference places are assigned discrete map positions allowing metric localization within the map. The self-localization is carried out by associating observed visual features with those stored for each reference place. Comprehensive experiments in a dense urban environment, covering a time difference of about 9 months, are carried out, demonstrating the robustness of our approach in environments subject to significant dynamic and environmental change.
Hu, G, Huang, S, Zhao, L, Alempijevic, A & Dissanayake, G 2012, 'A Robust RGB-D SLAM algorithm', 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Algarve, Portugal, pp. 1174-1179.View/Download from: UTS OPUS or Publisher's site
Recently RGB-D sensors have become very popular in the area of Simultaneous Localisation and Mapping (SLAM). The major advantage of these sensors is that they provide a rich source of 3D information at relatively low cost. Unfortunately, these sensors in their current forms only have a range accuracy of up to 4 metres. Many techniques that perform SLAM using RGB-D cameras rely heavily on the depth data and are restricted to office-type, geometrically structured environments. In this paper, a switching-based algorithm is proposed to heuristically choose between RGB-BA and RGBD-BA based local map building. Furthermore, a low-cost and consistent optimisation approach is used to join these maps. Thus the potential of both RGB and depth image information is exploited to perform robust SLAM in more general indoor cases. Validation of the proposed algorithm is performed by mapping a large-scale indoor scene where traditional RGB-D mapping techniques are not possible.
Kirchner, NG, Alempijevic, A & Virgona, A 2012, 'Head-To-Shoulder Signature for Person Recognition', Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE International Conference on Robotics and Automation, IEEE, St Paul, MN, USA, pp. 1226-1231.View/Download from: UTS OPUS or Publisher's site
Ensuring that an interaction is initiated with a particular and unsuspecting member of a group is a complex task. As a first step, the robot must effectively, expediently and reliably recognise humans as they carry on with their typical behaviours (in situ). A method is presented for constructing a scale- and viewing-angle-robust feature vector (from analysing a 3D pointcloud) designed to encapsulate the inter-person variations in the size and shape of people's head-to-shoulder region (the head-to-shoulder signature, HSS). Furthermore, a method for utilising this feature vector as the basis of person recognition via a Support Vector Machine is detailed. An empirical study was performed in which person recognition was attempted on in situ data collected from 25 participants over 5 days in an office environment. The results report a mean accuracy over the 5 days of 78.15% and a peak accuracy of 100% for 9 participants. Further, the results show considerably better-than-random (1/23 ≈ 4.3%) performance when the participants were in motion and unaware they were being scanned (52.11%), in motion and facing directly away from the sensor (36.04%), and after variations in their general appearance. Finally, the results show that the HSS has considerable ability to accommodate a person's head, shoulder and body rotation relative to the sensor, even in cases where the person faces directly away from the robot.
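A toy sketch of the kind of signature described above: slice the upper-body point cloud into horizontal bands and record each band's width, giving a normalised head/shoulder shape profile. The band count, the 0.5 m head-to-shoulder window, and the synthetic point clouds are all illustrative assumptions; the paper feeds such a vector into an SVM, which is omitted here.

```python
import numpy as np

def hss_feature(points, n_bins=8):
    """Hypothetical head-to-shoulder signature: per-band horizontal width
    of the top 0.5 m of a 3-D point cloud (columns are x, y, z)."""
    z = points[:, 2]
    top, bottom = z.max(), z.max() - 0.5          # assumed head+shoulder window
    edges = np.linspace(bottom, top, n_bins + 1)
    feats = np.zeros(n_bins)
    for i in range(n_bins):
        band = points[(z >= edges[i]) & (z < edges[i + 1])]
        if len(band):
            # width = spread of the band in the horizontal plane
            feats[i] = np.linalg.norm(band[:, :2].max(0) - band[:, :2].min(0))
    return feats / (feats.max() + 1e-9)           # scale normalisation

def synth_person(shoulder_width, n=500, seed=0):
    """Toy upper-body cloud: a narrow head above shoulders of a given width."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(1.2, 1.7, n)
    width = np.where(z > 1.55, 0.15, shoulder_width)   # head above 1.55 m
    xy = rng.uniform(-0.5, 0.5, (n, 2)) * width[:, None]
    return np.column_stack([xy, z])

a = hss_feature(synth_person(0.45))
b = hss_feature(synth_person(0.30, seed=1))
# Relative head-band width is smaller for the broader-shouldered person.
```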
Kirchner, NG, Alempijevic, A & Dissanayake, G 2011, 'Nonverbal Robot-Group Interaction Using an Imitated Gaze Cue', Proceedings of the 6th international conference on Human-robot interaction (HRI'11), Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), ACM, Lausanne, Switzerland, pp. 497-504.View/Download from: UTS OPUS or Publisher's site
Ensuring that a particular and unsuspecting member of a group is the recipient of a salient-item hand-over is a complicated interaction. The robot must effectively, expediently and reliably communicate its intentions to avert any tendency within the group towards antinormative behaviour. In this paper, we study how a robot can establish the participant roles of such an interaction using imitated social and contextual cues. We designed two gaze cues: the first was designed to discourage antinormative behaviour by individualising a particular member of the group, and the other to the contrary. We designed and conducted a field experiment (456 participants in 64 trials) in which small groups of people (between 3 and 20) assembled in front of the robot, which then attempted to pass a salient object to a particular group member by presenting a physical cue, followed by one of two variations of a gaze cue. Our results showed that presenting the individualising cue had a significant (z=3.733, p=0.0002) effect on the robot's ability to ensure that an arbitrary group member did not take the salient object and that the selected participant did.
O'Callaghan, ST, Singh, SP, Alempijevic, A & Ramos, FT 2011, 'Learning Navigational Maps by Observing Human Motion Patterns', IEEE International Conference on Robotics and Automation (ICRA'11), IEEE International Conference on Robotics and Automation, IEEE, Shanghai, China, pp. 4333-4340.View/Download from: UTS OPUS or Publisher's site
Observing human motion patterns is informative for social robots that share the environment with people. This paper presents a methodology to allow a robot to navigate in a complex environment by observing pedestrian positional traces. A continuous probabilistic function is determined using Gaussian process learning and used to infer the direction a robot should take in different parts of the environment. The approach learns and filters noise in the data, producing a smooth underlying function that yields more natural movements. Our method combines prior conventional planning strategies with the most probable trajectories followed by people in a principled statistical manner, and adapts itself online as more observations become available. The use of learning methods is automatic and requires minimal tuning compared to potential fields or spline function regression. This approach is demonstrated through testing in cluttered office and open forum environments using laser and vision sensing modalities. It yields paths that are similar to the expected human behaviour without any a priori knowledge of the environment or explicit programming.
Kirchner, NG, Alempijevic, A, Caraian, SA, Fitch, R, Hordern, DL, Hu, G, Paul, G, Richards, D, Singh, SP & Webb, SS 2010, 'RobotAssist - a Platform for Human Robot Interaction Research', Proceedings of the Australasian Conference on Robotics and Automation 2010 (ACRA 2010), Proceedings of the Australasian Conference on Robotics and Automation, Australasian Conference on Robotics and Automation, Brisbane, pp. 1-10.View/Download from: UTS OPUS
This paper presents RobotAssist, a robotic platform designed for use in human robot interaction research and for entry into the RoboCup@Home competition. The core autonomy of the system is implemented as a component-based software framework that allows for integration of operating system independent components, is designed to be expandable and integrates several layers of reasoning. The approaches taken to develop the core capabilities of the platform are described, namely: path planning in a social context, Simultaneous Localisation and Mapping (SLAM), human cue sensing and perception, and manipulable object detection and manipulation.
Alempijevic, A, Kodagoda, S & Dissanayake, G 2009, 'Cross-Modal Localization Through Mutual Information', IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2009), International Conference on Intelligent Robots and Systems, IEEE, St Louis, Missouri, pp. 5596-5602.View/Download from: UTS OPUS or Publisher's site
Relating information originating from disparate sensors observing a given scene is a challenging task, particularly when an appropriate model of the environment or the behaviour of any particular object within it is not available. One possible strategy to address this task is to examine whether the sensor outputs contain information which can be attributed to a common cause. In this paper, we present an approach to localise this embedded common information through an indirect method of estimating mutual information between all signal sources. The ability of L1 regularization to enforce sparseness of the solution is exploited to identify a subset of signals that are related to each other, from among a large number of sensor outputs. As opposed to the conventional L2 regularization, the proposed method leads to faster convergence with far fewer spurious associations. Simulation and experimental results are presented to validate the findings.
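The dependence measure underlying this line of work can be illustrated with a simple histogram estimate of mutual information: two streams driven by a common cause share high MI, while an unrelated stream does not. The estimator, bin count and synthetic signals below are illustrative assumptions; the L1-regularised subset selection from the paper is not shown.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information between two 1-D signals (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
cause = rng.normal(size=5000)                     # common cause
sensor_a = cause + 0.1 * rng.normal(size=5000)    # two sensors observing it
sensor_b = 2.0 * cause + 0.1 * rng.normal(size=5000)
unrelated = rng.normal(size=5000)                 # an independent signal
# MI(sensor_a, sensor_b) is large; MI(sensor_a, unrelated) is near zero.
```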
Alempijevic, A, Kodagoda, S & Dissanayake, G 2009, 'Mutual Information Based Data Association', Fifth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2009), International Conference on Intelligent Sensors, Sensor Networks and Information Processing, IEEE, Melbourne, Australia, pp. 97-102.View/Download from: UTS OPUS or Publisher's site
Relating information originating from disparate sensors, without any attempt to model the environment or the behaviour of any particular object within it, is a challenging task. Inspired by human perception, the focus of this paper is on observing objects moving in space using sensors that operate on different physical principles, and on the fact that motion has, in principle, greater power to specify properties of an object than purely spatial information captured as a single observation in time. The contributions of this paper include the development of a novel strategy for detecting a set of signals that are statistically dependent and correspond to each other through a common cause. Mutual information is proposed as the measure of statistical dependence. The algorithm is evaluated through simulations and three application domains: (1) the grouping problem in images, (2) the data association problem for moving observers with dynamic targets, and (3) multi-modal sensor fusion.
Sehestedt, SA, Kodagoda, S, Alempijevic, A & Dissanayake, G 2009, 'Efficient Learning of Motion Patterns for Robots', Proceedings of the 2009 Australasian Conference on Robotics and Automation, Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association Inc., Sydney, Australia, pp. 1-7.View/Download from: UTS OPUS
In this work we present a novel approach to learning the dynamics of an environment perceived by a mobile robot. More precisely, we are interested in general motion patterns occurring in the environment rather than object-dependent ones. A sampling algorithm is used to update a sample set, which represents observed dynamics, using Bayes' rule. From this set of samples a Hidden Markov Model is learnt online, which allows fast and efficient matching and prediction in the learnt model. Such models are useful for a number of tasks such as path planning, localisation and compliant motion. The approach is validated through simulation as well as experiments.
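As a much-simplified stand-in for the online-learnt HMM, the idea of learning motion patterns for prediction can be sketched with a Markov transition matrix estimated from observed traces over discrete cells. The corridor discretisation and the toy traces are illustrative assumptions.

```python
import numpy as np

def learn_transitions(traces, n_states):
    """Count state-to-state transitions in observed traces and normalise
    into a Markov transition matrix; a small prior avoids zero rows."""
    T = np.full((n_states, n_states), 1e-6)
    for trace in traces:
        for s, s_next in zip(trace, trace[1:]):
            T[s, s_next] += 1.0
    return T / T.sum(axis=1, keepdims=True)

# Toy corridor of cells 0..4: people mostly walk left-to-right.
traces = [[0, 1, 2, 3, 4], [0, 1, 2, 3], [1, 2, 3, 4], [2, 1, 0]]
T = learn_transitions(traces, 5)
predicted = int(T[2].argmax())   # most likely next cell after cell 2
```

The learnt matrix supports exactly the fast matching and prediction the abstract refers to: given a current cell, the most probable successor is a single row lookup.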
Alempijevic, A, Kodagoda, S & Dissanayake, G 2007, 'Sensor Registration for Robotic Applications', Springer Tracts in Advanced Robotics: Volume 42: Proceedings of the 6th International Conference on Field and Service Robotics, International Conference on Field and Service Robotics, Springer, France, pp. 233-242.View/Download from: UTS OPUS or Publisher's site
Multi-sensor data fusion plays an essential role in most robotic applications. Appropriate registration of information from different sensors is a fundamental requirement in multi-sensor data fusion. Registration requires significant effort, particularly when sensor signals do not have direct geometric interpretations, observer dynamics are unknown and occlusions are present. In this paper, we propose Mutual Information (MI) based sensor registration, which exploits the effect of a common cause in the observed space on the sensor outputs and does not require any prior knowledge of the relative poses of the observers. Simulation results are presented to substantiate the claim that the algorithm is capable of registering the sensors in the presence of substantial observer dynamics.
Kodagoda, S, Alempijevic, A, Sehestedt, SA & Dissanayake, G 2007, 'Towards Improving Driver Situation Awareness at Intersections', the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2007), IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, San Diego, California, pp. 3739-3744.View/Download from: UTS OPUS or Publisher's site
Providing safety critical information to the driver is vital in reducing road accidents, especially at intersections. Intersections are complex to deal with due to the presence of a large number of vehicle and pedestrian activities, and possible occlusions. Information available from only the sensors on-board a vehicle has limited value in this scenario. In this paper, we propose to utilize sensors on-board the vehicle of interest as well as the sensors that are mounted on nearby vehicles to enhance the driver's situation awareness. The resulting major research challenge of sensor registration with moving observers is solved using a mutual information based technique. The responses of the sensors to common causes are identified and exploited for computing their unknown relative locations. Experimental results for a mock-up traffic intersection, in which mobile robots equipped with laser range finders are used, are presented to demonstrate the efficacy of the proposed technique.
Kodagoda, S, Sehestedt, SA, Alempijevic, A, Zhang, Z, Donikian, A & Dissanayake, G 2007, 'Towards an Enhanced Driver Situation Awareness System', Second International Conference on Industrial and Information Systems, IEEE International Conference on Industrial and Information Systems, IEEE, Sri Lanka, pp. 295-300.View/Download from: UTS OPUS or Publisher's site
This paper outlines our current research agenda to achieve enhanced driver situation awareness. A novel approach that incorporates information gathered from sensors mounted on the neighboring vehicles, in the road infrastructure as well as onboard sensory information is proposed. A solution to the fundamental issue of registering data into a common reference frame when the relative locations of the sensors themselves are changing is outlined. A description of the vehicle test bed, experimental results from information gathered from various onboard sensors, and preliminary results from the sensor registration algorithm are presented.
Sehestedt, SA, Kodagoda, S, Alempijevic, A & Dissanayake, G 2007, 'Efficient Lane Detection and Tracking in Urban Environments', third European Conference on Mobile Robots, European Conference on Mobile Robots, ECMR, Germany, pp. 78-83.View/Download from: UTS OPUS
Sehestedt, SA, Kodagoda, S, Alempijevic, A & Dissanayake, G 2007, 'Robust Lane Detection in Urban Environments', Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, USA, pp. 123-128.View/Download from: UTS OPUS or Publisher's site
Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle-mounted laser range scanner and a CCD camera. Through experimentation, we have shown that a clustered particle filter can be used to efficiently extract lane markings.
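The "weak model" idea above can be sketched with a minimal 1-D particle filter tracking the lateral offset of a single lane marking: the only assumption is smoothness (a random-walk motion model), with no road-geometry prior. The noise levels, measurements and single-marking simplification are illustrative assumptions; the paper clusters particles to track multiple markings.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=0.05, meas_std=0.2):
    """One predict/update/resample cycle of a 1-D particle filter tracking
    the lateral offset (metres) of a lane marking."""
    particles = particles + rng.normal(0.0, motion_std, len(particles))  # predict
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights = weights / weights.sum()                                    # update
    idx = rng.choice(len(particles), size=len(particles), p=weights)     # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
particles = rng.uniform(-2.0, 2.0, 500)       # unknown initial marking offset
weights = np.full(500, 1.0 / 500)
for z in [0.48, 0.52, 0.50, 0.49, 0.51]:      # noisy marking detections near 0.5 m
    particles, weights = particle_filter_step(particles, weights, z, rng)
estimate = particles.mean()                   # converges towards 0.5 m
```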
Alempijevic, A, Kodagoda, S, Underwood, J, Kumar, S & Dissanayake, G 2006, 'Mutual Information Based Sensor Registration and Calibration', Proceedings of the 2006 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Beijing, China, pp. 25-30.View/Download from: UTS OPUS or Publisher's site
Knowledge of calibration, which defines the locations of sensors relative to each other, and registration, which relates sensor responses due to the same physical phenomena, is essential in order to fuse information from multiple sensors. In this paper, a mutual information (MI) based approach for automatic sensor registration and calibration is presented. Unsupervised learning of a nonparametric sensing model, by maximizing mutual information between signal streams, is used to relate information from different sensors, allowing unknown sensor registration and calibration to be determined. Experiments conducted in an office environment are used to illustrate the effectiveness of the proposed technique. Two laser sensors are used to capture people moving in an arbitrary manner in the environment, and MI from a number of attributes of the motion is used to relate the signal streams from the sensors. Sensor registration and calibration is thus achieved without using artificial patterns or pre-specified motions.
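The MI-maximisation principle can be illustrated on a toy registration problem: two streams observe the same common-cause signal, one delayed by an unknown number of samples, and the offset is recovered as the shift that maximises mutual information. The signals, the 7-sample delay and the histogram MI estimator are illustrative assumptions, not the paper's motion-attribute streams.

```python
import numpy as np

def hist_mi(x, y, bins=12):
    """Histogram estimate of mutual information between two 1-D signals (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def register_by_mi(a, b, max_lag=20):
    """Recover the sample offset between two streams driven by a common
    cause, by maximising MI over candidate shifts of stream b."""
    lags = list(range(-max_lag, max_lag + 1))
    scores = [hist_mi(a[max_lag:-max_lag],
                      np.roll(b, lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(scores))]

rng = np.random.default_rng(0)
cause = rng.normal(size=2000)                       # common-cause signal
sensor_a = cause + 0.05 * rng.normal(size=2000)
sensor_b = np.roll(cause, 7) + 0.05 * rng.normal(size=2000)  # delayed 7 samples
lag = register_by_mi(sensor_a, sensor_b)            # -7 undoes the 7-sample delay
```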
Kodagoda, S, Alempijevic, A, Underwood, J, Kumar, S & Dissanayake, G 2006, 'Sensor Registration and Calibration Using Moving Targets', Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision, International Conference on Control, Automation, Robotics and Vision, IEEE, Singapore, pp. 830-835.View/Download from: UTS OPUS or Publisher's site
Multimodal sensor registration and calibration are crucially important aspects in distributed sensor fusion. Unknown relationships between sensors and the joint probability distribution between sensory signals make sensor fusion nontrivial. In this paper, we adopt a Mutual Information (MI) based approach for sensor registration and calibration. It is based on unsupervised learning of a nonparametric sensing model by maximizing mutual information between signal streams. Experiments were carried out in an office-like environment with two laser sensors capturing arbitrarily moving people. Attributes of the moving targets are used. Problems due to target occlusions are alleviated by a multiple-model tracker. The registration and calibration methodology does not require any artificially generated patterns or motions, unlike other calibration methodologies.
Alempijevic, A & Dissanayake, G 2004, 'An Efficient Algorithm for Line Extraction from Laser Scans', Proceedings of the 2004 IEEE Conference on Robotics, Automation and Mechatronics (RAM), IEEE Conference on Robotics, Automation and Mechatronics, IEEE R&A Society Singapore Chapter, Singapore, pp. 970-974.View/Download from: UTS OPUS
In this paper, an algorithm for extracting line segments from information gathered by a laser rangefinder is presented. The range scan is processed to compute a parameter that is invariant to the position and orientation of the straight lines present. This parameter is then used to identify observations that potentially belong to straight lines and to compute the slopes of these lines. A log-Hough transform, which explores only a small region of the Hough space identified by the computed slopes, is then used to find the equations of the lines present. The proposed method thus combines the robustness of the Hough transform technique with the inherent efficiency of line fitting strategies, while carrying out all computation in the sensor coordinate frame, yielding a fast and robust algorithm for line extraction from laser range scans. Two practical examples are presented to demonstrate the efficacy of the algorithm and compare its performance to traditional techniques.
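The Hough-transform voting that the paper speeds up can be sketched minimally: each scan point votes for every (theta, rho) line through it, and accumulator peaks are extracted as lines. This is a plain full-space Hough over a toy wall, not the paper's method, which restricts theta to slopes found via the invariant parameter and votes in a log-rho space.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, threshold=20):
    """Minimal Hough transform over 2-D scan points using the normal form
    x*cos(theta) + y*sin(theta) = rho; returns (theta, rho) peaks."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(points).max() * np.sqrt(2)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one vote per theta
        r_idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), r_idx] += 1
    peaks = np.argwhere(acc >= threshold)
    return [(thetas[t], r / (n_rho - 1) * 2 * rho_max - rho_max) for t, r in peaks]

# A wall at y = 2 seen as 40 collinear scan points in the sensor frame.
pts = np.column_stack([np.linspace(-3, 3, 40), np.full(40, 2.0)])
lines = hough_lines(pts, threshold=30)
# A single peak is detected near theta = pi/2, rho = 2.
```

Restricting the theta sweep to precomputed slopes, as the paper does, shrinks the inner loop from all 180 angles to a handful, which is where the speed-up comes from.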
Alempijevic, A 2004, 'High-Speed Feature Extraction in Sensor Coordinates for Laser Rangefinders', Conference Proceedings, Australasian Conference on Robotics and Automation (ACRA 2004), Australasian Conference on Robotics and Automation, Australian Robotics & Automation Association, Canberra, Australia, pp. 1-6.View/Download from: UTS OPUS
Current external partners with active Research and Development projects
- Pempek Pty Ltd in collaboration with Elgór Hansen S.A. and Famur
- Meat and Livestock Australia in collaboration with Department of Primary Industries NSW, Beef Centre
- Rock Solid Group (via Critical Pipes project)
- Downer EDI