Knabe, C., Griffin, R., Burton, J., Cantor-Cooke, G., Dantanarayana, L., Day, G., Ebeling-Koning, O., Hahn, E., Hopkins, M., Neal, J., Newton, J., Nogales, C., Orekhov, V., Peterson, J., Rouleau, M., Seminatore, J., Sung, Y., Webb, J., Wittenstein, N., Ziglar, J., Leonessa, A., Lattimer, B. & Furukawa, T. 2017, 'Team VALOR's ESCHER: A Novel Electromechanical Biped for the DARPA Robotics Challenge', Journal of Field Robotics, vol. 34, no. 5, pp. 912-939.
© 2017 Wiley Periodicals, Inc. The Electric Series Compliant Humanoid for Emergency Response (ESCHER) platform represents the culmination of four years of development at Virginia Tech to produce a full-sized force-controlled humanoid robot capable of operating in unstructured environments. ESCHER's locomotion capability was demonstrated at the DARPA Robotics Challenge (DRC) Finals when it successfully navigated the 61 m loose dirt course. Team VALOR, a Track A team, developed ESCHER leveraging and improving upon bipedal humanoid technologies implemented in previous research efforts, specifically for traversing uneven terrain and sustained untethered operation. This paper presents the hardware platform, software, and control systems developed to field ESCHER at the DRC Finals. ESCHER's unique features include custom linear series elastic actuators in both single and dual actuator configurations and a whole-body control framework supporting compliant locomotion across variable and shifting terrain. A high-level software system designed using the Robot Operating System integrated various open-source packages and interfaced with the existing whole-body motion controller. The paper presents a detailed analysis of the challenges encountered during the competition, along with lessons learned that are critical for transitioning research contributions to a fielded robot. Empirical data collected before, during, and after the DRC Finals validate ESCHER's performance in fielded environments.
Ryu, K., Dantanarayana, L., Furukawa, T. & Dissanayake, G. 2016, 'Grid-based Scan-to-Map Matching for Accurate 2D Map Building', Advanced Robotics, vol. 30, no. 7, pp. 431-448.
This paper presents a grid-based scan-to-map matching technique for accurate 2D map building. At every acquisition of a new scan, the proposed technique matches the new scan to the previous scan similarly to conventional techniques, but further corrects the error by matching the new scan to the globally defined map. In order to achieve the best scan-to-map matching at each acquisition, the map is represented as a grid map with multiple normal distributions (NDs) in each cell, which is one contribution of this paper. Additionally, the new scan is also represented by NDs, yielding a novel ND-to-ND matching technique. This ND-to-ND matching technique has significant potential to enhance both the global matching and the computational efficiency. Experimental results first show that the proposed technique accumulates very small errors after consecutive matchings, and identify that scans are matched better to the map with the multi-ND representation than with a single-ND representation. The proposed t...
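The grid-based ND-to-ND idea summarised above can be sketched in a few lines. This is a minimal Python illustration, not the authors' implementation: the cell size, the covariance regularisation, and the Gaussian scoring function are my assumptions, and a real matcher would optimise the pose rather than just score it.

```python
# Sketch of grid-based ND scan-to-map matching: each occupied map cell
# stores a normal distribution (ND) fitted to its points, and a
# candidate pose is scored by the Gaussian likelihood of the
# transformed scan points under their cell's ND.
import numpy as np

CELL = 1.0  # grid cell size in metres (illustrative value)

def build_nd_map(points):
    """Fit one ND (mean, inverse covariance) per occupied grid cell."""
    cells = {}
    for p in points:
        key = (int(np.floor(p[0] / CELL)), int(np.floor(p[1] / CELL)))
        cells.setdefault(key, []).append(p)
    nd_map = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:          # too few points to fit a covariance
            continue
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)   # regularise
        nd_map[key] = (mu, np.linalg.inv(cov))
    return nd_map

def score(nd_map, scan, pose):
    """Sum of Gaussian scores for a scan placed at pose (x, y, theta)."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    total = 0.0
    for p in scan:
        q = R @ np.asarray(p) + np.array([x, y])
        key = (int(np.floor(q[0] / CELL)), int(np.floor(q[1] / CELL)))
        if key in nd_map:
            mu, icov = nd_map[key]
            d = q - mu
            total += np.exp(-0.5 * d @ icov @ d)
    return total
```

In a full matcher the pose maximising `score` would be found by Newton or gradient steps, as in the NDT family of methods; correctly aligned scans score strictly higher than misaligned ones.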
Takami, K., Furukawa, T., Kumon, M., Kimoto, D. & Dissanayake, G. 2016, 'Estimation of a nonvisible field-of-view mobile target incorporating optical and acoustic sensors', Autonomous Robots, vol. 40, no. 2, pp. 343-359.
© 2015, Springer Science+Business Media New York. This paper presents a nonvisible field-of-view (NFOV) target estimation approach that incorporates optical and acoustic sensors. An optical sensor can accurately localize a target in its field-of-view, whereas an acoustic sensor can estimate the target location over a much larger space, but only with limited accuracy. A recursive Bayesian estimation framework in which observations of the optical and acoustic sensors are probabilistically treated and fused is proposed in this paper. A technique to construct the observation likelihood when two microphones are used as the acoustic sensor is also described. The proposed technique derives and stores the interaural level difference of observations from the two microphones for different target positions in advance and constructs the likelihood through correlation. A parametric study of the proposed acoustic sensing technique in a controlled test environment, and experiments with an NFOV target in an actual indoor environment, are presented to demonstrate the capability of the proposed technique.
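The recursive Bayesian fusion step described in this abstract can be sketched on a position grid. This is an illustrative Python sketch, not the paper's code: the Gaussian shape of the acoustic likelihood and the grid resolution are my stand-ins. When the target is outside the optical field of view, the optical likelihood is flat and the broad acoustic likelihood dominates the update.

```python
# Sketch of a grid-based Bayesian correction fusing an optical and an
# acoustic observation likelihood: multiply the belief by each
# likelihood, then renormalise.
import numpy as np

def gaussian_likelihood(shape, centre, sigma):
    """Broad acoustic-style likelihood peaked at `centre` (illustrative)."""
    ii, jj = np.indices(shape)
    d2 = (ii - centre[0]) ** 2 + (jj - centre[1]) ** 2
    return np.exp(-0.5 * d2 / sigma ** 2)

def bayes_update(belief, *likelihoods):
    """Bayes rule on a grid: product of prior and likelihoods, renormalised."""
    post = belief.copy()
    for lik in likelihoods:
        post = post * lik
    return post / post.sum()
```

A quick use: with a uniform prior, an uninformative optical likelihood (target not in FOV), and an acoustic likelihood peaked at cell (3, 4), the posterior peaks at (3, 4).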
Webb, S.S. & Furukawa, T. 2008, 'Belief-Driven Manipulator Visual Servoing for Less Controlled Environments', Advanced Robotics, vol. 22, no. 5, pp. 547-572.
This paper presents the architecture of a feedforward manipulator control strategy based on a belief function that may be appropriate for less controlled environments. In this architecture, the belief about the environmental state, as described by a probability density function, is maintained by a recursive Bayesian estimation process. A likelihood is derived from each observation regardless of whether the targeted features of the environmental state have been detected or not. This provides continuously evolving information to the controller and allows an inaccurate belief to evolve into an accurate belief. Control actions are determined by maximizing objective functions using non-linear optimization. Forward models are used to transform control actions to a predicted state so that objective functions may be expressed in task space. The first set of examples numerically investigates the validity of the proposed strategy by demonstrating control in a two dimensional scenario. Then a more realistic application is presented where a robotic manipulator executes a searching and tracking task using an eye-in-hand vision sensor.
Furukawa, T., Dissanayake, G. & Durrant-Whyte, H. 2004, 'An Application of Multi-objective Evolutionary Algorithms in Autonomous Vehicle Navigation' in Coello, C.A.C. (ed), Applications of Multi-Objective Evolutionary Algorithms, World Scientific, Singapore, pp. 125-153.
Takami, K., Furukawa, T., Kumon, M. & Dissanayake, G. 2015, 'Non-field-of-view acoustic target estimation in complex indoor environment', Proceedings of the 10th Field and Service Robotics (FSR), International Conference on Field and Service Robotics, Springer, Toronto, Canada, pp. 577-592.
© Springer International Publishing Switzerland 2016. This paper presents a new approach that acoustically localizes a mobile target outside the field-of-view (FOV), or in the non-field-of-view (NFOV), of an optical sensor, and its implementation in complex indoor environments. In this approach, microphones are fixed sparsely in the indoor environment of concern. In a prior process, the Interaural Level Difference (IID) of observations acquired by each pair of microphones is derived for different sound target positions and stored as an acoustic cue. When a new sound is observed in the environment, a joint acoustic observation likelihood is derived by fusing likelihoods computed from the correlation of the IID of the new observation with the stored acoustic cues. The location of the NFOV target is finally estimated within the recursive Bayesian estimation framework. After experimental parametric studies, the potential of the proposed approach for practical implementation has been demonstrated by the successful tracking of an elderly person needing health care service in a home environment.
Takami, K., Liu, H., Furukawa, T., Kumon, M. & Dissanayake, G. 2016, 'Non-Field-of-View Sound Source Localization Using Diffraction and Reflection Signals', Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Daejeon, Korea, pp. 157-162.
This paper describes a non-field-of-view (NFOV) localization approach for a mobile robot in an unknown environment based on an acoustic signal combined with the geometrical information from an optical sensor. The approach estimates the location of a target through the mobile robot's sensor observation frame, which consists of a combination of diffraction and reflection acoustic signals and a 3-D environment geometrical description. This fusion of audio-visual sensor observation likelihoods allows the robot to estimate the NFOV target. The diffraction and reflection observations from the microphone array generate the acoustic joint observation likelihood. The observed geometry also determines far-field or near-field acoustic conditions to improve the estimation of the sound direction of arrival. A mobile robot equipped with a microphone array and an RGB-D sensor was tested in a controlled environment, an anechoic chamber, to demonstrate the NFOV localization capabilities. This resulted in errors of ±18 degrees in angle and less than 0.75 m in distance estimation.
Takami, K., Liu, H., Kumon, M., Furukawa, T. & Dissanayake, G. 2016, 'Recursive Bayesian estimation of NFOV target using diffraction and reflection signals', FUSION 2016 - 19th International Conference on Information Fusion, Proceedings, International Conference on Information Fusion, IEEE, Heidelberg, Germany, pp. 1923-1930.
© 2016 ISIF. This paper presents an approach to the recursive Bayesian estimation of non-field-of-view (NFOV) sound source tracking based on reflection and diffraction signals with the incorporation of optical sensors. The approach performs multi-modal sensory fusion on a mobile robot, combining an optical 3D environment geometrical description with a microphone array acoustic signal to estimate the target location. The robot estimates the target location either in the field-of-view (FOV) or in the NFOV by fusing sensor observation likelihoods. For the NFOV case, the microphone array provides reflection and diffraction observations to generate a joint acoustic observation likelihood. With the data fusion between the 3D description and the acoustic observation, the target estimation is performed in an unknown environment. Finally, the sensor observation combined with the motion model of the target iteratively performs tracking within a recursive Bayesian estimation framework. The proposed approach was tested with a microphone array and an RGB-D sensor in a controlled anechoic chamber to demonstrate the NFOV tracking capabilities for a moving target.
Takami, K., Furukawa, T., Kumon, M. & Mak, L.C. 2015, 'Non-field-of-view indoor sound source localization based on reflection and diffraction', Multisensor Fusion and Integration for Intelligent Systems (MFI), 2015 IEEE International Conference on, IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE, San Diego, CA, pp. 59-64.
Dantanarayana, L., Dissanayake, G., Ranasinghe, R. & Furukawa, T. 2015, 'An extended Kalman filter for localisation in occupancy grid maps', 2015 IEEE 10th International Conference on Industrial and Information Systems (ICIIS), IEEE International Conference on Industrial and Information Systems, IEEE, Peradeniya, Sri Lanka, pp. 419-424.
Furukawa, T., Dantanarayana, L.I., Ziglar, J., Ranasinghe, R. & Dissanayake, G. 2015, 'Fast Global Scan Matching for High-Speed Vehicle Navigation', IEEE Xplore, IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE, San Diego, CA, USA, pp. 37-42.
Furukawa, T., Takami, K., Tong, X., Watman, D., Hamed, A., Ranasinghe, R. & Dissanayake, G. 2015, 'Map-based navigation of an autonomous car using grid-based scan-to-map matching', Proceedings of the ASME Design Engineering Technical Conference, ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE), ASME, Boston, Massachusetts, USA, pp. 1-10.
© Copyright 2015 by ASME. This paper presents the map-based navigation of a car with autonomous capabilities using grid-based scan-to-map matching. The autonomous car used for demonstration is built on a Toyota Prius and can control the throttle, brake, and steering by computer. The proposed grid-based scan-to-map matching method represents a map with a finite number of grid cells, represents the scan points in each grid cell of both the scan and the map as normal distributions (NDs), and constructs a map by matching the scan NDs to the map NDs. The proposed method enables scan-based mapping at high speed while maintaining high accuracy. Representing each grid cell of the map with multiple NDs further enhances speed and accuracy. The accuracy analysis of the proposed method shows that a small robot with a wheel diameter of 8 cm yielded no loop-closure error after a travel of 186 m, while the terminal position error of GMapping was approximately 1 m, with an error growth of 1%. The application of the proposed method to the autonomous car then demonstrated its ability for autonomous driving at varying and high speeds, and also quantified the significance of speed for successful mapping in autonomous driving.
Ryu, K., Furukawa, T., Antol, S. & Dissanayake, G. 2013, 'Grid-based scan-to-map matching for accurate simultaneous localization and mapping: Theory and preliminary numerical study', Proceedings of the ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE), ASME, Portland, Oregon, USA.
This paper presents a grid-based scan-to-map matching technique for accurate simultaneous localization and mapping (SLAM). At every acquisition of a new scan, the proposed technique estimates the relative position from which the previous scan was taken, and further corrects its estimation error by matching the new scan to the globally defined map. In order to achieve the best scan-to-map matching at each acquisition, the map to match is represented as a grid map with multiple normal distributions (NDs) in each cell. Additionally, the new scan is also represented by NDs, yielding a novel ND-to-ND matching technique. The ND-to-ND matching technique has significant potential to enhance both the global matching and the computational efficiency. Experimental results first show that the proposed technique successfully matches new scans to the map, generating very small position and orientation errors, and then demonstrate the effectiveness of the multi-ND representation in comparison to the single-ND representation. Copyright © 2013 by ASME.
Furukawa, T., Tong, X., Dissanayake, G. & Durrant-Whyte, H.F. 2011, 'Parallel grid-based method and belief fusion: Real-time cooperative non-Gaussian estimation', 2011 6th International Conference on Industrial and Information Systems, ICIIS 2011 - Conference Proceedings, IEEE International Conference on Industrial and Information Systems, IEEE, Kandy, Sri Lanka, pp. 370-375.
This paper presents a parallel grid-based method and belief fusion for real-time cooperative Bayesian estimation. The grid-based recursive Bayesian estimation (RBE) method effectively maintains the belief of objects even with no detection event, but requires large computation for its prediction and correction processes, as well as for the fusion process in cooperative estimation. To enable real-time estimation, the belief fusion proposed in the paper is carried out outside the RBE loop. The parallelization of the entire grid-based method and belief fusion further accelerates the RBE, so that real-time estimation is possible even in highly dynamic environments. Numerical examples first demonstrate the validity of the proposed approach through parametric studies. The proposed approach was then applied to cooperative search by autonomous unmanned ground vehicles (UGVs), and its real-time capability has been demonstrated. © 2011 IEEE.
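The fusion-outside-the-loop idea can be illustrated with the standard normalised-product rule for fusing independently maintained grid beliefs. This is my stand-in for the paper's fusion operator, not its actual implementation; the Gaussian-shaped beliefs in the example are likewise assumptions.

```python
# Sketch of belief fusion for cooperative grid-based RBE: the beliefs
# held by individual vehicles are fused by a pointwise product over the
# grid, then renormalised. Running this once per fusion event, outside
# the per-vehicle predict/correct loop, keeps the loop itself cheap.
import numpy as np

def grid_gaussian(shape, centre, sigma):
    """Gaussian-shaped belief peaked at `centre` (illustrative)."""
    ii, jj = np.indices(shape)
    d2 = (ii - centre[0]) ** 2 + (jj - centre[1]) ** 2
    return np.exp(-0.5 * d2 / sigma ** 2)

def fuse_beliefs(beliefs):
    """Normalised pointwise product of grid beliefs."""
    fused = np.ones_like(beliefs[0])
    for b in beliefs:
        fused = fused * b
    return fused / fused.sum()
```

With two equally confident beliefs peaked at different cells, the fused belief peaks between them, as the product rule dictates.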
Webb, S.S. & Furukawa, T. 2006, 'Belief Driven Manipulator Control for Integrated Searching and Tracking', Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Beijing, China, pp. 4983-4988.
This paper presents a feedforward control strategy for a robotic manipulator based on a belief function. The belief about a target's next location, as described by a probability density function, is maintained by a recursive Bayesian process that fuses observations with a target motion model. A sensor model that incorporates positive and negative sensor readings allows the single belief function to be used to deliver both searching and tracking behaviors. Constrained non-linear optimization is used to search configuration space for the control action that maximizes the subsequent probability of detection. To demonstrate the application of the technique, a simple example is elaborated for a searching and tracking task with an eye-in-hand sensor.
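The "negative reading" half of the sensor model above is the interesting part: a scan that finds nothing still carries information. A minimal sketch of that update, under my own illustrative construction of the detection probability `pd` and field-of-view mask:

```python
# Sketch of a positive/negative sensor model on a belief grid: a
# non-detection multiplies the belief by (1 - pd) inside the sensor's
# field of view (FOV) and leaves it unchanged outside, so probability
# mass drains away from regions searched without success.
import numpy as np

def no_detection_update(belief, in_fov, pd=0.9):
    """Bayes update for a negative reading; `in_fov` is a boolean mask."""
    post = belief * np.where(in_fov, 1.0 - pd, 1.0)
    return post / post.sum()
```

After one negative reading over half of a uniform grid, most of the probability mass has shifted to the unsearched half, which is what steers the search behaviour toward it.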
Leung, C., Huang, S., Dissanayake, G. & Furukawa, T. 2005, 'Trajectory Planning for Multiple Robots in Bearing Only Target Localisation', 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Edmonton, Canada, pp. 2312-2317.
This paper provides a solution to the optimal trajectory planning problem in target localisation for multiple heterogeneous robots with bearing-only sensors. The objective here is to find robot trajectories that maximise the accuracy of the locations of the targets at a prescribed terminal time. The trajectory planning is formulated as an optimal control problem for a nonlinear system with a gradually identified model and then solved using nonlinear model predictive control (MPC). The solution to the MPC optimisation problem is computed through exhaustive expansion tree search (EETS) plus sequential quadratic programming (SQP). Simulations were conducted using the proposed methods. Results show that EETS alone performs considerably faster than EETS+SQP with only minor differences in information gain, and that a centralised approach outperforms a decentralised one in terms of information gain. We show that a centralised EETS provides a near optimal solution. We also demonstrate the significance of using a matrix to represent the information gathered.
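The exhaustive expansion tree search (EETS) idea above can be sketched generically: enumerate every action sequence over a short horizon, roll each out through a forward model, and keep the best. This is an illustrative Python sketch, not the paper's implementation; the paper maximises an information-gain measure, while the reward function here is a caller-supplied stand-in.

```python
# Sketch of exhaustive expansion tree search over an MPC horizon:
# every action sequence of length `horizon` is simulated with the
# forward model `step`, and the sequence with the highest accumulated
# reward is returned.
import itertools

def eets(state, actions, horizon, step, reward):
    """step(state, a) -> next state; reward(state) -> scalar."""
    best_seq, best_val = None, float('-inf')
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s = step(s, a)
            total += reward(s)
        if total > best_val:
            best_seq, best_val = seq, total
    return best_seq, best_val
```

The cost is |actions|^horizon rollouts, which is why the paper pairs EETS with SQP refinement and finds plain EETS attractive only when the action set and horizon are small. A toy use: a 1-D robot at 0 choosing among moves {-1, 0, 1} to approach a target at 5 picks the all-forward sequence.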
Furukawa, T., Bourgault, F., Durrant-Whyte, H. & Dissanayake, G. 2004, 'Dynamic allocation and control of coordinated UAVs to engage multiple targets in a time-optimal manner', Proceedings of the IEEE International Conference on Robotics and Automation - Vol 3, IEEE International Conference on Robotics and Automation, IEEE, New Orleans, pp. 2353-2358.
This paper presents the real-time control of cooperative unmanned air vehicles (UAVs) that dynamically engage multiple targets in a time-optimal manner. Techniques to dynamically allocate vehicles to targets and to subsequently find the time-optimal control actions are proposed. The decentralization of the proposed control strategy is further presented such that the vehicles can be controlled in real-time without significant time delay. The proposed strategy is then applied to various practical battlefield problems, and numerical results show the efficiency of the proposed strategy.
Lim, S.H., Furukawa, T., Durrant-Whyte, H. & Dissanayake, G. 2004, 'A time-optimal control strategy for pursuit-evasion games', Proceedings of IEEE International Conference on Robotics and Automation, IEEE International Conference on Robotics and Automation, IEEE, New Orleans, USA, pp. 3962-3967.
This paper presents a control strategy for the pursuer in the pursuit-evasion game problem when the evader behaves intelligently. The pursuer in the proposed technique does not try to react to the evader's behavior instantaneously. The proposed technique therefore does not yield instantaneous optimality, but captures the evader in a time-efficient and robust fashion even when the evader is intelligent. The proposed technique was applied to two numerical examples and the results were compared to those of conventional motion tracking algorithms. The results and comparison show that the proposed technique captures the evader faster than the conventional motion tracking algorithms in both examples.
Furukawa, T., Durrant-Whyte, H., Bourgault, F. & Dissanayake, G. 2003, 'Time-Optimal Coordinated Control of the Relative Formation of Multiple Vehicles', Proceedings of 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, IEEE International Symposium on Computational Intelligence in Robotics and Automation, IEEE Operations Centre, Kobe, Japan, pp. 259-264.
This paper presents a solution to the time-optimal control of the relative formation of multiple vehicles. This is a problem in cooperative time-optimal control with a free terminal state constraint. In this paper, a canonical formulation of the problem is first derived. Then, a numerical technique to solve this class of problem is proposed. Numerical results demonstrate the efficacy of the proposed formulation and solution to the problem of expeditiously building and controlling formations of cooperative autonomous vehicles.
Furukawa, T., Durrant-Whyte, H., Dissanayake, G. & Sukkarieh, S. 2003, 'The Coordination of Multiple UAVs for Engaging Multiple Targets in a Time-Optimal Manner', Proceedings of the 2003 IEEE/RSJ International Symposium on Intelligent Robots and Systems (IROS 2003), IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Service Centre, Las Vegas, USA, pp. 36-41.
This paper presents a solution to the real-time control of cooperative unmanned air vehicles (UAVs) that engage multiple targets in a time-optimal manner. Techniques to dynamically allocate vehicles to targets and to find the time-optimal control actions of vehicles are proposed. The effectiveness of the time-optimal control technique is first demonstrated through numerical examples. The proposed strategy is then applied to a practical battlefield problem where ten vehicles are required to engage four targets, and numerical results show the efficiency of the proposed strategy.
Goktogan, A.H., Furukawa, T., Mathews, G., Sukkarieh, S. & Dissanayake, G. 2003, 'Time-Optimal Cooperation of Multiple UAVs in Real-Time Simulation', Proceedings of the 2nd Computational Intelligence, Robotics and Autonomous Systems (CIRAS 2003), International Conference on Computational Intelligence, Robotics and Autonomous Systems, National University of Singapore, Singapore, pp. 1-6.
Dissanayake, G. & Furukawa, T. 2001, 'Model parameter identification of autonomous vehicles', Proceedings of the 2001 Australian Conference on Robotics and Automation, Australasian Conference on Robotics and Automation, Australia, pp. 32-37.