To, W., Paul, G. & Liu, D. 2018, 'A comprehensive approach to real-time fault diagnosis during automatic grit-blasting operation by autonomous industrial robots', Robotics and Computer-Integrated Manufacturing, vol. 49, pp. 13-23.
This paper presents a comprehensive approach to diagnosing faults that may occur during a robotic grit-blasting operation. The approach uses information collected from multiple sensors (an RGB-D camera, audio and pressure transducers) to detect 1) the real-time position of the grit-blasting spot and 2) the real-time state within the blasting line (i.e. compressed air only). The outcome of this approach enables a grit-blasting robot to autonomously diagnose faults and take corrective actions during the blasting operation. Experiments were conducted in a laboratory and in a grit-blasting chamber during real grit-blasting to demonstrate the proposed approach. Accuracy of 95% and above was achieved in the experiments.
To, W., Paul, G. & Liu, D. 2016, 'An approach for identifying classifiable regions of an image captured by autonomous robots in structural environments', Robotics and Computer-Integrated Manufacturing, vol. 37, pp. 90-102.
When an autonomous robot is deployed in a structural environment to visually inspect surfaces, the capture conditions of images (e.g. the camera's viewing distance and angle to surfaces) may vary due to non-ideal robot poses selected to position the camera in a collision-free manner. Given that surface inspection is conducted using a classifier trained with surface samples captured with limited changes to the viewing distance and angle, inspection performance can be affected if the capture conditions change. This paper presents an approach to calculate a value that represents the likelihood of a pixel being classifiable by a classifier trained with a limited dataset. The likelihood value is calculated for each pixel in an image to form a likelihood map that can be used to identify classifiable regions of the image. The information necessary for calculating the likelihood values is obtained by collecting additional depth data that maps to each pixel in an image (collectively referred to as an RGB-D image). Experiments to test the approach are conducted in a laboratory environment using an RGB-D sensor package mounted onto the end-effector of a robot manipulator. A naive Bayes classifier trained with texture features extracted from Gray Level Co-occurrence Matrices is used to demonstrate the effect of image capture conditions on surface classification accuracy. Experimental results show that the classifiable regions identified using a likelihood map are up to 99.0% accurate, and the identified region has up to 19.9% higher classification accuracy when compared against the overall accuracy of the same image.
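The per-pixel likelihood idea can be illustrated with a small sketch. The paper does not publish its exact likelihood model here, so the Gaussian falloff around the training capture conditions and all parameter names below (`train_dist`, `sigma_d`, etc.) are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def classifiability_likelihood(depth_map, angle_map,
                               train_dist=0.5, train_angle=0.0,
                               sigma_d=0.15, sigma_a=np.deg2rad(20)):
    """Per-pixel likelihood that a classifier trained at viewing
    distance `train_dist` (m) and angle `train_angle` (rad) can
    classify the pixel. The Gaussian falloff is an assumption made
    for illustration, not the model from the paper."""
    ld = np.exp(-0.5 * ((depth_map - train_dist) / sigma_d) ** 2)
    la = np.exp(-0.5 * ((angle_map - train_angle) / sigma_a) ** 2)
    return ld * la  # likelihood map, values in [0, 1]

def classifiable_mask(likelihood_map, threshold=0.5):
    """Threshold the likelihood map to segment classifiable regions."""
    return likelihood_map >= threshold
```

Pixels captured near the training conditions score close to 1 and survive the threshold; pixels seen from much farther away or at grazing angles are excluded from classification.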
To, A.W., Paul, G. & Liu, D. 2014, 'Surface-type classification using RGB-D', IEEE Transactions on Automation Science and Engineering, vol. 11, no. 2, pp. 359-366.
This paper proposes an approach to improve surface-type classification of images containing inconsistently illuminated surfaces. When a mobile inspection robot is visually inspecting surface-types in a dark environment and a directional light source is used to illuminate the surfaces, the captured images may exhibit illumination variance caused by the orientation and distance of the light source relative to the surfaces. In order to accurately classify the surface-types in these images, either the training image dataset needs to completely incorporate the illumination variance, or a way to extract color features that can provide high classification accuracy needs to be identified. In this paper, diffused reflectance values are extracted as new color features to classify surface-types. In this approach, Red, Green, Blue-Depth (RGB-D) data is collected from the environment, and a reflectance model is used to calculate a diffused reflectance value for a pixel in each Red, Green, Blue (RGB) color channel. The diffused reflectance values can be used to train a multiclass support vector machine classifier to classify surface-types. Experiments are conducted in a mock bridge maintenance environment using a portable RGB-D sensor package with an attached light source to collect surface-type data. The performance of a classifier trained with diffused reflectance values is compared against classifiers trained with other color features, including RGB and L*a*b* color spaces. Results show that the classifier trained with the diffused reflectance values achieves consistently higher classification accuracy than the classifiers trained with RGB and L*a*b* features. For test images containing a single surface plane, diffused reflectance values consistently provide greater than 90% classification accuracy; and for test images containing a complex scene with multiple surface-types and surface planes, diffused reflectance values are shown to provide an increase in...
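The reflectance-model step can be sketched under the common Lambertian assumption with a point light co-located with the camera, where measured intensity falls with the square of distance and the cosine of the incidence angle. The paper's exact reflectance model may differ; this is a minimal illustrative sketch:

```python
import numpy as np

def diffused_reflectance(rgb, depth, cos_theta, light_intensity=1.0):
    """Recover a per-channel diffuse reflectance value from RGB-D data
    under an assumed Lambertian model with a point light near the camera:
        I = rho * L * cos(theta) / d^2   =>   rho = I * d^2 / (L * cos(theta))
    rgb:       (H, W, 3) measured intensities
    depth:     (H, W) distance to the surface (m)
    cos_theta: (H, W) cosine of the angle between surface normal and light"""
    cos_theta = np.clip(cos_theta, 1e-3, 1.0)  # avoid division by zero at grazing angles
    d2 = depth[..., None] ** 2                 # broadcast depth over the 3 channels
    return rgb * d2 / (light_intensity * cos_theta[..., None])
```

Because the distance and angle terms are divided out, the same surface seen near or far should map to roughly the same reflectance value, which is what makes the feature more illumination-invariant than raw RGB.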
Paul, G., Quin, P., To, A. & Liu, D. 2015, 'A Sliding Window Approach to Exploration for 3D Map Building Using a Biologically Inspired Bridge Inspection Robot', Proceedings of the IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems, IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems, IEEE, Shenyang, China, pp. 1097-1102.
This paper presents a Sliding Window approach to viewpoint selection when exploring an environment using an RGB-D sensor mounted to the end-effector of an inchworm climbing robot for inspecting areas inside steel bridge archways which cannot be easily accessed by workers. The proposed exploration approach uses a kinematic chain robot model and information theory-based next best view calculations to predict poses which are safe and are able to reduce the information remaining in an environment. At each exploration step, a viewpoint is selected by analysing the Pareto efficiency of the predicted information gain and the required movement for a set of candidate poses. In contrast to previous approaches, a sliding window is used to determine candidate poses so as to avoid the costly operation of assessing the set of candidates in its entirety. Experimental results in simulation and on a prototype climbing robot platform show the approach requires fewer gain calculations and less robot movement, and is therefore more efficient than other approaches when exploring a complex 3D steel bridge structure.
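The Pareto-efficiency test over candidate viewpoints can be sketched as follows. Representing each candidate as an (information gain, movement cost) pair follows the abstract; the function itself is an illustrative reconstruction, not the paper's implementation:

```python
def pareto_efficient(candidates):
    """Return indices of Pareto-efficient candidates, where each
    candidate is an (info_gain, movement_cost) pair. A candidate is
    efficient if no other candidate is at least as good on both
    objectives (higher gain, lower cost) and strictly better on one."""
    front = []
    for i, (g, m) in enumerate(candidates):
        dominated = any(g2 >= g and m2 <= m and (g2 > g or m2 < m)
                        for j, (g2, m2) in enumerate(candidates) if j != i)
        if not dominated:
            front.append(i)
    return front
```

At each exploration step the next viewpoint would then be picked from this front, trading predicted gain against the joint motion required to reach it.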
To, A.W., Paul, G., Rushton-Smith, D., Liu, D. & Dissanayake, G. 2012, 'Automated and Frequent Calibration of a Robot Manipulator-mounted IR Range camera for Steel Bridge Maintenance', Field and Service Robotics Vol 92 - Results of the 8th International Conference on Field and Service Robotics, International Conference on Field and Service Robotics, Springer-Verlag, Matsushima, Miyagi, Japan, pp. 205-218.
This paper presents an automated and cost-effective approach to frequent hand-eye calibration of an IR range camera mounted to the end-effector of a robot manipulator for use in a field environment. A set of three reflector discs arranged in a structured pattern attached to the robot platform is used to provide high contrast image features with corresponding range readings for accurate calculation of the camera-to-robot base transform. Using this approach the hand-eye transform between the IR range camera and robot end-effector can be determined by considering the robot manipulator model. Experimental results show that a structured lighting-based IR range camera can be reliably hand-eye calibrated to a 6DOF robot manipulator using the presented automated approach. Once calibrated, the IR range camera can be positioned with the manipulator so as to generate a high resolution geometric map of the surrounding environment suitable for performing the grit-blasting task.
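Given the three reflector-disc centres observed in both the camera frame and the robot-base frame, the camera-to-base transform can be recovered with a standard least-squares rigid alignment (Kabsch/Umeyama). The sketch below shows that generic alignment step, not the paper's exact formulation:

```python
import numpy as np

def rigid_transform(cam_pts, base_pts):
    """Least-squares rigid transform (R, t) mapping camera-frame points
    onto robot-base-frame points (Kabsch algorithm, no scale).
    cam_pts, base_pts: (N, 3) arrays of corresponding reflector centres,
    N >= 3 and not collinear."""
    cc, bc = cam_pts.mean(0), base_pts.mean(0)
    H = (cam_pts - cc).T @ (base_pts - bc)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = bc - R @ cc
    return R, t
```

With the camera-to-base transform known and the manipulator's forward kinematics giving the end-effector-to-base transform, the fixed hand-eye transform falls out by composition.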
Rushton-Smith, D., To, A.W., Paul, G. & Liu, D. 2013, 'An Accurate and Reliable Approach to Calibration of a Robot Manipulator-Mounted IR Range Camera for Field Applications', International Symposium on Robotics and Mechatronics, International Symposium on Robotics and Mechatronics, Research Publishing, Singapore, pp. 335-344.
To, A.W., Paul, G. & Liu, D. 2010, 'Image Segmentation for Surface Material-type Classification using 3D Geometry Information', Proceedings of the 2010 IEEE International Conference on Information and Automation (ICIA2010), IEEE International Conference on Information and Automation, IEEE, Harbin, China, pp. 1717-1722.
This paper describes a novel approach for segmenting complex images to determine candidates for accurate material-type classification. The proposed approach identifies classification candidates based on image quality, calculated from viewing distance and angle information. The required viewing distance and angle information is extracted from 3D fused images constructed from laser range data and image data. This approach applies to material-type classification of images captured with varying image quality, which is typical of autonomous robotic exploration where the geometry of the environment is uncertain. The proposed segmentation approach is demonstrated on an autonomous bridge maintenance system and validated using gray level co-occurrence matrix (GLCM) features combined with a naive Bayes classifier. Experimental results demonstrate the effects of viewing distance and angle on classification accuracy, and the benefits of segmenting images using 3D geometry information to identify candidates for accurate material-type classification.
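The GLCM texture features used for validation can be sketched in a few lines. The particular offset, quantisation level, and feature set below are illustrative choices; the paper does not specify them here:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for a single pixel offset (dx, dy).
    `gray` is a 2D integer array already quantised to `levels` gray levels."""
    M = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[gray[y, x], gray[y + dy, x + dx]] += 1
    return M / M.sum()  # normalise to a joint probability table

def glcm_features(M):
    """A few standard GLCM texture features, suitable as inputs to a
    naive Bayes classifier."""
    i, j = np.indices(M.shape)
    contrast = np.sum(M * (i - j) ** 2)
    homogeneity = np.sum(M / (1.0 + (i - j) ** 2))
    energy = np.sum(M ** 2)
    return contrast, homogeneity, energy
```

A smooth surface patch yields a GLCM concentrated near the diagonal (low contrast, high energy), while a rough or corroded patch spreads mass off-diagonal, which is what lets the classifier separate material types.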
To, A.W., Paul, G., Kwok, N. & Liu, D. 2008, 'An integrated approach to planning for autonomous grit-blasting robot in complex bridge environments', Proceedings of 2008 Fourth I*PROMS Virtual Conference International Conference on Innovative Production Machines and Systems, International Conference on Innovative Production Machines and Systems, Whittles Publishing, Cardiff University, Wales, UK, pp. 313-318.
This paper describes an integrated approach to robot manipulator path and motion planning in complex bridge environments. It incorporates grit-blasting specific considerations including blasting effect, coverage, path length and robot arm joint movement. A genetic algorithm is implemented for path planning, with environment data used to increase planning efficiency. A customized gradient-based method is applied to select collision-free joint configurations for the identified path. A grit-blast coverage model is also developed for discrete non-planar 3D coverage determination to verify the performance of the planned path and motion.
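A genetic algorithm for ordering blast targets can be sketched as below. The encoding (a permutation of target indices), the ordered crossover, and the path-length-only fitness are illustrative simplifications; the paper's fitness additionally accounts for blasting effect, coverage, and joint movement:

```python
import math
import random

def tour_length(order, pts):
    """Total Euclidean length of visiting the blast targets in `order`."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(order, order[1:]))

def ordered_crossover(p1, p2):
    """Keep a random slice of parent 1; fill the rest in parent-2 order."""
    a, b = sorted(random.sample(range(len(p1) + 1), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(order, rate=0.2):
    """Swap two targets with probability `rate` (returns a copy)."""
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def ga_blast_path(pts, pop_size=30, generations=100):
    """Evolve a visiting order over blast targets `pts` that minimises
    path length. Elitism keeps the best third of each generation."""
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[:pop_size // 3]
        children = [mutate(ordered_crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda o: tour_length(o, pts))
```

In the paper's setting each candidate path would additionally be checked against the grit-blast coverage model and the manipulator's joint-space feasibility before being accepted.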