JingSong Xu is a research fellow with the Global Big Data Technologies Center at the University of Technology, Sydney. He received the B.Eng. and Ph.D. degrees from the School of Computer Science and Engineering, Nanjing University of Science and Technology, China, in 2007 and 2014, respectively. He was an exchange student at the University of New South Wales (UNSW), Sydney, and National Information and Communications Technology Australia (NICTA), Sydney, from Sep. 2010 to Sep. 2012. He also visited the University of Technology, Sydney (UTS) from Sep. 2011 to Dec. 2014.
J. Xu, Z. Ni, Q. Wu, J. Zhang, H. Liu, P. Zhang, W. Chen, "Systems and Methods for Pedestrian Detection in Images", U.S. Patent: US9008365B2, issued date 2015-04-14
J. Xu, Y. Cui, Q. Wu, J. Zhang, C.X. Zhang, H. Liu, K. Fang, "System and Method for Virtual Clothes Fitting Based on Video Augmented Reality in Mobile Phone", U.S. Patent: US20170018024A1, issued date 2017-01-19
J. Xu, Y. Cui, Q. Wu, J. Zhang, C.X. Zhang, H. Liu, K. Fang, "Apparatus and method for neck and shoulder landmark detection", U.S. Patent: US9569661B2, issued date 2017-02-14
International Journals/Transactions/Conferences Reviewer:
·IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT)
·Pattern Recognition Letters (PRL)
·IEEE Signal Processing Letters (SPL)
·Multimedia Tools and Applications
·International Journal of Image and Video Processing
·International Conference on Digital Image Computing: Techniques and Applications (DICTA2012, 2013, 2014)
·Multimedia Signal Processing (MSP2011)
·IEEE Visual Communications and Image Processing (VCIP2013, VCIP2014)
·International Conference on Image Processing (ICIP2014)
Can supervise: YES
- Computer Vision
- Image Processing
- Pattern Recognition
- Multimedia Processing
- Machine Learning
31270 Network Essentials
31277 Routing and Switching Essentials
42904 Cloud Computing
32520 Unix System Administration
31338 Network Servers
Huang, L, Yang, Q, Wu, J, Huang, Y, Wu, Q & Xu, J 2020, 'Generated Data With Sparse Regularized Multi-Pseudo Label for Person Re-Identification', IEEE Signal Processing Letters, vol. 27, pp. 391-395.
Huang, Y, Xu, J, Wu, Q, Zheng, Z, Zhang, Z & Zhang, J 2019, 'Multi-pseudo Regularized Label for Generated Data in Person Re-Identification', IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1391-1403.
Sufficient training data is normally required to train deeply learned models. However, due to the expensive manual process of labelling a large number of images (i.e., annotation), the amount of available training data (i.e., real data) is always limited. To produce more data for training a deep network, a Generative Adversarial Network (GAN) can be used to generate artificial sample data (i.e., generated data). However, the generated data usually do not have annotation labels. To solve this problem, in this paper, we propose a virtual label called Multi-pseudo Regularized Label (MpRL) and assign it to the generated data. With MpRL, the generated data are used as a supplement to the real training data to train a deep neural network in a semi-supervised learning fashion. To build the corresponding relationship between the real data and the generated data, MpRL assigns each generated sample a proper virtual label that reflects the likelihood of its affiliation to the predefined training classes in the real data domain. Unlike a traditional label, which is usually a single integer, the proposed virtual label is a set of weight-based values, each a number in (0,1] called a multi-pseudo label, reflecting the degree of relation between each generated sample and every predefined class of real data. A comprehensive evaluation is carried out by adopting two state-of-the-art convolutional neural networks (CNNs) in our experiments to verify the effectiveness of MpRL. Experiments demonstrate that by assigning MpRL to generated data, we can further improve person re-ID performance on five re-ID datasets, i.e., Market-1501, DukeMTMC-reID, CUHK03, VIPeR, and CUHK01. The proposed method obtains +6.29%, +6.30%, +5.58%, +5.84%, and +3.48% improvements in rank-1 accuracy over a strong CNN baseline on the five datasets respectively, and outperforms state-of-the-art methods.
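To make the virtual-label idea above concrete, the following is a minimal NumPy sketch, not the authors' exact weighting scheme, of assigning a generated image a multi-pseudo label: a strictly positive weight over the K real training classes, derived here from an assumed pretrained classifier's scores.

```python
# A small illustrative sketch: each GAN-generated image receives a multi-pseudo label,
# a weight over the K real training classes in (0, 1], rather than a single integer class.
# The weights here simply come from a classifier's predictions; MpRL's exact weighting differs.
import numpy as np

def multi_pseudo_label(class_scores, floor=1e-3):
    """Turn a classifier's scores for a generated image into a normalised weight per class."""
    scores = np.asarray(class_scores, dtype=np.float64)
    weights = np.maximum(scores, floor)        # keep every class weight strictly positive
    return weights / weights.sum()             # weights sum to 1, each lies in (0, 1]

# real images keep one-hot labels; generated images get the soft multi-pseudo label
virtual_label = multi_pseudo_label(np.random.rand(751))   # e.g. 751 Market-1501 identities
```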
© 2017. The goal of this work is to automatically collect a large number of highly relevant natural images from the Internet for given queries. A novel automatic image dataset construction framework is proposed by employing multiple query expansions. Specifically, the given queries are first expanded by searching in the Google Books Ngrams Corpora to obtain richer semantic descriptions, from which the visually non-salient and less relevant expansions are then filtered. After retrieving images from the Internet with the filtered expansions, we further filter noisy images by clustering and progressive Convolutional Neural Network (CNN) based methods. To evaluate the performance of our proposed method for image dataset construction, we build an image dataset with 10 categories. We then run object detection on our image dataset and on three other image datasets constructed by weakly supervised, web-supervised and fully supervised learning; the experimental results indicate that our method is superior to weakly supervised and web-supervised state-of-the-art methods. In addition, we perform cross-dataset classification to evaluate the performance of our dataset against two publicly available manually labelled datasets, STL-10 and CIFAR-10.
Yao, Y, Zhang, J, Shen, F, Hua, X, Xu, J & Tang, Z 2017, 'Exploiting Web Images for Dataset Construction: A Domain Robust Approach', IEEE Transactions on Multimedia, vol. 19, no. 8, pp. 1771-1784.
© 2017 Mengyu Xu et al. Due to variations in viewpoint, pose, and illumination, a given individual may appear considerably different across different camera views. Tracking individuals across camera networks with no overlapping fields of view is still a challenging problem. Previous works mainly focus on feature representation and metric learning separately, which tends to yield suboptimal solutions. To address this issue, in this work, we propose a novel framework that performs feature representation learning and metric learning jointly. Different from previous works, we represent a pair of pedestrian images as a new resized input and use a linear Support Vector Machine in place of the softmax activation function for similarity learning. In particular, dropout and data augmentation techniques are also employed in this model to prevent the network from overfitting. Extensive experiments on two publicly available datasets, VIPeR and CUHK01, demonstrate the effectiveness of our proposed approach.
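As a rough illustration of the pair-as-input and SVM-in-place-of-softmax idea, below is a small PyTorch sketch; the backbone, input size and hyper-parameters are placeholders rather than the network reported in the paper.

```python
# A minimal sketch: a CNN scores a channel-stacked pedestrian image pair, and a linear
# SVM-style (squared hinge) loss replaces softmax/cross-entropy for same/different-identity
# similarity learning. Architecture and numbers are illustrative assumptions.
import torch
import torch.nn as nn

class PairSimilarityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # small conv backbone over the 6-channel pair
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(64, 1)            # linear "SVM" head: one real-valued margin score

def svm_hinge_loss(scores, labels, weight, c=1.0):
    """Squared hinge loss with L2 regularisation; labels are +1 (same id) / -1 (different)."""
    margin = 1.0 - labels * scores.squeeze(1)
    return 0.5 * (weight ** 2).sum() + c * torch.clamp(margin, min=0).pow(2).mean()

# usage: two 3-channel crops of the same size are stacked channel-wise into a 6-channel input
net = PairSimilarityNet()
pair = torch.cat([torch.rand(4, 3, 128, 48), torch.rand(4, 3, 128, 48)], dim=1)
labels = torch.tensor([1.0, -1.0, 1.0, -1.0])
scores = net.score(net.features(pair).flatten(1))
loss = svm_hinge_loss(scores, labels, net.score.weight)
loss.backward()
```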
Guo, D, Xu, J, Zhang, J, Xu, M, Cui, Y & He, X 2017, 'User relationship strength modeling for friend recommendation on Instagram', Neurocomputing, vol. 239, pp. 9-18.
© 2017 Elsevier B.V. Social strength modeling in the social media community has attracted increasing research interest. Different from Flickr, which has been explored by many researchers, Instagram is more popular among mobile users and is conducive to likes and comments, but it has seldom been investigated. On Instagram, a user can post photos/videos, follow other users, and comment on and like other users' posts. These actions generate diverse forms of data that result in multiple user relationship views. In this paper, we propose a new framework to discover the underlying social relationship strength. User relationship learning under multiple views and relationship strength modeling are coupled into a single framework. In addition, given the learned relationship strength, a coarse-to-fine method is proposed for friend recommendation. Experiments on friend recommendation for Instagram are presented to show the effectiveness and efficiency of the proposed framework. As exhibited by our experimental results, it obtains better performance than other related methods. Although our method has been proposed for Instagram, it can easily be extended to any other social media community.
Xu, J, Wu, Q, Zhang, J, Shen, F & Tang, Z 2014, 'Boosting Separability in Semisupervised Learning for Object Classification', IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 7, pp. 1197-1208.
Xu, J, Wu, Q, Zhang, J & Tang, Z 2012, 'Fast and Accurate Human Detection Using a Cascade of Boosted MS-LBP Features', IEEE Signal Processing Letters, vol. 19, no. 10, pp. 676-679.
In this letter, a new scheme for generating local binary patterns (LBP) is presented. This Modified Symmetric LBP (MS-LBP) feature takes advantage of both LBP and gradient features. It is then applied in a boosted cascade framework for human detection. By combining MS-LBP with Haar-like features in the boosted framework, the performance of heterogeneous-feature-based detectors is evaluated for the best trade-off between accuracy and speed. Two feature training schemes, namely the Single AdaBoost Training Scheme (SATS) and the Dual AdaBoost Training Scheme (DATS), are proposed and compared. On top of AdaBoost, two multidimensional feature projection methods are described. A comprehensive experiment is presented. Apart from obtaining higher detection accuracy, the detection speed based on DATS is 17 times faster than the HOG method.
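For readers unfamiliar with LBP, the sketch below computes the standard 8-neighbour LBP code that MS-LBP builds on; the modified symmetric variant and its combination with gradient information described in the letter are not reproduced here.

```python
# A minimal NumPy sketch of the plain 8-neighbour LBP code (the basis that MS-LBP modifies).
import numpy as np

def lbp_codes(gray):
    """Return the 8-bit LBP code for each interior pixel of a 2-D grayscale image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    # clockwise neighbours starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.int32) << bit)
    return code

# usage: a histogram of these codes over a detection window block is a common descriptor
img = (np.random.rand(64, 32) * 255).astype(np.uint8)
hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
```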
Huang, Y, Wu, Q, Xu, J & Zhong, Y 2019, 'SBSGAN: Suppression of Inter-Domain Background Shift for Person Re-Identification', 2019 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE International Conference on Computer Vision, IEEE, Seoul, South Korea.
Cross-domain person re-identification (re-ID) is challenging due to the bias between training and testing domains. We observe that if the backgrounds in the training and testing datasets are very different, it becomes dramatically more difficult to extract robust pedestrian features, which compromises cross-domain person re-ID performance. In this paper, we formulate this as a background shift problem. A Suppression of Background Shift Generative Adversarial Network (SBSGAN) is proposed to generate images with suppressed backgrounds. Unlike simply removing backgrounds using binary masks, SBSGAN allows the generator to decide whether pixels should be preserved or suppressed, reducing the segmentation errors caused by noisy foreground masks. Additionally, we take ID-related cues, such as vehicles and companions, into consideration. With the high-quality generated images, a Densely Associated 2-Stream (DA-2S) network is introduced with Inter Stream Densely Connection (ISDC) modules to strengthen the complementarity of the generated data and ID-related cues. The experiments show that the proposed method achieves competitive performance on three re-ID datasets, i.e., Market-1501, DukeMTMC-reID, and CUHK03, under the cross-domain person re-ID scenario.
Wu, J, Yao, L, Huang, Y, Xu, J, Wu, Q & Huang, L 2019, 'Improving Person Re-Identification Performance Using Body Mask Via Cross-Learning Strategy', 2019 IEEE International Conference on Visual Communications and Image Processing, VCIP 2019, IEEE Visual Communications and Image Processing, IEEE, Sydney, Australia.
© 2019 IEEE. The task of person re-identification (re-id) is to find the same pedestrian across non-overlapping cameras. Normally, the performance of person re-id can be affected by background clutter. However, it is hard for existing segmentation algorithms to obtain perfect foreground person images. To effectively leverage the body (foreground) cue while paying attention to discriminative information in the background (e.g., a companion or vehicle), we propose a cross-learning strategy that takes both foreground and other discriminative information into account. In addition, since existing foreground segmentation results always involve noise, we use Label Smoothing Regularization (LSR) to strengthen the generalization capability during the learning process. In experiments, we pick two state-of-the-art person re-id methods to verify the effectiveness of our proposed cross-learning strategy. Our experiments are carried out on two publicly available person re-id datasets. Obvious performance improvements can be observed on both datasets.
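Label Smoothing Regularization, mentioned above, replaces the one-hot identity label with a softened target; a brief PyTorch sketch of the standard formulation is shown below (the smoothing factor and class count are illustrative).

```python
# Standard LSR: cross-entropy against labels smoothed as (1 - eps) * one_hot + eps / K,
# used here as in the paper to tolerate noise from imperfect foreground masks.
import torch
import torch.nn.functional as F

def lsr_loss(logits, targets, epsilon=0.1):
    k = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    one_hot = F.one_hot(targets, k).float()
    smooth = (1.0 - epsilon) * one_hot + epsilon / k
    return -(smooth * log_probs).sum(dim=1).mean()

loss = lsr_loss(torch.randn(8, 751), torch.randint(0, 751, (8,)))  # e.g. 751 Market-1501 ids
```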
Huang, H, Xu, J, Zhang, J, Wu, Q & Kirsch, C 2018, 'Railway Infrastructure Defects Recognition using Fine-grained Deep Convolutional Neural Networks', 2018 Digital Image Computing: Techniques and Applications (DICTA), Digital Image Computing: Techniques and Applications, IEEE, Canberra, Australia.
Railway power supply infrastructure is one of the most important components of railway transportation. As the key step in the railway maintenance system, power supply infrastructure defect recognition plays a vital role in the whole defect inspection sub-system. Traditionally, the defect recognition task is performed manually, which is time-consuming and labour-intensive. Inspired by the great success of deep neural networks in dealing with different vision tasks, this paper presents an end-to-end deep network to solve the railway infrastructure defect detection problem. More importantly, this paper is the first work that adopts the idea of deep fine-grained classification for railway defect detection. We propose a new bilinear deep network named the Spatial Transformer And Bilinear Low-Rank (STABLR) model and apply it to railway infrastructure defect detection. The experimental results demonstrate that the proposed method outperforms both hand-crafted feature based machine learning methods and classic deep neural network methods.
Huang, H, Zheng, J, Zhang, J, Wu, Q & Xu, J 2019, 'Compare more nuanced: Pairwise alignment bilinear network for few-shot fine-grained learning', Proceedings - IEEE International Conference on Multimedia and Expo, IEEE International Conference on Multimedia and Expo, IEEE, Shanghai, China, pp. 91-96.
© 2019 IEEE. The recognition ability of human beings is developed in a progressive way. Usually, children learn to discriminate various objects from coarse to fine-grained with limited supervision. Inspired by this learning process, we propose a simple yet effective model for Few-Shot Fine-Grained (FSFG) recognition, which tackles the challenging fine-grained recognition task using meta-learning. The proposed method, named the Pairwise Alignment Bilinear Network (PABN), is an end-to-end deep neural network. Unlike traditional deep bilinear networks for fine-grained classification, which adopt self-bilinear pooling to capture the subtle features of images, the proposed model uses a novel pairwise bilinear pooling to compare the nuanced differences between base images and query images for learning a deep distance metric. In order to match base image features with query image features, we design feature alignment losses before the proposed pairwise bilinear pooling. Experimental results on four fine-grained classification datasets and one generic few-shot dataset demonstrate that the proposed model outperforms both state-of-the-art few-shot fine-grained and general few-shot methods.
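A compact PyTorch sketch of the pairwise bilinear pooling idea is given below; the tensor shapes are illustrative and the feature alignment losses of PABN are omitted.

```python
# Instead of the self outer product used in classic bilinear CNNs, the outer product is taken
# between a base-image feature map and a query-image feature map, giving a comparison feature
# for metric learning. Signed-sqrt and L2 normalisation follow common bilinear-pooling practice.
import torch

def pairwise_bilinear_pool(feat_base, feat_query):
    """feat_*: (B, C, H, W) conv feature maps from a shared backbone."""
    b, c, h, w = feat_base.shape
    fb = feat_base.reshape(b, c, h * w)
    fq = feat_query.reshape(b, c, h * w)
    pooled = torch.bmm(fb, fq.transpose(1, 2)) / (h * w)             # (B, C, C) cross outer product
    pooled = pooled.reshape(b, c * c)
    pooled = torch.sign(pooled) * torch.sqrt(pooled.abs() + 1e-12)   # signed square root
    return torch.nn.functional.normalize(pooled, dim=1)              # L2 normalisation

sim_feat = pairwise_bilinear_pool(torch.randn(4, 64, 7, 7), torch.randn(4, 64, 7, 7))
```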
Zhang, L, Xu, J, Zhang, J & Gong, Y 2018, 'Information Enhancement for Travelogues via a Hybrid Clustering Model', 2018 Digital Image Computing: Techniques and Applications (DICTA), Digital Image Computing: Techniques and Applications, IEEE, Canberra, ACT, Australia, pp. 1-8.
Travelogues consist of textual information shared by tourists through web forums or other social media, and they often lack illustrations (images). On image sharing websites like Flickr, users can post images with rich textual information: 'title', 'tag' and 'description'. The topics of travelogues usually revolve around beautiful scenery, so recommending corresponding landscape images for these travelogues can enhance the vividness of reading. However, it is difficult to fuse such information because the text attached to each image carries diverse meanings/views. In this paper, we propose an unsupervised Hybrid Multiple Kernel K-means (HMKKM) model to link images and travelogues through multiple views. Multi-view matrices are built to reveal the correlations between several aspects. To further improve performance, we add a regularisation term based on textual similarity. To evaluate the effectiveness of the proposed method, a dataset is constructed from TripAdvisor and Flickr to find the related images for each travelogue. Experimental results demonstrate the superiority of the proposed model in comparison with other baselines.
Huang, Y, Wu, Q, Xu, J & Zhong, Y 2019, 'Celebrities-ReID: A Benchmark for Clothes Variation in Long-Term Person Re-Identification', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Budapest, Hungary.
© 2019 IEEE. This paper considers person re-identification (re-ID) in the case of a long time gap (i.e., long-term re-ID), which concentrates on the challenge of clothes variation for each person. We introduce a new dataset, named Celebrities-reID, to handle this challenge. Compared with current datasets, the proposed Celebrities-reID dataset is distinctive in two aspects. First, it contains 590 persons with 10,842 images, and no person wears the same clothing twice, making it the largest clothes-variation person re-ID dataset to date. Second, a comprehensive evaluation using state-of-the-art methods is carried out to verify the feasibility and new challenge exposed by this dataset. In addition, we propose a benchmark approach for the dataset, in which a two-step fine-tuning strategy on human body parts is introduced to tackle the challenge of clothes variation. In experiments, we evaluate the feasibility and quality of the proposed Celebrities-reID dataset. The experimental results demonstrate that the proposed benchmark approach is not only able to best tackle the clothes variation shown in our dataset but also achieves competitive performance on the widely used person re-ID dataset Market-1501, which further proves the reliability of the proposed benchmark approach.
Zhang, P, Wu, Q & Xu, J 2019, 'VN-GAN: Identity-preserved Variation Normalizing GAN for Gait Recognition', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Budapest, Hungary.
© 2019 IEEE. Gait is recognized as a unique biometric characteristic for identifying a walking person remotely across surveillance networks. However, gait recognition performance suffers severely from view angle diversity. To address this problem, an identity-preserved Variation Normalizing Generative Adversarial Network (VN-GAN) is proposed for learning purely identity-related representations. It adopts a coarse-to-fine manner, which first generates initial coarse images by normalizing the view to an identical one and then refines the coarse images by injecting identity-related information. Specifically, a Siamese structure with discriminators for both camera view angles and human identities is utilized to achieve variation normalization and identity preservation in the two stages, respectively. In addition to the discriminators, a reconstruction loss and an identity-preserving loss are integrated, which force the generated images to share the same view while remaining discriminative in identity. This ensures that identity-related images are generated in an identical view with good visual quality for gait recognition. Extensive experiments on benchmark datasets demonstrate that the proposed VN-GAN can generate visually interpretable results and achieve promising performance for gait recognition.
Zhang, P, Wu, Q & Xu, J 2019, 'VT-GAN: View Transformation GAN for Gait Recognition across Views', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Budapest, Hungary.
© 2019 IEEE. Recognizing gaits without human cooperation is important in surveillance and forensics because gait is unique and can be collected remotely. However, changes of camera view angle severely degrade the performance of gait recognition. To address this problem, previous methods usually learn mappings for each pair of views, which incurs an abundance of independently built models. In this paper, we propose a View Transformation Generative Adversarial Network (VT-GAN) to achieve view transformation of gaits across two arbitrary views using only one uniform model. Specifically, we generate gaits in the target view conditioned on input images from any view and the corresponding target view indicator. In addition to the classical discriminator in GAN, which makes the generated images look realistic, a view classifier is imposed. This controls the consistency between the generated images and the conditioned target view indicator and ensures that gaits are generated in the specified target view. On the other hand, retaining identity information while performing view transformation is another challenge. To solve this issue, an identity distilling module with a triplet loss is integrated, which constrains the generated images to inherit identity information from the inputs and yields discriminative feature embeddings. The proposed VT-GAN generates visually promising gaits and achieves promising performance for cross-view gait recognition, which demonstrates its effectiveness.
Zhang, J, Zhang, J, Wu, Q, Wu, Q, Xu, J, Lu, J, Phua, R, Curr, K & Tang, Z 2017, 'Historical image annotation by exploring the tag relevance', Proceedings - 4th Asian Conference on Pattern Recognition, ACPR 2017, IAPR Asian Conference on Pattern Recognition, IEEE, Nanjing, China, pp. 646-651.
© 2017 IEEE. Historical images usually contain enormous historical research value and are highly related to historical objects, events, background stories, etc. Therefore, annotating these images always requires selecting tags from within a large set. In this paper, we propose to annotate historical images by exploring tag relevance. We measure tag relevance from three different perspectives: its visual relevance, its dependencies with other tags, and its relationship with location-based metadata. Using tag relevance as guidance, we generate three tag sub-sets and use them to perform the annotation. Experimental results on the benchmark dataset indicate the significance of exploring tag relevance in comparison with the baseline experiments.
Zhang, P, Wu, Q, Xu, J & Jian, Z 2018, 'Long-Term Person Re-identification Using True Motion from Videos', Winter Conference on Applications of Computer Vision, IEEE, Lake Tahoe, NV, USA, pp. 494-502.
Cho, N, Wu, Q, Xu, J & Zhang, J 2016, 'Content Authoring Using Single Image in Urban Environments for Augmented Reality', Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Digital Image Computing Techniques and Applications, IEEE, Gold Coast, Australia, pp. 1-7.
Content authoring is one of the essentials of Augmented Reality (AR): placing augmented content on a real part of a scene in order to enhance users' visual experience. For single 2D street-view images, the challenge arises from cluttered environments and the unknown position and orientation of the camera. Although existing methods based on 2D feature point matching or vanishing point registration may recover the camera pose, robustness remains challenging because of the uncertainty of feature point detection in texture-less regions and the displacement of vanishing point detection caused by irregular lines detected in the scene. By taking advantage of the characteristics of man-made objects (e.g., buildings) widely seen in street views, this paper proposes a simple yet efficient content authoring approach. In this approach, the dominant building plane where the virtual object will be placed is detected and then projected to a frontal-parallel view, on which the virtual object can be reliably emplaced. Once the virtual object and the true scene are embedded together in the frontal-parallel view, they can be converted back to the original view using the inverse projection without any distortion. Experiments on public databases show that the proposed method can recover the camera pose and implement content emplacement with promising performance.
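The rectify-then-compose strategy described above can be illustrated with a short OpenCV sketch; the plane corners, file names and overlay placement below are hypothetical, and the paper's plane detection step is not reproduced.

```python
# Sketch: the dominant building plane is mapped to a frontal-parallel view with a homography,
# the virtual content is placed there, and the composite is warped back with the inverse homography.
import cv2
import numpy as np

img = cv2.imread("street_view.jpg")                                   # hypothetical input image
plane = np.float32([[120, 80], [520, 60], [540, 420], [100, 400]])    # detected plane corners (assumed)
w, h = 400, 300
frontal = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

H, _ = cv2.findHomography(plane, frontal)
rectified = cv2.warpPerspective(img, H, (w, h))                       # frontal-parallel view of the plane

overlay = cv2.imread("virtual_object.png")                            # hypothetical virtual content
rectified[50:50 + overlay.shape[0], 50:50 + overlay.shape[1]] = overlay  # emplace on the plane

back = cv2.warpPerspective(rectified, np.linalg.inv(H), (img.shape[1], img.shape[0]))
mask = back.sum(axis=2) > 0
img[mask] = back[mask]                                                # composite back into the original view
cv2.imwrite("authored.jpg", img)
```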
Yao, Y, Zhang, J, Shen, F, Hua, X, Xu, J & Tang, Z 2016, 'Automatic image dataset construction with multiple textual metadata', Proceedings - IEEE International Conference on Multimedia and Expo, IEEE International Conference on Multimedia and Expo, IEEE, Seattle, Washington, USA.
© 2016 IEEE. The goal of this work is to automatically collect a large number of highly relevant images from the Internet for given queries. A novel image dataset construction framework is proposed by employing multiple textual metadata. Specifically, the given queries are first expanded by searching in the Google Books Ngrams Corpora to obtain a richer semantic description, from which the visually non-salient and less relevant expansions are then filtered. After retrieving images from the Internet with the filtered expansions, we further filter noisy images by clustering and progressive Convolutional Neural Network (CNN) based methods. To verify the effectiveness of our proposed method, we construct a dataset with 10 categories, which is not only much larger than the manually labeled datasets STL-10 and CIFAR-10 but also has comparable cross-dataset generalization ability.
Xu, J, Wu, Q, Zhang, J, Silk, B, Ngo, GT & Tang, Z 2014, 'Efficient People Counting With Limited Manual Interfaces', 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Digital Image Computing Techniques and Applications, IEEE, Wollongong, NSW, Australia.
People counting is a topic with various practical applications. Over the last decade, two general approaches have been proposed to tackle this problem: a) counting based on individual human detection; b) counting by measuring the regression relation between crowd density and the number of people. Because the regression-based method can avoid explicit people detection, which faces several well-known challenges, it has been considered a robust method, particularly in complicated environments. An efficient regression-based method is proposed in this paper, which can be readily adopted into any existing video surveillance system. It adopts colour-based segmentation to extract foreground regions in images, and regression is established between the foreground density and the number of people. This method is fast and can deal with lighting condition changes. Experiments on public datasets and one captured dataset have shown the effectiveness and robustness of the method.
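As a toy illustration of the regression-based counting idea, the sketch below maps a simple foreground-density measure to a people count with a linear regressor; the segmentation and features used in the paper are more elaborate, and the numbers shown are hypothetical.

```python
# Toy sketch: foreground pixels come from a simple background-difference model and a
# regressor maps foreground density to a people count.
import numpy as np
from sklearn.linear_model import LinearRegression

def foreground_density(frame, background, threshold=30):
    """Fraction of pixels that differ from the background model by more than a threshold."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32)).max(axis=2)
    return (diff > threshold).mean()

# training: densities measured on annotated frames paired with manual people counts (hypothetical)
densities = np.array([[0.02], [0.05], [0.11], [0.18], [0.25]])
counts = np.array([1, 3, 7, 12, 17])
model = LinearRegression().fit(densities, counts)

# inference on a new frame against the background model
frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
background = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
estimated = model.predict([[foreground_density(frame, background)]])[0]
```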
Xu, J, Wu, Q, Zhang, J, Shen, F & Tang, Z 2013, 'Training boosting-like algorithms with semi-supervised subspace learning', 2013 IEEE International Conference on Image Processing, IEEE International Conference on Image Processing, IEEE, Melbourne, Australia, pp. 4302-4306.
Boosting algorithms have attracted great attention since the first real-time face detector by Viola & Jones, which performs feature selection and strong classifier learning simultaneously. On the other hand, researchers have proposed to decouple these two procedures to improve the performance of boosting algorithms. Motivated by this, we propose a boosting-like algorithm framework that embeds semi-supervised subspace learning methods. It selects weak classifiers based on class separability, and the combination weights of the selected weak classifiers are obtained by subspace learning. Three typical algorithms are proposed under this framework and evaluated on public datasets. As shown by our experimental results, the proposed methods obtain superior performance over their supervised counterparts and AdaBoost.
Xu, J, Wu, Q, Zhang, J & Tang, Z 2013, 'Object Detection Based on Co-Occurrence GMuLBP Features', 2012 IEEE International Conference on Multimedia and Expo, IEEE International Conference on Multimedia and Expo, IEEE Computer Society, pp. 943-948.
Image co-occurrence has shown great power in object classification because it captures the characteristics of individual features and the spatial relationships between them simultaneously. For example, the Co-occurrence Histogram of Oriented Gradients (CoHOG) has achieved great success on the human detection task. However, the gradient orientation in CoHOG is sensitive to noise. In addition, CoHOG does not take gradient magnitude into account, which is a key component for reinforcing feature detection. In this paper, we propose a new LBP feature detector based on image co-occurrence. Building on uniform Local Binary Patterns, the new feature detector detects co-occurrence orientation through gradient magnitude calculation, and is known as CoGMuLBP. An extended version of CoGMuLBP is also presented. The experimental results on the UIUC car dataset show that the proposed features outperform state-of-the-art methods.
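The sketch below illustrates the co-occurrence idea in NumPy: pairs of quantised orientation codes at a fixed offset vote into a 2-D histogram, weighted by gradient magnitude. It follows the spirit of CoGMuLBP rather than its exact definition, and the offset and code count are illustrative.

```python
# Co-occurrence histogram of quantised orientation codes, magnitude-weighted.
import numpy as np

def cooccurrence_histogram(codes, magnitude, offset=(0, 4), n_codes=8):
    """codes: per-pixel orientation codes in [0, n_codes); magnitude: per-pixel weights."""
    dy, dx = offset
    h, w = codes.shape
    a = codes[:h - dy, :w - dx]                     # code at the reference pixel
    b = codes[dy:, dx:]                             # code at the offset pixel
    wgt = np.minimum(magnitude[:h - dy, :w - dx], magnitude[dy:, dx:])
    hist = np.zeros((n_codes, n_codes))
    np.add.at(hist, (a.ravel(), b.ravel()), wgt.ravel())
    return hist / (hist.sum() + 1e-12)

codes = np.random.randint(0, 8, (64, 64))
magnitude = np.random.rand(64, 64)
descriptor = cooccurrence_histogram(codes, magnitude).ravel()   # 64-D block descriptor
```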