Ben, X, Gong, C, Zhang, P, Yan, R, Wu, Q & Meng, W 2020, 'Coupled Bilinear Discriminant Projection for Cross-View Gait Recognition', IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 3, pp. 734-747.
A problem that hinders good performance in general gait recognition systems is that the appearance features of gaits are more strongly affected by view than by identity, especially when the walking direction of the probe gait differs from that of the registered gait. This problem cannot be solved by traditional projection learning methods: because they learn only a single projection matrix, they cannot transform the cross-view gait features of the same subject into similar ones. This paper presents an innovative method that overcomes this problem by aligning gait energy images (GEIs) across views with a coupled bilinear discriminant projection (CBDP). Specifically, the CBDP generates aligned gait matrix features for two views with two sets of bilinear transformation matrices, so that the spatial structure of the original GEIs is preserved. By iteratively maximizing the ratio of an inter-class distance metric to an intra-class distance metric, the CBDP learns the optimal matrix subspace in which the GEIs across views are aligned in both the horizontal and vertical coordinates. The CBDP is therefore also able to avoid the under-sample problem. We further prove that the upper and lower bounds of the CBDP's objective function sequence are both monotonically increasing, which establishes its convergence. In terms of accuracy, comparative experiments on the CASIA(B) and OU-ISIR gait databases show that our method is superior to state-of-the-art cross-view gait recognition methods. More impressively, our method obtains encouraging performance even when matching a lateral-view gait with a frontal-view gait.
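The iterative scatter-ratio maximization with bilinear (row and column) projections described in this abstract can be sketched in a general form. The code below is a hedged illustration of a single-view bilinear discriminant step with alternating updates, not the paper's exact CBDP (which couples two such projection sets across views); all function names, dimensions, and the eigen-solver choice are assumptions for demonstration only.

```python
# Illustrative sketch of alternating maximization of an inter-/intra-class
# scatter ratio for a pair of bilinear projections (L, R) applied to
# matrix features (e.g. GEIs) as L.T @ X @ R. NOT the paper's CBDP.
import numpy as np

def scatters(mats, ys):
    """Between-/within-class scatter of the row spaces of matrix features."""
    d = mats[0].shape[0]
    mean_all = sum(mats) / len(mats)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in set(ys):
        grp = [m for m, y in zip(mats, ys) if y == c]
        mc = sum(grp) / len(grp)
        Sb += len(grp) * (mc - mean_all) @ (mc - mean_all).T
        for m in grp:
            Sw += (m - mc) @ (m - mc).T
    return Sb, Sw

def ratio_step(Sb, Sw, k):
    """One ratio-maximization step: top-k eigenvectors of pinv(Sw) @ Sb."""
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    idx = np.argsort(-vals.real)[:k]
    return vecs[:, idx].real

def bilinear_discriminant(Xs, ys, k=2, iters=5):
    """Xs: list of (h, w) gait matrices; ys: class labels."""
    h, w = Xs[0].shape
    L, R = np.eye(h)[:, :k], np.eye(w)[:, :k]   # initial projections
    for _ in range(iters):
        # Fix R and update L from the scatters of X @ R ...
        Sb, Sw = scatters([X @ R for X in Xs], ys)
        L = ratio_step(Sb, Sw, k)
        # ... then fix L and update R from the scatters of X.T @ L.
        Sb, Sw = scatters([X.T @ L for X in Xs], ys)
        R = ratio_step(Sb, Sw, k)
    return L, R
```

Alternating the two updates is what lets the method preserve the 2-D spatial structure of the GEI instead of vectorizing it, which is also why the under-sample problem is avoided: the scatter matrices are h-by-h and w-by-w rather than hw-by-hw.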
Huang, Y, Xu, J, Wu, Q, Zhong, Y, Zhang, P & Zhang, Z 2020, 'Beyond Scalar Neuron: Adopting Vector-Neuron Capsules for Long-Term Person Re-Identification', IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-1.
Zhang, P, Xu, J, Wu, Q, Huang, Y & Zhanga, J 2020, 'Top-Push Constrained Modality-Adaptive Dictionary Learning for Cross-Modality Person Re-Identification', IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-1.
Ben, X, Zhang, P, Lai, Z, Yan, R, Zhai, X & Meng, W 2019, 'A general tensor representation framework for cross-view gait recognition', Pattern Recognition, vol. 90, pp. 87-98.
Tensor analysis methods have played an important role in identifying human gaits from high-dimensional data. However, when view angles change, it becomes increasingly difficult to recognize cross-view gaits by learning only a single set of multi-linear projection matrices. To address this problem, a general tensor representation framework for cross-view gait recognition is proposed in this paper. The framework contains three criteria of tensorial coupled mappings. (1) The coupled multi-linear locality-preserving criterion (CMLP) aims to detect the essential tensorial manifold structure by preserving local information. (2) The coupled multi-linear marginal Fisher criterion (CMMF) aims to encode intra-class compactness and inter-class separability with local relationships. (3) The coupled multi-linear discriminant analysis criterion (CMDA) aims to minimize the intra-class scatter and maximize the inter-class scatter. For each of the three tensor algorithms, two sets of multi-linear projection matrices are learned iteratively using alternating projection optimization procedures. The proposed methods are compared with recently published cross-view gait recognition approaches on the CASIA(B) and OU-ISIR gait databases. The results demonstrate that the proposed methods outperform existing state-of-the-art cross-view gait recognition approaches.
Ben, X, Gong, C, Zhang, P, Jia, X, Wu, Q & Meng, W 2019, 'Coupled Patch Alignment for Matching Cross-View Gaits', IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 3142-3157.
Gait recognition has attracted growing attention in recent years, as human gait retains strong discriminative ability even at low resolution and at a distance. Unfortunately, the performance of gait recognition can be largely affected by view change. To address this problem, we propose a coupled patch alignment (CPA) algorithm that effectively matches a pair of gaits across different views. To realize CPA, we first build a number of patches, each made up of a sample together with its intra-class and inter-class nearest neighbors. Then, we design an objective function for each patch to balance cross-view intra-class compactness against cross-view inter-class separability. Finally, all the locally independent patches are combined to render a unified objective function. Theoretically, we show that the proposed CPA has a close relationship with canonical correlation analysis. Algorithmically, we extend CPA to 'multi-dimensional patch alignment', which can handle an arbitrary number of views. Comprehensive experiments on the CASIA(B), USF, and OU-ISIR gait databases firmly demonstrate the effectiveness of our methods over other popular methods in terms of cross-view gait recognition.
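Since the abstract notes CPA's close theoretical relationship with canonical correlation analysis, a minimal CCA sketch helps illustrate the underlying idea of learning one projection per view so that paired cross-view samples become maximally correlated. This is a generic CCA implementation under standard whitening assumptions, not the CPA algorithm itself.

```python
# Minimal canonical correlation analysis: learn projections Wx, Wy for
# two views so that projected paired samples are maximally correlated.
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """X: (n, dx), Y: (n, dy) paired samples; returns projections Wx, Wy."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance matrices.
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten both views, then take the SVD of the whitened cross-covariance.
    Wx_white = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy_white = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx_white.T @ Cxy @ Wy_white)
    Wx = Wx_white @ U[:, :k]
    Wy = Wy_white @ Vt[:k].T
    return Wx, Wy
```

In the cross-view gait setting, X and Y would hold features of the same subjects seen from two different view angles; CPA refines this kind of coupled projection with patch-level intra-/inter-class constraints.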
Yao, X, Wu, Q, Zhang, P & Bao, F 2019, 'Adaptive rational fractal interpolation function for image super-resolution via local fractal analysis', Image and Vision Computing, vol. 82, pp. 39-49.
Image super-resolution aims to generate a high-resolution image from a given low-resolution image and to recover the details of the image. Common approaches include reconstruction-based methods and interpolation-based methods. However, these existing methods have difficulty processing regions of an image with complicated texture. To tackle this problem, fractal geometry is applied to image super-resolution, as it is well suited to describing the complicated details in an image. Common fractal-based methods regard the whole image as a single fractal set; that is, they do not distinguish the differing texture complexity across the regions of an image, whether smooth or texture-rich. Because of this strong presumption, they introduce artifacts when recovering smooth areas and blur the texture in texture-rich regions. In this paper, the proposed method builds a rational fractal interpolation model with different settings in different regions to adapt to the local texture complexity. To facilitate this mechanism, the proposed method segments the image into regions according to complexity, as measured by the local fractal dimension. The image super-resolution process is thus cast as an optimization problem in which the local fractal dimension of each region is optimized until convergence is reached. During the optimization (i.e., super-resolution), the overall image complexity (determined by the local fractal dimensions) is maintained. Compared with state-of-the-art methods, the proposed method shows promising performance in both qualitative and quantitative evaluations.
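The local fractal dimension used above to rank region complexity is commonly estimated by box counting. The sketch below shows that generic estimator on a binarized patch; the binarization threshold and dyadic box sizes are illustrative assumptions, not the paper's exact procedure.

```python
# Box-counting estimate of the fractal dimension of a patch: count how
# many boxes of shrinking side length contain structure, then fit the
# slope of log(count) against log(1/size).
import numpy as np

def box_count_dimension(patch, threshold=0.5):
    """Estimate the fractal dimension of the binary structure in `patch`."""
    binary = patch > threshold
    sizes, counts = [], []
    size = min(binary.shape) // 2
    while size >= 1:
        # Count boxes of side `size` containing at least one set pixel.
        count = 0
        for i in range(0, binary.shape[0], size):
            for j in range(0, binary.shape[1], size):
                if binary[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    # Slope of log(count) vs log(1/size) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled patch scores near 2 and a thin edge near 1, so thresholding this value is one simple way to separate texture-rich regions from smooth ones, in the spirit of the segmentation the abstract describes.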
Ben, X, Zhang, P, Meng, W, Yan, R, Yang, M, Liu, W & Zhang, H 2016, 'On the distance metric learning between cross-domain gaits', Neurocomputing, vol. 208, pp. 153-164.
Gait recognition degrades dramatically when gaits are captured from different directions or at different distances, owing to the low similarity between the registration and the query. This paper addresses the distance metric learning problem of matching between cross-domain gaits. Most existing distance metric learning algorithms can only match within a set of single-domain gaits and fail to measure the similarity of cross-domain gaits. Traditional gait recognition faces serious challenges, such as varying low-resolution images caused by acquisition at different distances or with different sampling devices, and varying body shapes captured by cameras from different directions. This paper presents a novel nonlinear coupled mappings (NCM) algorithm that successfully matches cross-domain gaits. The relationships within the training data are modeled as nodes in a graph in the kernel space, and a constraint is designed to minimize the difference between cross-domain gaits of an identical subject. Meanwhile, a supervised similarity matrix makes the cross-domain gaits of different subjects disperse more separately with a large margin. Comprehensive experiments show that the proposed algorithm obtains higher accuracy than state-of-the-art algorithms.
Zhang, P, Wu, Q & Xu, J 2019, 'VN-GAN: Identity-preserved Variation Normalizing GAN for Gait Recognition', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Budapest, Hungary.
Gait is recognized as a unique biometric characteristic for identifying a walking person remotely across surveillance networks. However, the performance of gait recognition suffers severely from view angle diversity. To address this problem, an identity-preserved Variation Normalizing Generative Adversarial Network (VN-GAN) is proposed for learning purely identity-related representations. It adopts a coarse-to-fine approach that first generates initial coarse images by normalizing the view to an identical one and then refines the coarse images by injecting identity-related information. Specifically, a Siamese structure with discriminators for both camera view angles and human identities is utilized to achieve variation normalization and identity preservation in the two stages, respectively. In addition to the discriminators, a reconstruction loss and an identity-preserving loss are integrated, which force the generated images to share the same view while remaining discriminative in identity. This ensures the generation of identity-related images in an identical view with good visual quality for gait recognition. Extensive experiments on benchmark datasets demonstrate that the proposed VN-GAN can generate visually interpretable results and achieve promising performance for gait recognition.
Zhang, P, Wu, Q & Xu, J 2019, 'VT-GAN: View Transformation GAN for Gait Recognition across Views', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Budapest, Hungary.
Recognizing gaits without human cooperation is important in surveillance and forensics because gait is unique and can be collected remotely. However, changes of camera view angle severely degrade the performance of gait recognition. To address this problem, previous methods usually learn a mapping for each pair of views, which incurs an abundance of independently built models. In this paper, we propose a View Transformation Generative Adversarial Network (VT-GAN) to achieve view transformation of gaits across two arbitrary views using only one uniform model. Specifically, we generate gaits in the target view conditioned on input images from any view and the corresponding target view indicator. In addition to the classical GAN discriminator, which makes the generated images look realistic, a view classifier is imposed. This enforces consistency between the generated images and the conditioned target view indicator, ensuring that gaits are generated in the specified target view. Retaining identity information while performing view transformation is another challenge. To solve this issue, an identity distilling module with a triplet loss is integrated, which constrains the generated images to inherit identity information from the inputs and yields discriminative feature embeddings. The proposed VT-GAN generates visually promising gaits and achieves promising performance for cross-view gait recognition.
Zhang, P, Wu, Q, Xu, J & Jian, Z 2018, 'Long-Term Person Re-identification Using True Motion from Videos', Winter Conference on Applications of Computer Vision, IEEE, Lake Tahoe, NV, USA, pp. 494-502.
Song, H, Dong, H & Zhang, P 2017, 'A virtual instrument for diagnosis to substation grounding grids in harsh electromagnetic environment', Proceedings of the I2MTC 2017 - 2017 IEEE International Instrumentation and Measurement Technology Conference, Proceedings, IEEE International Instrumentation and Measurement Technology Conference, IEEE, Turin, Italy, pp. 1-6.
To improve on traditional electromagnetic induction methods for grounding grid diagnosis, a virtual instrument is proposed to detect weak signals in the harsh and noisy substation electromagnetic environment. The induced electric signal in the coil of wire, containing multi-source and intense noise, is collected by an NI DAQ device. Virtual phase-sensitive detectors and digital filters then constitute a lock-in amplifier to suppress the noise. In an experimental test, a small-scale grounding grid model with a man-made break is diagnosed. The visualized result shows the feasibility and accuracy of the proposed virtual instrument.
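The lock-in amplifier built from phase-sensitive detectors and digital filters that this abstract describes follows a classic signal-processing pattern, sketched generically below: multiply the noisy input by in-phase and quadrature references at the stimulus frequency, then low-pass filter (here a plain mean over an integer number of periods) to recover amplitude and phase. The signal parameters are illustrative assumptions, not the instrument's actual settings.

```python
# Minimal digital lock-in detection: recover the amplitude and phase of
# a known-frequency component buried in broadband noise.
import numpy as np

def lock_in(signal, fs, f_ref):
    """signal: sampled input; fs: sample rate (Hz); f_ref: reference (Hz)."""
    t = np.arange(len(signal)) / fs
    # Phase-sensitive detection: mix with quadrature references, average.
    i_comp = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    q_comp = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    amplitude = 2 * np.hypot(i_comp, q_comp)
    phase = np.arctan2(-q_comp, i_comp)
    return amplitude, phase
```

Because uncorrelated noise averages toward zero in the mixing products, a weak induced signal at the excitation frequency can be pulled out of noise many times its own amplitude, which is exactly why the technique suits the substation environment described.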