Zhong, Z, Zheng, L, Zheng, Z, Li, S & Yang, Y 2019, 'CamStyle: A Novel Data Augmentation Method for Person Re-Identification', IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1176-1190.
Person re-identification (re-ID) is a cross-camera retrieval task that suffers from image style variations caused by different cameras. Prior art addresses this problem implicitly by learning a camera-invariant descriptor subspace. In this paper, we address the challenge explicitly by introducing camera style (CamStyle). CamStyle serves as a data augmentation approach that both reduces the risk of deep network overfitting and smooths the camera style disparities. Specifically, with a style transfer model, labeled training images are style-transferred to each camera and, together with the original training samples, form the augmented training set. While this method increases data diversity against overfitting, it also introduces a considerable level of noise. To alleviate the impact of noise, label smoothing regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on systems with few cameras, where overfitting often occurs. With LSR, we demonstrate consistent improvement across all systems regardless of the extent of overfitting, and we report competitive accuracy compared with the state of the art on Market-1501 and DukeMTMC-reID. Importantly, CamStyle can also be applied to the challenging problems of one-view learning and unsupervised domain adaptation (UDA) in re-ID, both of which have critical research and application significance: the former has labeled data in only one camera view, and the latter has labeled data only in the source domain. Experimental results show that CamStyle significantly improves over the baseline on both problems. In particular, for UDA, CamStyle achieves state-of-the-art accuracy with a baseline deep re-ID model on Market-1501 and DukeMTMC-reID. Our code is available at: https://github.com/zhunzhong07/CamStyle .
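The LSR step mentioned in the abstract can be illustrated with a minimal sketch: the one-hot label of a (possibly noisy) style-transferred sample is softened so that a fraction of the probability mass is spread over all classes. The function below is a generic label-smoothing cross-entropy in NumPy, not the paper's exact implementation; the epsilon value is an assumption for illustration.

```python
import numpy as np

def label_smooth_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy with label smoothing regularization (LSR).

    The one-hot target is softened: the true class receives 1 - eps,
    and the remaining eps is spread uniformly over all classes, which
    discourages the network from becoming over-confident on noisy
    (e.g. style-transferred) samples.
    """
    # numerically stable log-softmax
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    # smoothed target distribution
    smooth = np.full(num_classes, eps / num_classes)
    smooth[target] += 1.0 - eps
    return float(-(smooth * log_probs).sum())
```

With eps=0 this reduces to the standard cross-entropy; a positive eps strictly increases the loss of a confident correct prediction, which is the regularizing effect exploited for the augmented samples.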
Zhong, Z, Zheng, L, Li, S & Yang, Y 2018, 'Generalizing a Person Retrieval Model Hetero- and Homogeneously', Computer Vision – ECCV 2018 (LNCS 11217), European Conference on Computer Vision, Springer, Germany, pp. 176-192.
Person re-identification (re-ID) poses unique challenges for unsupervised domain adaptation (UDA) in that classes in the source and target sets (domains) are entirely different and that image variations are largely caused by cameras. Given a labeled source training set and an unlabeled target training set, we aim to improve the generalization ability of re-ID models on the target testing set. To this end, we introduce a Hetero-Homogeneous Learning (HHL) method. Our method enforces two properties simultaneously: (1) camera invariance, learned via positive pairs formed by unlabeled target images and their camera style transferred counterparts; (2) domain connectedness, by regarding source/target images as negative matching pairs to the target/source images. The first property is implemented by homogeneous learning because training pairs are collected from the same domain. The second property is achieved by heterogeneous learning because we sample training pairs from both the source and target domains. On Market-1501, DukeMTMC-reID and CUHK03, we show that the two properties contribute indispensably and that very competitive re-ID UDA accuracy is achieved. Code is available at: https://github.com/zhunzhong07/HHL.
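The two pair constructions described above can be sketched as a single triplet-style term: the anchor's camera-style-transferred copy acts as the positive (camera invariance), and an image from the other domain acts as the negative (domain connectedness). This is a simplified illustration, not the paper's full loss; the function name, margin value, and use of plain Euclidean distance on normalized features are all assumptions.

```python
import numpy as np

def l2n(x):
    """L2-normalize a feature vector."""
    return x / np.linalg.norm(x)

def hhl_style_triplet(anchor, style_pos, cross_neg, margin=0.3):
    """Hypothetical triplet-style sketch of the two HHL properties.

    anchor:    feature of an unlabeled target image
    style_pos: feature of its camera-style-transferred counterpart
               (pulled close -> camera invariance)
    cross_neg: feature of an image from the other domain
               (pushed away -> domain connectedness)
    """
    a, p, n = l2n(anchor), l2n(style_pos), l2n(cross_neg)
    d_pos = np.linalg.norm(a - p)
    d_neg = np.linalg.norm(a - n)
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the style-transferred positive is closer to the anchor than the cross-domain negative by at least the margin.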
Zhong, Z, Zheng, L, Zheng, Z, Li, S & Yang, Y 2018, 'Camera Style Adaptation for Person Re-identification', IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Salt Lake City, UT, USA.
Zhong, Z, Zheng, L, Cao, D & Li, S 2017, 'Re-ranking Person Re-identification with k-reciprocal Encoding', Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), IEEE, Honolulu, HI, USA.
When considering person re-identification (re-ID) as a retrieval process, re-ranking is a critical step in improving its accuracy. Yet in the re-ID community, limited effort has been devoted to re-ranking, especially to fully automatic, unsupervised solutions. In this paper, we propose a k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is that if a gallery image is similar to the probe in the k-reciprocal nearest neighbors, it is more likely to be a true match. Specifically, given an image, a k-reciprocal feature is calculated by encoding its k-reciprocal nearest neighbors into a single vector, which is used for re-ranking under the Jaccard distance. The final distance is computed as a combination of the original distance and the Jaccard distance. Our re-ranking method requires no human interaction and no labeled data, so it is applicable to large-scale datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW datasets confirm the effectiveness of our method.
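The core idea of the abstract, k-reciprocal neighbor sets combined with a Jaccard distance, can be sketched as below. This is a simplified illustration that omits the paper's local query expansion and Gaussian-weighted k-reciprocal feature encoding; the k and weighting values are assumptions.

```python
import numpy as np

def k_reciprocal_neighbors(dist, i, k):
    """Indices j that are in i's k-NN while i is also in j's k-NN."""
    knn_i = np.argsort(dist[i])[:k + 1]  # +1 because i itself is included
    return {j for j in knn_i if i in np.argsort(dist[j])[:k + 1]}

def re_rank(dist, k=20, lam=0.3):
    """Simplified k-reciprocal re-ranking.

    The Jaccard distance between two images is computed from the overlap
    of their k-reciprocal neighbor sets, then blended with the original
    distance: (1 - lam) * jaccard + lam * original.
    """
    n = dist.shape[0]
    R = [k_reciprocal_neighbors(dist, i, k) for i in range(n)]
    jaccard = np.zeros_like(dist, dtype=float)
    for i in range(n):
        for j in range(n):
            union = len(R[i] | R[j])
            jaccard[i, j] = 1.0 - len(R[i] & R[j]) / union if union else 1.0
    return (1.0 - lam) * jaccard + lam * dist
```

Because both the neighbor sets and the blend use only pairwise distances, the procedure needs no labels or human interaction, matching the unsupervised setting described above.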