Dr Wei Liu is a Senior Lecturer and Data Science Research Leader at the School of Software and the Advanced Analytics Institute at the University of Technology Sydney (UTS). Before joining UTS, he was a Machine Learning Researcher and Project Manager at National ICT Australia (now Data61). He obtained his PhD in data mining from the University of Sydney (USYD). His research outputs are published in prestigious journals and conferences, and he has received three Best Paper Awards.
Dr Liu has been leading industry-driven data analytics research that makes a real-world impact. He has led a number of significant research projects funded by government agencies and industrial organisations, spanning the internet security, insurance, trading, transportation, and infrastructure sectors. He has developed advanced data mining models and software tools that accurately identify the causes of network incidents. He has also designed cutting-edge predictive models for problems including rare event prediction, fraud/intrusion detection, and emerging trend detection.
Dr Liu is a committee member of world-leading international Data Mining and Artificial Intelligence conferences, such as KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), AAAI (the AAAI Conference on Artificial Intelligence), and ICDM (the IEEE International Conference on Data Mining). He has also been a reviewer for top journals such as TKDE, TNNLS, TKDD and TPAMI.
Can supervise: YES
Main Research Interests:
- Tensor and matrix factorization
- Deep learning, Representation learning
- Causal inference, Granger causality
- Game theoretical modeling, Adversarial learning
- Graph mining, Dynamic network analysis
- Anomaly and Outlier detection
Data Mining and Knowledge Discovery; Data Analytics; Databases
Yang, P, Ormerod, JT, Liu, W, Ma, C, Zomaya, AY & Yang, JYH 2019, 'AdaSampling for Positive-Unlabeled and Label Noise Learning With Bioinformatics Applications', IEEE Transactions on Cybernetics, vol. 49, no. 5, pp. 1932-1943.
Class labels are required for supervised learning but may be corrupted or missing in various applications. In binary classification, for example, when only a subset of positive instances is labeled whereas the remaining are unlabeled, positive-unlabeled (PU) learning is required to model from both positive and unlabeled data. Similarly, when class labels are corrupted by mislabeled instances, methods are needed for learning in the presence of class label noise (LN). Here we propose adaptive sampling (AdaSampling), a framework for both PU learning and learning with class LN. By iteratively estimating the class mislabeling probability with an adaptive sampling procedure, the proposed method progressively reduces the risk of selecting mislabeled instances for model training and subsequently constructs highly generalizable models even when a large proportion of mislabeled instances is present in the data. We demonstrate the utility of the proposed methods using simulation and benchmark data, and compare them to alternative approaches that are commonly used for PU learning and/or learning with LN. We then introduce two novel bioinformatics applications where AdaSampling is used to: 1) identify kinase substrates from mass spectrometry-based phosphoproteomics data and 2) predict transcription factor target genes by integrating various next-generation sequencing data.
Yin, Z, Wang, F, Liu, W & Chawla, S 2018, 'Sparse Feature Attacks in Adversarial Learning', IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 6, pp. 1164-1177.
Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include classification for detecting spam, network intrusion detection, and credit card scoring. In fact, as the use of machine learning grows in diverse application domains, the possibility for adversarial behavior is likely to increase. When adversarial learning is modelled in a game-theoretic setup, the standard assumption about the adversary (player) behavior is the ability to change all features of the classifiers (the opponent player) at will. The adversary pays a cost proportional to the size of the 'attack'. We refer to this form of adversarial behavior as a dense feature attack. However, the aim of an adversary is not just to subvert a classifier but to transform the data in such a way that spam remains effective. We demonstrate that an adversary could potentially achieve this objective by carrying out a sparse feature attack. We design an algorithm to show how a classifier should be designed to be robust against sparse adversarial attacks. Our main insight is that sparse feature attacks are best defended by designing classifiers which use ℓ1 regularizers.
Liu, W & Huang, J 2017, 'Adaptive leader-following consensus for a class of higher-order nonlinear multi-agent systems with directed switching networks', Automatica, vol. 79, pp. 84-92.
Nguyen, H, Liu, W & Chen, F 2017, 'Discovering Congestion Propagation Patterns in Spatio-Temporal Traffic Data', IEEE Transactions on Big Data, vol. 3, no. 2, pp. 169-180.
Traffic congestion is a condition of a segment in the road network where the traffic demand is greater than the available road capacity. The detection of unusual traffic patterns, including congestions, is a significant research problem in the data mining and knowledge discovery community. However, to the best of our knowledge, the discovery of propagations, or causal interactions, among detected traffic congestions has not been appropriately investigated before. In this research, we introduce algorithms which construct causality trees from congestions and estimate their propagation probabilities based on temporal and spatial information of the congestions. Frequent sub-structures of these causality trees reveal not only recurring interactions among spatio-temporal congestions, but also potential bottlenecks or flaws in the design of existing traffic networks. Our algorithms have been validated by experiments on a travel time data set recorded from an urban road network.
Liu, W, Wang, Z, Dai, H & Naz, M 2016, 'Dynamic output feedback control for fast sampling discrete-time singularly perturbed systems', IET Control Theory and Applications, vol. 10, no. 15, pp. 1782-1788.
This study is concerned with the dynamic output feedback control problem for fast sampling discrete-time singularly perturbed systems using the singular perturbation approach. Sufficient conditions in terms of linear matrix inequalities (LMIs) are presented to guarantee the existence of a dynamic output feedback controller for the corresponding slow and fast subsystems, respectively. The controller gains and the corresponding coefficient matrices can be obtained by solving the proposed LMIs. Thus, not only are the high dimensionality and ill-conditioning alleviated, but the regularity restrictions attached to Riccati-based solutions are also avoided. The theoretical result demonstrates that the composite dynamic output feedback control designed through those of the slow and fast subsystems can stabilise the full-order discrete-time singularly perturbed system. Finally, two practical examples are provided to show the effectiveness of the obtained results.
This paper is concerned with the problem of robust observer-based absolute stabilization for Lur'e singularly perturbed time-delay systems. The aim is to design a suitable observer-based feedback control law such that the resulting closed-loop system is absolutely stable. First, a full-order state observer is constructed. Based on the linear matrix inequality (LMI) technique, a delay-dependent sufficient condition is presented such that the observer error system is absolutely stable. Then, for observer-based feedback control, by introducing some slack matrices, a sufficient condition for input-to-state stability (ISS) of the closed-loop system with regard to the observer error is presented. Thus, the absolute stabilization of the closed-loop system can be guaranteed based on the ISS property. In addition, the criteria presented are independent of the small parameter, and the upper bound for absolute stability can be obtained via a workable algorithm. Finally, two numerical examples are provided to illustrate the effectiveness of the developed methods.
Liu, W, Wang, Y & Wang, Z 2016, 'H∞ observer-based sliding mode control for singularly perturbed systems with input nonlinearity', Nonlinear Dynamics, vol. 85, no. 1, pp. 573-582.
This paper considers the problem of H∞ observer-based sliding mode control for singularly perturbed systems with input nonlinearities. First, a proper observer is designed such that the observer error system with a prescribed disturbance attenuation level is asymptotically stable. Then, an observer-based sliding surface is constructed under which a criterion for the input-to-state stability (ISS) of the sliding mode dynamics with respect to the observer error is obtained via linear matrix inequality. The criterion presented is independent of the small parameter, and the upper bound for ISS can be obtained efficiently. In addition, a sliding mode control law is synthesized to guarantee the reachability of the sliding surface in the state estimation space. Finally, a numerical example is presented to demonstrate the effectiveness of the proposed theoretical results.
Ben, X, Zhang, P, Meng, W, Yan, R, Yang, M, Liu, W & Zhang, H 2016, 'On the distance metric learning between cross-domain gaits', Neurocomputing, vol. 208, pp. 153-164.
Gait recognition degrades dramatically when gaits are captured from different directions or at different distances, due to the low similarity between the registration and the query. This paper addresses the distance metric learning problem in matching between dual cross-domain gaits. Most existing distance metric learning algorithms are only able to match among a set of single-domain gaits, but fail to measure the similarity of cross-domain gaits. Traditional gait recognition faces serious challenges, such as various low-resolution images, caused by acquisition at different distances or with different sampling devices, and various body shapes captured by cameras from different directions. This paper presents a novel nonlinear coupled mappings (NCMs) algorithm to successfully match between cross-domain gaits. The relationships within the training data are modeled as nodes in a graph in the kernel space, and the constraint is designed to minimize the difference between cross-domain gaits of an identical subject. Meanwhile, it makes the cross-domain gaits of different subjects disperse more separately with a large margin by using the supervised similarity matrix. Comprehensive experiments show that the proposed algorithm obtains higher accuracy than state-of-the-art algorithms.
Li, J, Liu, D, Hua, R, Zhang, J, Liu, W, Huo, Y, Cheng, Y, Hong, J & Sun, Y 2014, 'Long non-coding RNAs expressed in pancreatic ductal adenocarcinoma and lncRNA BC008363 an independent prognostic factor in PDAC', Pancreatology, vol. 14, no. 5, pp. 385-390.
BACKGROUND: Long non-coding RNAs (lncRNAs) are a novel class of mRNA-like transcripts with no protein coding capacity, but with a variety of functions including roles in epigenetics and gene regulation. In recent reports, the aberrant expression of lncRNAs has been associated with human cancers, suggesting a critical role in tumorigenesis. METHOD: In the present study, we analyzed mRNA and lncRNA expression in pancreatic ductal adenocarcinoma (PDAC) using a microarray platform, and analyzed lncRNA expression from 30 patients who underwent pancreatectomy for pancreatic ductal adenocarcinoma by real-time PCR. We also investigated the relationship between the expression of BC008363 and the prognosis of PDAC patients by Kaplan-Meier analyses. RESULTS: Based on the microarray data, 1881 lncRNAs were upregulated and 3369 lncRNAs were downregulated (fold change ≥ 4.0 or ≤ 0.4, p < 0.01) in the PDAC group compared with the control group. We found that the expression level of lncRNA BC008363 was significantly lower (23-fold) in PDAC tissues compared to corresponding nontumor pancreatic tissues, and patients with high levels of lncRNA BC008363 expression had significantly better survival rates than those with low levels. CONCLUSIONS: These data indicate that different lncRNAs are expressed in pancreatic cancer biology and that lncRNA BC008363 may be a novel biomarker for the prognosis of pancreatic cancer.
Network methods have had a profound influence on many domains and disciplines in the past decade. Community structure is a very important property of complex networks, but the accurate definition of a community remains an open problem. Here we define a community based on three properties, and then propose a simple and novel framework to detect communities based on network topology. We analyzed 16 different types of networks and compared our partitions with those of Infomap, LPA, Fastgreedy and Walktrap, which are popular algorithms for community detection. Most of the partitions generated using our approach compare favorably to those generated by these other algorithms. Furthermore, we define overlapping nodes by combining community structure with shortest paths. We also analyzed the E. coli transcriptional regulatory network in detail, and identified modules with strong functional coherence.
Zhou, X, Liu, W, Niu, Q, Wang, P & Jiang, K 2014, 'Locator layout optimization for checking fixture design of thin-walled parts', Key Engineering Materials, vol. 572, no. 1, pp. 593-596.
The location of a checked part is one of the most critical and complicated problems in checking fixture design for thin-walled parts with low stiffness. An unreasonable layout of locators gives rise to high deflection of the checked part and affects measurement accuracy. Based on the "N-2-1" locating principle, an optimization method is presented to determine the locator layout, in which the finite element method is used to calculate the maximal deformation of the workpiece, which serves as the objective function, and Particle Swarm Optimization, with good global convergence performance, is adopted as the optimization solver. Finally, a case study is presented to verify the proposed method.
Liu, W, Wang, Z & Ni, M 2013, 'Controlled synchronization for chaotic systems via limited information with data packet dropout', Automatica, vol. 49, no. 8, pp. 2576-2579.
This note addresses a controlled synchronization problem for chaotic systems involving a communication network with data packet dropout. A chaotic master system and its slave system are connected with a controller via a limited channel, and data packet dropout is modeled as a Bernoulli process. A coder-decoder pair is then designed with the controller such that the master and slave systems are completely synchronized, rather than merely bounded. The necessary data capacity of the channel is stated explicitly. Finally, a numerical example for the 3-double-scroll chaotic system is applied to illustrate the effectiveness of the obtained result.
Qian, C, Zhang, M, Luo, HW & Liu, W 2013, 'Joint Tomlinson-Harashima source and linear relay precoder design in amplify-and-forward multiple-input multiple-output two-way relay systems', Journal of Shanghai Jiaotong University (Science), vol. 18, no. 2, pp. 180-185.
Existing minimum-mean-squared-error (MMSE) transceiver designs in amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way relay systems all assume a linear precoder at the sources. Nonlinear source precoders in such systems have not yet been considered. In this paper, we study the joint design of source Tomlinson-Harashima precoders (THPs), a linear relay precoder and MMSE receivers in two-way relay systems. This joint design is a highly nonconvex optimization problem. By dividing the original problem into three sub-problems, we propose an iterative algorithm to optimize the precoders and receivers. The convergence of the algorithm is ensured since the updated solution is optimal for each sub-problem. Numerical simulation results show that the proposed iterative algorithm outperforms other algorithms in the high signal-to-noise ratio (SNR) region.
Pang, LX, Chawla, S, Liu, W & Zheng, Y 2013, 'On detection of emerging anomalous traffic patterns using GPS data', Data & Knowledge Engineering, vol. 87, pp. 357-373.
Lu, BS, Liu, W, Yu, H, Luo, HW & Wang, HL 2012, 'A low complexity soft-output MIMO sphere decoding algorithm', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 46, no. 11, pp. 1833-1837.
This paper proposes a novel low-complexity soft-output sphere decoding algorithm for multiple-input multiple-output (MIMO) systems. Building on the traditional Dijkstra sphere decoding algorithm, it uses a look-up table and a single tree-search mechanism to update soft values (log-likelihood ratios, LLRs), improving the point enumeration and stack push/pop operations of Dijkstra sphere decoding and reducing storage cost. Without degrading system performance, the proposed algorithm efficiently reduces receiver complexity. Simulation results show that the proposed sphere decoding algorithm and maximum likelihood (ML) decoding achieve almost identical performance under different modulation modes, while the complexity of the proposed algorithm is sharply reduced.
Liu, W, Wang, YY, Wang, ZM & Li, JH 2012, 'Quantized synchronization for a class of chaotic systems', Kongzhi Lilun Yu Yingyong/Control Theory and Applications, vol. 29, no. 9, pp. 1227-1231.
This paper investigates synchronization for a class of continuous chaotic systems with information constraints. A general chaotic master system and its observer-based response system are connected through a limited-capacity communication channel. A proper quantization scheme is designed such that the synchronization error caused by the transmission error is input-to-state stable (ISS). Meanwhile, the transmission error decays to zero exponentially. This indicates that the synchronization error converges to zero asymptotically in a communication channel of limited capacity. A simulation example is presented to show the effectiveness of the proposed approach.
Li, XN, Luo, HW, Ding, M, Liu, W & Ma, JP 2012, 'An adaptive precoding technology for MIMO cluster system', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 46, no. 6.
An efficient precoding technology was proposed for a multi-cell multiple-input multiple-output (MIMO) system consisting of several clusters. According to the variation of channel state information (CSI) over two contiguous slots, the central processor chooses each user's precoding codeword between two precoding schemes and uses one extra bit of overhead to indicate the selected scheme. Simulation results show that the proposed scheme achieves better throughput performance and lower total backhaul overhead than traditional schemes.
Liu, W, Jiang, WY, Luo, H & Ding, M 2012, 'A novel user pairing algorithm for uplink virtual MIMO systems', IEICE Transactions on Communications, vol. E95-B, no. 7, pp. 2485-2488.
The conventional semi-orthogonal user pairing algorithm in uplink virtual MIMO systems can improve the total system throughput, but it usually fails to maintain good throughput for users experiencing relatively poor channel conditions. A novel user pairing algorithm is presented in this paper to solve this fairness issue. Based on our analysis of the MMSE receiver, a new criterion called "inverse selection" is proposed for use in conjunction with semi-orthogonal user selection. Simulation results show that the proposed algorithm can significantly improve the throughput of users with poor channel conditions at only a small reduction of the overall throughput.
Liu, W, Wang, Z & Zhang, W 2012, 'Controlled synchronization of discrete-time chaotic systems under communication constraints', Nonlinear Dynamics, vol. 69, no. 1-2, pp. 223-230.
This paper investigates the controlled synchronization problem for a class of nonlinear discrete-time chaotic systems subject to limited communication capacity. A general chaotic master system and its slave system with a controller are connected via a limited capacity channel. In this case, the effect of quantization errors is considered. A practical quantized scheme is proposed so that the synchronization error is input-to-state stable with respect to the transmission error. Meanwhile, the transmission error decays to zero exponentially. This implies that the synchronization error converges to zero under a limited communication channel. A simulation example for the Fold chaotic system is presented to illustrate the effectiveness of the proposed method.
Liu, W, Zou, J, Luo, HW & Ma, JP 2011, 'Joint user and antenna selection for multiuser MIMO downlink with block diagonalization', Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 691-695.
User selection is necessary for multiuser multiple-input multiple-output (MIMO) downlink systems with block diagonalization (BD) due to the limited free spatial transmit dimensions. The pure user selection algorithms can be improved by performing receive antenna selection (RAS) to increase sum rate. In this paper, a joint user and antenna selection algorithm, which performs user selection for sum rate maximization in the first stage and then performs antenna selection in the second stage, is proposed. The antenna selection process alternately drops one antenna with the poorest channel quality based on maximum determinant ranking (MDR) from the users selected during the first stage and activates one antenna with the maximum norm of projected channel from the remaining users. Simulation results show that the proposed algorithm significantly outperforms the algorithm only performing user selection as well as the algorithm combining user selection with MDR receive antenna selection in terms of sum rate.
Liu, W, Lin, QD & Wang, SJ 2011, 'Expression of hypoxia-inducible factor prolyl 4-hydroxylase in placentas of normal pregnant women and patients with pre-eclampsia', Journal of Shanghai Jiaotong University (Medical Science), vol. 31, no. 11, pp. 1616-1620.
Objective To investigate the expression of hypoxia-inducible factor prolyl 4-hydroxylase (HPH) in placentas of normal pregnant women and patients with pre-eclampsia, and to explore the relationship between oxygen sensitivity of trophoblast and hypoxia in preeclamptic placenta. Methods Sixty-six pregnant women undergoing cesarean section or family planning surgery were divided into an early pregnancy group (n = 13), mid-pregnancy group (n = 9), late pregnancy group (n = 13, controls for the pre-eclampsia and gestational hypertension groups), pre-eclampsia group (n = 20) and gestational hypertension group (n = 11). The expression of HPH-1, HPH-2 and HPH-3 mRNA in placentas and villous tissues was determined by in situ hybridization and real-time PCR. Results HPH-1, HPH-2 and HPH-3 mRNA was mainly expressed in the cytoplasm of trophoblast, and HPH-1 mRNA was significantly expressed in the cytoplasm of extravillous trophoblast. With the progress of pregnancy, the expression of HPH-1 mRNA significantly increased (r = 0.616, P < 0.001). The expression of HPH-1 mRNA in the pre-eclampsia group was significantly lower than that in the late pregnancy group (P < 0.05). The expression of HPH-1 mRNA in the gestational hypertension group was also lower than that in the late pregnancy group, though the difference was not significant (P > 0.05). The weight of placentas in the pre-eclampsia group was significantly related to the expression of HPH-1 mRNA (r = 0.457, P < 0.05). Conclusion Low oxygen sensitivity of trophoblast (low expression of HPH-1 mRNA) may be an important cause of placental hypoxia (overactivation of the hypoxia reaction pathway) in patients with pre-eclampsia.
Jiang, WY, Liu, W, Ding, M & Luo, HW 2011, 'A user pairing algorithm for virtual MIMO systems', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 45, no. 3, pp. 350-353.
A novel user pairing algorithm for virtual MIMO systems was proposed based on semi-orthogonal user selection (SUS) with partial inverse selection. To address the problem that users with poor channel conditions suffer low throughput under the traditional SUS algorithm, this paper proposes a mechanism called 'inverse selection', derived from an analysis of the MMSE receiver. Simulation results show that the new algorithm substantially improves the throughput of users with poor SINR at a reasonable cost in overall throughput, thus achieving a profitable trade-off between system throughput and user fairness.
Wu, YL, Luo, HW, Liu, W, Ding, M & Wang, H 2011, 'Adaptive limited feedback for MU-MIMO system', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 45, no. 3, pp. 313-316.
The limited feedback in multi-user Multiple-Input Multiple-Output (MU-MIMO) systems is addressed in this paper. In order to reduce the feedback overhead in the uplink channel, we analyze the relation between user's feedback bits and rate, deduce the optimal solution for adaptive limited feedback, and propose a suboptimal feedback scheme which is more practical. In this scheme, users choose the feedback bits according to both their channel gains and the average gain of the system. Users with high SNR are assigned with more feedback bits than the low-SNR users so as to improve the feedback efficiency. Simulation results and analysis show that the proposed scheme is easy-to-implement and can enhance the throughput with no additional feedback overhead.
Liu, W, Zhou, XH & Niu, Q 2011, 'Evaluation and optimization of intersection process between voxelized mesh models and homocentric spheres', Journal of Shanghai Jiaotong University (Science), vol. 16, no. 4, pp. 474-478.
In the field of 3D model matching and retrieval, an effective method for feature extraction is the spherical harmonic transform or its variants, which operate on spherical images. However, obtaining spherical images from 3D models is very time-consuming, which greatly limits the response speed of such systems. In this paper, we propose a quantitative evaluation of the whole process and give a detailed two-sided analysis based on the comparative size of pixels and voxels. Experiments show that the resulting optimized parameters are fit for practical application and exhibit satisfactory performance.
Wu, YL, Luo, HW, Liu, W, Wang, HL & Zhou, XL 2010, 'Resource allocation in relay-based OFDM networks with proportional fairness', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 44, no. 9, pp. 1256-1260.
A new scheme was proposed that adaptively allocates resources in the downlink scenario of a cellular cooperative orthogonal frequency-division multiplexing (OFDM) network. The allocation scheme covers subcarrier and power allocation at both the base station and the relay nodes. Two subcarrier allocation algorithms were proposed: strict rate proportional fairness and relaxed subcarrier proportional fairness. The first aims to achieve perfect rate proportionality among users by allocating the best subcarrier to the user with the worst rate proportional fairness; the second relaxes rate proportional fairness into subcarrier proportional fairness so as to avoid allocating too many subcarriers to users with bad channel gains. After subcarrier allocation, power allocation based on the water-filling algorithm was employed to improve throughput. Simulation results and analysis show that both proposed algorithms can substantially increase throughput and achieve fine proportional fairness.
Lü, J, Luo, HW, Liu, W & Zhang, J 2010, 'A low-complexity user selection algorithm for multicell MIMO employing orthogonal space-division multiplexing', Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, vol. 44, no. 9, pp. 1266-1270.
A low-complexity user selection algorithm was proposed for a multicell multiple-input multiple-output (MIMO) system employing orthogonal space-division multiplexing (OSDM) with out-of-cooperative-cells interference suppression. The algorithm iteratively selects users through greedy search. In each selection step, the user who contributes most to the average cell capacity is added to the selected user set, and the number of activated users is adaptively adjusted to maximize the average cell capacity. Simulation results show that, compared with the user selection algorithm in a MIMO system employing block diagonalization (BD) with out-of-cooperative-cells interference suppression, the proposed algorithm achieves higher average cell capacity and supports more users at the same signal-to-noise ratio (SNR) and interference-to-noise ratio (INR).
Liu, W, He, Y & Zhou, X 2010, 'Spherical binary images retrieval with wavelet analysis', Journal of Information and Computational Science, vol. 7, no. 1, pp. 219-225.
Matching and retrieval of spherical binary images (SBIs) is not a mainstream field in digital image processing, yet on certain occasions it is necessary to measure the differences among such images. In this paper, we propose an effective algorithm based on a spherical wavelet scheme derived from the planar one. The feature vector of an SBI is constructed using the coefficients of its wavelet decomposition, and the comparison of two SBIs is obtained from the accumulative distance between their corresponding feature vectors. Experiments show that the algorithm performs satisfactorily in comparatively accurate comparison of SBIs.
A novel algorithm to voxelize 3D mesh models with gray levels is presented in this paper. The key innovation of our method is to decide the gray level of a voxel according to the total area of all surfaces it contains. During the preprocessing stage, a set of voxels in the extended bounding box of each triangle is established. We then traverse each triangle and compute the areas between it and its set of voxels one by one. Finally, each voxel is assigned a discrete gray level from 0 to 255. Experiments show that our algorithm produces considerably better results than previous ones and approximates the original models more accurately.
Liu, W, He, Y & Zhou, X 2009, 'Bridge the gap between PCA and isotropy in the application of 3D model retrieval', Journal of Computational Information Systems, vol. 5, no. 3, pp. 1269-1277.
PCA and isotropy are two important preprocessing stages in 3D model retrieval, as they greatly affect its performance. In this paper, we investigate the two preprocessing steps from a novel viewpoint and explore the essential relation between them. Based on this analysis, we obtain a way to bridge the gap between PCA and isotropy, allowing them to be treated as a single unit and applied as a whole in a 3D model retrieval system. Experiments show that our method can remarkably improve retrieval speed and performance for certain existing 3D descriptors. Copyright © 2009 Binary Information Press.
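As a rough illustration of the PCA half of this preprocessing, the following sketch pose-normalizes a 2-D point set; the 3-D case is analogous but needs a full eigendecomposition:

```python
import math

def pca_align_2d(points):
    # Translate the shape to its centroid, then rotate it so its principal
    # axis lies along x, making a descriptor orientation-independent.
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    pts = [(x - mx, y - my) for x, y in points]
    a = sum(x * x for x, _ in pts) / n          # cov_xx
    c = sum(y * y for _, y in pts) / n          # cov_yy
    b = sum(x * y for x, y in pts) / n          # cov_xy
    theta = 0.5 * math.atan2(2 * b, a - c)      # principal-axis angle
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t + y * sin_t, -x * sin_t + y * cos_t) for x, y in pts]

# Collinear points: after alignment they lie on the x axis.
aligned = pca_align_2d([(0, 0), (2, 1), (4, 2), (6, 3)])
```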
Fang, W, Tang, H & Liu, W 2007, 'Modeling and analyzing an inductive contactless power transfer system for artificial hearts using the generalized state space averaging method', Journal of Computational and Theoretical Nanoscience, vol. 4, no. 7-8, pp. 1412-1416.View/Download from: Publisher's site
This paper presents an approach for the modeling and simulation of an Inductive Contactless Power Transfer System (ICPTS) for artificial hearts based on the generalized state space averaging method. Using the proposed method, the dynamic model can be realized as a linear model. After introducing the proposed approach, its results are compared with those of the circuit topology model, which verifies that the generalized state space averaging method can accurately represent the ICPTS and can be used to design and develop its controller. Copyright © 2007 American Scientific Publishers. All rights reserved.
Liu, W & He, Y 2007, 'Survey of 3D model retrieval and mining system', Journal of Computational Information Systems, vol. 3, no. 3, pp. 925-934.
The recent explosion of 3D models has created the need for effective 3D model retrieval systems. Since the 1990s, much research has been carried out and various matching and retrieval algorithms have been proposed. In this paper, we summarize the state of the art in content-based 3D retrieval and mining systems. After introducing the system framework, we emphasize the descriptors, which are most vital to performance, before discussing other related technologies including pose normalization, semantic functions and relevance feedback. Finally, we give our conclusions and outline some challenges, together with potential future directions.
A 3D model retrieval method is proposed that employs multi-level spherical moment analysis and relies on voxelization and spherical mapping of the 3D models. For a given polygon-soup 3D model, a pose normalization step first aligns the model into a canonical coordinate frame, so that the shape representation is defined with respect to this orientation. We then rasterize its exterior surface into cubical voxel grids, and a series of homocentric spheres, centred on the centre of the voxel grids, cuts the grids into several spherical images. Finally, the moments of each sphere are computed, and the moments of all spheres constitute the descriptor of the model. Experiments showed that the Euclidean distance between such feature vectors distinguishes different 3D models well, and that a 3D model retrieval system based on this scheme yields satisfactory performance.
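A minimal sketch of the descriptor construction, using occupied-voxel counts as the per-shell "moment" (the paper's actual spherical moments are richer):

```python
import math

def shell_descriptor(voxels, center, n_shells, max_radius):
    # Cut the occupancy grid into homocentric spherical shells around the
    # centre and take one simple moment (occupied-voxel count) per shell.
    desc = [0.0] * n_shells
    for v in voxels:
        r = math.dist(v, center)
        shell = min(int(r / max_radius * n_shells), n_shells - 1)
        desc[shell] += 1.0
    return desc

def euclidean(a, b):
    # Models are compared by Euclidean distance of their descriptors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical occupied voxels of a tiny model.
d = shell_descriptor([(1, 0, 0), (0, 1, 0), (2, 0, 0)], (0, 0, 0), 3, 3.0)
```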
Liu, W, Yin, Z & Chawla, S 2019, 'Adversarial Attack, Defense, and Applications with Deep Learning Frameworks' in Alazab, M & Tang, M (eds), Deep Learning Applications for Cyber Security, Springer.
This book addresses questions of how deep learning methods can be used to advance cyber security objectives, including detection, modeling, monitoring and analysis of as well as defense against various threats to sensitive data and security ...
Liu, W & Quan, D 2018, 'Scalable Multimodal Factorization for Learning from Very Big Data' in Seng, KP, Ang, L-M, Gao, J & Liew, AW-C (eds), Multimodal Analytics for Next-Generation Big Data Technologies and Applications, Springer, Switzerland.View/Download from: UTS OPUS or Publisher's site
Recent technology advances in data acquisition bring to research communities new opportunities as well as new challenges. They enable researchers to acquire multiple modes of information about the real world. This multimodal data can be naturally and efficiently represented by a multi-way structure, the so-called tensor, which can be analyzed to extract the underlying core patterns of the observed data. Multiple datasets obtained from different acquisition methods and sensors are increasingly available, and these multiple modalities, captured in correlated tensors, provide a complete picture of the overall data patterns. Given large-scale datasets, existing distributed methods for joint analysis of multi-dimensional data generated from multiple sources decompose them on several computing nodes following the Map-Reduce paradigm. How to improve the performance of Map-Reduce based factorization algorithms as observed data gets bigger is still an open problem. This requires an even more efficient solution that not only reduces communication overhead but also optimizes factors faster.
In this book chapter, we provide readers with knowledge about Tensor Factorization and the joint analysis of several correlated tensors. We propose a Scalable Multimodal Factorization (SMF) algorithm for analyzing correlated big multimodal data. It has two key features that enable big multimodal data analysis. Firstly, SMF's design, based on Apache Spark, gives it the smallest communication cost. Secondly, its optimized solver converges faster. These key advantages reduce the factorization's time complexity, so SMF remains extremely efficient as the data grows. Confirmed by our experiments with 1 billion known entries, SMF outperforms the currently fastest Coupled Tensor Factorization and Tensor Factorization by 17.8 and 3.8 times, respectively. Compellingly, SMF achieves this speed with the highest accuracy.
Gong, Y, Li, Z, Zhang, J, Liu, W, Zheng, Y & Kirsch, C 2018, 'Network-wide Crowd Flow Prediction of Sydney Trains via customized Online Non-negative Matrix Factorization', The 27th ACM International Conference on Information and Knowledge Management, Turin, Italy.View/Download from: UTS OPUS
Verma, S, Liu, W, Wang, C & Zhu, L 2018, 'Hybrid networks: Improving deep learning networks via integrating two views of images', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 46-58.View/Download from: UTS OPUS or Publisher's site
© 2018, Springer Nature Switzerland AG. The principal component analysis network (PCANet) is an unsupervised parsimonious deep network that uses principal components as filters in its layers. It creates an amalgamated view of the data by transforming it into column vectors, which destroys the spatial structure while obtaining the principal components. In this research, we first propose a tensor-factorization based method referred to as the Tensor Factorization Network (TFNet). The TFNet retains the spatial structure of the data by preserving its individual modes, providing a fine-grained view of the data while extracting matrix factors. However, both methods are restricted to extracting a single representation and thus incur information loss. To alleviate this loss, we propose the Hybrid Network (HybridNet), which simultaneously learns filters from both views of the data. Comprehensive results on multiple benchmark datasets validate the superiority of integrating both views of the data in our proposed HybridNet.
Wang, S, Hu, L, Cao, L, Huang, X & Liu, W 2018, 'Attention-based transactional context embedding for next-item recommendation', Proceedings of the ... AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence, 32nd AAAI Conference on Artificial Intelligence 2018, New Orleans, United States, pp. 2532-2539.View/Download from: UTS OPUS
Recommending the next item to a user in a transactional context is practical yet challenging in applications such as marketing campaigns. Transactional context refers to the items observable in a transaction. Most existing transaction-based recommender systems (TBRSs) make recommendations by considering mainly the recently occurring items rather than all those observed in the current context. Moreover, they often assume a rigid order between items within a transaction, which is not always realistic. More importantly, a long transaction often contains many items irrelevant to the next choice, which tend to overwhelm the influence of the few truly relevant ones. We therefore posit that a good TBRS should not only consider all the observed items in the current transaction but also weight them by relevance, building an attentive context that outputs the proper next item with high probability. To this end, we design an effective attention-based transaction embedding model (ATEM) for context embedding that weights each observed item in a transaction without assuming an order. An empirical study on real-world transaction datasets proves that ATEM significantly outperforms state-of-the-art methods in terms of both accuracy and novelty.
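The core order-free attention step can be sketched as follows, with made-up two-dimensional item embeddings and relevance scores standing in for the learned quantities in ATEM:

```python
import math

def attentive_context(embeddings, relevance):
    # Softmax over relevance scores gives one weight per observed item;
    # the context vector is the weighted sum of the item embeddings,
    # independent of item order within the transaction.
    m = max(relevance)
    exps = [math.exp(r - m) for r in relevance]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(embeddings[0])
    return [sum(w * emb[d] for w, emb in zip(weights, embeddings))
            for d in range(dim)]

# Two hypothetical item embeddings observed in one transaction.
emb = [[1.0, 0.0], [0.0, 1.0]]
ctx = attentive_context(emb, relevance=[0.0, 0.0])   # equal relevance
```

With equal relevance the context is the plain average; a higher score shifts the context toward that item's embedding.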
Liu, W & Chivukula, A 2018, 'Discovering Granger-causal Features from Deep Learning Networks', The 31st Australasian Joint Conference on Artificial Intelligence, Springer, Wellington, New Zealand, pp. 692-692.View/Download from: UTS OPUS or Publisher's site
Do, Q, Liu, W & Chen, F 2017, 'Discovering both explicit and implicit similarities for cross-domain recommendation', Advances in Knowledge Discovery and Data Mining (LNAI), Pacific Asia Conference on Advances in Knowledge Discovery and Data Mining, Springer, Jeju, South Korea, pp. 618-630.View/Download from: UTS OPUS or Publisher's site
© 2017, Springer International Publishing AG. Recommender systems have become one of the most important techniques for businesses today. Improving their performance requires a thorough understanding of the latent similarities among users and items. This issue has become addressable given the recent abundance of datasets across domains. However, how to utilize this rich cross-domain information to improve recommendation performance remains an open problem. In this paper, we propose a cross-domain recommender that is the first algorithm to utilize both explicit and implicit similarities between datasets across sources for performance improvement. Validated on real-world datasets, our proposed method outperforms current cross-domain recommendation methods by more than a factor of two. More interestingly, we observe that both explicit and implicit similarities between datasets help to better suggest unknown information from cross-domain sources.
Braytee, A, Liu, W & Kennedy, PJ 2017, 'Supervised context-aware non-negative matrix factorization to handle high-dimensional high-correlated imbalanced biomedical data', Proceedings of the International Joint Conference on Neural Networks, 2017 International Joint Conference on Neural Networks, Anchorage, AK, USA, pp. 4512-4519.View/Download from: UTS OPUS or Publisher's site
© 2017 IEEE. Traditional feature selection techniques identify a subset of the most useful features and consider the rest unimportant, redundant or noisy. In the presence of highly correlated features, many variable selection methods treat correlated features as redundant and remove them. In this paper, a novel supervised feature selection algorithm, SCANMF, is proposed that jointly integrates correlation analysis and structural analysis of balanced supervised non-negative matrix factorization (NMF). Furthermore, an ℓ2,1-norm minimization constraint is incorporated into the objective function to guarantee sparsity in the rows of the feature matrix and to reduce noisy features. Our algorithm exploits discriminative information, feature combinations, and the original features in the context of a supervised NMF method, which is beneficial for both classification and interpretation. An efficient iterative algorithm is designed to solve the constrained optimization problem with guaranteed convergence. Finally, extensive experiments are conducted on 8 complex datasets. Promising results using multiple classifiers demonstrate the effectiveness and efficiency of our algorithm over state-of-the-art methods.
Chivukula, AS & Liu, W 2017, 'Adversarial learning games with deep learning models', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Anchorage, AK, USA, pp. 2758-2767.View/Download from: UTS OPUS or Publisher's site
© 2017 IEEE. Deep learning has been found to be vulnerable to changes in the data distribution. This means that inputs that have an imperceptibly and immeasurably small difference from training data correspond to a completely different class label in deep learning. Thus an existing deep learning network like a Convolutional Neural Network (CNN) is vulnerable to adversarial examples. We design an adversarial learning algorithm for supervised learning in general and CNNs in particular. Adversarial examples are generated by a game theoretic formulation on the performance of deep learning. In the game, the interaction between an intelligent adversary and deep learning model is a two-person sequential noncooperative Stackelberg game with stochastic payoff functions. The Stackelberg game is solved by the Nash equilibrium which is a pair of strategies (learner weights and genetic operations) from which there is no incentive for either learner or adversary to deviate. The algorithm performance is evaluated under different strategy spaces on MNIST handwritten digits data. We show that the Nash equilibrium leads to solutions robust to subsequent adversarial data manipulations. Results suggest that game theory and stochastic optimization algorithms can be used to study performance vulnerabilities in deep learning models.
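A toy best-response loop conveys the flavour of the learner-adversary game, though the paper's setting uses CNNs and genetic operations rather than the one-dimensional threshold learner assumed here:

```python
def fit_threshold(neg, pos):
    # Learner's best response: midpoint between the classes.
    return (max(neg) + min(pos)) / 2.0

def attack(pos, threshold, budget):
    # Adversary's best response: shift positives toward the boundary,
    # limited by a perturbation budget.
    return [max(x - budget, threshold) for x in pos]

neg, pos = [0.0, 1.0], [4.0, 5.0]
budget = 1.0
for _ in range(10):                     # alternate best responses
    t = fit_threshold(neg, pos)
    pos = attack(pos, t, budget)
```

Iterating the two best responses drives the pair toward a fixed point where neither side gains by deviating, mirroring the equilibrium sought in the Stackelberg formulation.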
Verma, S, Liu, W, Wang, C & Zhu, L 2017, 'Extracting highly effective features for supervised learning via simultaneous tensor factorization', Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI, San Francisco, USA, pp. 4995-4996.View/Download from: UTS OPUS
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Real-world data is usually generated over multiple time periods and associated with multiple labels, which can be represented as multiple labeled tensor sequences. These sequences are linked together, sharing some common features while exhibiting their own unique features. Conventional tensor factorization techniques can extract either common or unique features, but not both simultaneously. However, both types of features are important in many machine learning systems, as they inherently affect the systems' performance. In this paper, we propose a novel supervised tensor factorization technique that simultaneously extracts ordered common and unique features. Classification results using features extracted by our method on the CIFAR-10 database achieve significantly better performance than other factorization methods, illustrating the effectiveness of the proposed technique.
Luo, L, Liu, W, Koprinska, I & Chen, F 2015, 'DAAR: A discrimination-aware association rule classifier for decision support', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Conference on Big Data Analytics and Knowledge Discovery (DAWAK), Springer, Spain, pp. 47-68.View/Download from: UTS OPUS or Publisher's site
© Springer-Verlag GmbH Germany 2017. Undesirable correlations between sensitive attributes (such as race, gender or personal status) and the class label (such as a recruitment decision or credit card approval) may lead to biased decisions in data analytics. In this paper, we investigate how to build discrimination-aware models even when the available training set is intrinsically discriminating with respect to the sensitive attributes. We propose a new classification method called the Discrimination-Aware Association Rule classifier (DAAR), which integrates a new discrimination-aware measure with an association rule mining algorithm. We evaluate the performance of DAAR on three real datasets from different domains and compare DAAR with two non-discrimination-aware classifiers (a standard association rule classification algorithm and the state-of-the-art association rule algorithm SPARCCC), as well as with a recently proposed discrimination-aware decision tree method. Our comprehensive evaluation is based on three measures: predictive accuracy, discrimination score and inclusion score. The results show that DAAR effectively filters out discriminatory rules and decreases the discrimination severity on all datasets with insignificant impact on predictive accuracy. We also find that DAAR generates a small set of rules that are easy to understand and apply, helping users make discrimination-free decisions.
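A discrimination score of the kind DAAR filters on can be illustrated as the gap in favourable-outcome rates between the groups a rule covers; the exact measure used in the paper may differ from this sketch:

```python
def disc_score(records, sensitive, favourable):
    # Gap between the favourable-outcome rate of the unprotected group and
    # that of the protected group among the records a rule covers.
    prot = [r for r in records if r[sensitive]]
    rest = [r for r in records if not r[sensitive]]
    rate = lambda grp: sum(1 for r in grp if r[favourable]) / len(grp)
    return rate(rest) - rate(prot)

# Hypothetical records covered by one candidate rule.
records = [
    {"protected": True,  "approved": False},
    {"protected": True,  "approved": True},
    {"protected": False, "approved": True},
    {"protected": False, "approved": True},
]
score = disc_score(records, "protected", "approved")
# Rules whose score exceeds a user-set threshold would be filtered out.
```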
Braytee, A, Liu, W, Catchpoole, DR & Kennedy, PJ 2017, 'Multi-label feature selection using correlation information', International Conference on Information and Knowledge Management, Proceedings, ACM on Conference on Information and Knowledge Management, ACM, Singapore, Singapore, pp. 1649-1656.View/Download from: UTS OPUS or Publisher's site
© 2017 ACM. High-dimensional multi-labeled data contain instances where each instance is associated with a set of class labels and has a large number of noisy and irrelevant features. Feature selection has been shown to greatly improve classification performance in machine learning. In multi-label learning, selecting the discriminative features among multiple labels raises several challenges: interdependent labels, different instances sharing different label correlations, correlated features, and missing and flawed labels. This work is part of a project at The Children's Hospital at Westmead (TB-CHW), Australia, to explore the genomics of childhood leukaemia. In this paper, we propose CMFS (a Correlated- and Multi-label Feature Selection method), based on non-negative matrix factorization (NMF), for simultaneously performing feature selection and addressing the aforementioned challenges. Significantly, a major advantage of our research is the exploitation of the correlation information contained in features, labels and instances to select the relevant features among multiple labels. Furthermore, ℓ2,1-norm regularization is incorporated into the objective function to perform feature selection by imposing sparsity on the rows of the feature matrix. We employ CMFS to decompose the data and multi-label matrices into a low-dimensional space. To solve the objective function, an efficient iterative optimization algorithm is proposed with guaranteed convergence. Finally, extensive experiments are conducted on high-dimensional multi-labeled datasets. The experimental results demonstrate that our method significantly outperforms state-of-the-art multi-label feature selection methods.
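The ℓ2,1 norm and the row-sparsity-based selection it enables can be sketched as follows; `W` is a hypothetical feature weight matrix with one row per feature:

```python
import math

def l21_norm(W):
    # The l2,1 norm sums the l2 norms of the rows, so regularizing it
    # drives whole rows (whole features) to zero.
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

def select_features(W, k):
    # Surviving rows with the largest l2 norms mark the selected features.
    norms = [math.sqrt(sum(x * x for x in row)) for row in W]
    return sorted(range(len(W)), key=lambda i: -norms[i])[:k]

W = [[3.0, 4.0],   # strong feature (row norm 5)
     [0.0, 0.0],   # zeroed-out feature
     [1.0, 0.0]]   # weak feature (row norm 1)
```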
Do, D & Liu, W 2016, 'ASTEN: an Accurate and Scalable Approach to Coupled Tensor Factorization', Proceedings of the International Joint Conference on Neural Networks, International Joint Conference on Neural Networks, IEEE, Vancouver, Canada, pp. 99-106.View/Download from: UTS OPUS or Publisher's site
Coupled Tensor Factorization (CTF) has become one of the most popular methods for joint analysis of high-dimensional data generated from multiple sources. The goal of CTF is to factorize correlated datasets into latent factors efficiently. This research was undertaken with the particular goal of improving the accuracy of CTF, for which it is important to optimize the factorization of each single tensor among the coupled tensors. To achieve this, we introduce ASTEN, an Accurate and Scalable TENsor factorization method, in which the objective function is optimized with respect to every single tensor and matrix. Unlike algorithms with a traditional objective function that forces shared modes among tensors to have identical factors, ASTEN enables each tensor to have its own discriminative factor on the shared mode and is thus capable of finding an accurate approximation of every tensor. Furthermore, to make it highly scalable in handling big data, we design it to be fully distributed and scalable with respect to the number of tensors, their dimensions, their sizes and the number of data partitions. In addition, we provide a theoretical proof and experimental evidence that our algorithm converges to an optimum. Experiments on both real and synthetic datasets demonstrate that our proposed ASTEN outperforms existing alternative algorithms.
Rashidi, L, Kan, A, Bailey, J, Chan, J, Leckie, C, Liu, W, Rajasegarar, S & Ramamohanarao, K 2016, 'Node re-ordering as a means of anomaly detection in time-evolving graphs', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Riva del Garda, Italy, pp. 162-178.View/Download from: UTS OPUS or Publisher's site
© Springer International Publishing AG 2016. Anomaly detection is a vital task for maintaining and improving any dynamic system. In this paper, we address the problem of anomaly detection in time-evolving graphs, where graphs are a natural representation for data in many types of applications. A key challenge in this context is how to process large volumes of streaming graphs. We propose a pre-processing step before running any further analysis on the data, in which we permute the rows and columns of the adjacency matrix. This pre-processing step expedites graph mining techniques such as anomaly detection, PageRank, or graph coloring. In this paper, we focus on detecting anomalies in a sequence of graphs based on rank correlations of the reordered nodes. The merits of our approach lie in its simplicity and resilience to challenges such as unsupervised input, large volumes and high velocities of data. We evaluate the scalability and accuracy of our method on real graphs, where our method facilitates graph processing while producing more deterministic orderings. We show that the proposed approach is capable of revealing anomalies in a more efficient manner based on node rankings. Furthermore, our method can produce visual representations of graphs that are useful for graph compression.
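The rank-correlation detection step can be sketched with Spearman's no-ties formula over consecutive node orderings; the orderings themselves are assumed to come from the reordering pre-processing step:

```python
def spearman(ranks_a, ranks_b):
    # Spearman rank correlation, no-ties formula: 1 - 6*sum(d^2)/(n(n^2-1)).
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def anomalies(orderings, threshold=0.5):
    # Flag a snapshot when its node ordering correlates poorly with the
    # previous snapshot's ordering.
    return [t for t in range(1, len(orderings))
            if spearman(orderings[t - 1], orderings[t]) < threshold]

orderings = [
    [1, 2, 3, 4],   # node ranks at time 0
    [1, 2, 4, 3],   # small change
    [4, 3, 2, 1],   # ordering reversed -> anomalous snapshot
]
```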
Braytee, A, Liu, W & Kennedy, P 2016, 'A Cost-Sensitive Learning Strategy for Feature Extraction from Imbalanced Data', International Conference on Neural Information Processing, Springer International Publishing, Kyoto, Japan, pp. 78-86.View/Download from: UTS OPUS or Publisher's site
In this paper, novel cost-sensitive principal component analysis (CSPCA) and cost-sensitive non-negative matrix factorization (CSNMF) methods are proposed for handling the problem of feature extraction from imbalanced data. Highly imbalanced data mislead existing feature extraction techniques into producing biased features, which results in poor classification performance, especially for the minority class. To solve this problem, we propose a cost-sensitive learning strategy for feature extraction that uses the imbalance ratio of classes to discount the majority samples. This strategy is adapted to popular feature extraction methods such as PCA and NMF. The main advantage of the proposed methods is that they lessen the inherent bias of the extracted features toward the majority class in existing PCA and NMF algorithms. Experiments on twelve public datasets with different imbalance ratios show that the proposed methods outperformed state-of-the-art methods across multiple classifiers.
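The weighting strategy can be sketched as inverse-class-frequency sample weights applied before a standard extractor such as PCA or NMF; the exact discounting scheme in the paper may differ:

```python
from collections import Counter

def class_weights(labels):
    # Inverse-frequency weights: majority classes are discounted,
    # minority classes are boosted.
    counts = Counter(labels)
    n = len(labels)
    return {c: n / (len(counts) * counts[c]) for c in counts}

def reweight(samples, labels):
    # Scale each sample row by its class weight before feature extraction.
    w = class_weights(labels)
    return [[w[y] * x for x in row] for row, y in zip(samples, labels)]

labels = [0, 0, 0, 1]            # 3:1 imbalance
weights = class_weights(labels)
rw = reweight([[1.0], [1.0], [1.0], [3.0]], labels)
```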
Do, Q, Pham, T, Liu, W & Ramamohanarao, K 2016, 'WTEN: An Advanced Coupled Tensor Factorization Strategy for Learning from Imbalanced Data', Web Information Systems Engineering – WISE 2016, International Conference on Web Information Systems Engineering, Springer International Publishing, Shanghai, China, pp. 537-552.View/Download from: UTS OPUS or Publisher's site
Learning from imbalanced and sparse data in multi-mode and high-dimensional tensor formats efficiently is a significant problem in data mining research. On one hand, Coupled Tensor Factorization (CTF) has become one of the most popular methods for joint analysis of heterogeneous sparse data generated from different sources. On the other hand, techniques such as sampling, cost-sensitive learning, etc. have been applied to many supervised learning models to handle imbalanced data. This research focuses on studying the effectiveness of combining advantages of both CTF and imbalanced data learning techniques for missing entry prediction, especially for entries with rare class labels. Importantly, we have also investigated the implication of joint analysis of the main tensor and extra information. One of our major goals is to design a robust weighting strategy for CTF to be able to not only effectively recover missing entries but also perform well when the entries are associated with imbalanced labels. Experiments on both real and synthetic datasets show that our approach outperforms existing CTF algorithms on imbalanced data.
Liu, W 2016, 'Factorization of multiple tensors for supervised feature extraction', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer, Kyoto, Japan, pp. 406-414.View/Download from: UTS OPUS or Publisher's site
© Springer International Publishing AG 2016. Tensors are effective representations for complex and time-varying networks. The factorization of a tensor provides a high-quality low-rank compact basis for each dimension of the tensor, which facilitates the interpretation of important structures of the represented data. Many existing tensor factorization (TF) methods assume there is one tensor that needs to be decomposed into low-rank factors. In practice, however, data are usually generated in different time periods or with different class labels, and are represented by a sequence of multiple tensors associated with different labels. When one needs to analyse and compare multiple tensors, existing TF methods are unsuitable for discovering all potentially useful patterns, as they usually fail to discover either common or unique factors among the tensors: (1) if each tensor is factorized separately, the factor matrices will fail to explicitly capture the common information shared by different tensors, and (2) if tensors are concatenated together to form a larger 'overall' tensor which is then factorized, the intrinsic unique subspaces specific to each tensor will be lost. This issue arises mainly because existing tensor factorization methods handle data observations in an unsupervised way, considering only the features and not the labels of the data. To tackle this problem, we design a novel probabilistic tensor factorization model that takes both the features and the class labels of tensors into account, and produces informative common and unique factors of all tensors simultaneously. Experimental results on feature extraction in classification problems demonstrate the effectiveness of the factors discovered by our method.
Braytee, A, Catchpoole, DR, Kennedy, PJ & Liu, W 2016, 'Balanced Supervised Non-Negative Matrix Factorization for Childhood Leukaemia Patients', CIKM '16 Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, ACM International Conference on Information and Knowledge Management, ACM, Indianapolis, Indiana, USA.View/Download from: UTS OPUS or Publisher's site
Supervised feature extraction methods have received considerable attention in the data mining community due to their ability to improve the classification performance of unsupervised dimensionality reduction methods. With increasing dimensionality, several supervised feature extraction methods have been proposed to achieve a feature ranking, especially on microarray gene expression data. This paper has a twofold objective: it implements a balanced supervised non-negative matrix factorization (BSNMF) to handle the class imbalance problem in supervised non-negative matrix factorization, and it proposes an accurate gene ranking method based on BSNMF for microarray gene expression datasets. To the best of our knowledge, this is the first work to handle the class imbalance problem in supervised feature extraction methods. This work is part of a Human Genome project at The Children's Hospital at Westmead (TB-CHW), Australia. Our experiments indicate that the factorized components obtained by the supervised feature extraction approach have more classification capability than the unsupervised ones, but that performance drops drastically in the presence of class imbalance. Our proposed method outperforms the state-of-the-art methods and shows promise in overcoming this concern.
Chen, Q, Hu, L, Xu, J, Liu, W & Cao, L 2015, 'Document Similarity Analysis via Involving Both Explicit and Implicit Semantic Couplings', Proceedings of the 2015 IEEE International Conference on Data Science and Advanced Analytics, DSAA 2015, International Conference on Data Science and Advanced Analytics, IEEE, Paris.View/Download from: UTS OPUS or Publisher's site
Nguyen, H, Liu, W, Rivera, P & Chen, F 2016, 'TrafficWatch: Real-time traffic incident detection and monitoring using social media', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, New Zealand, pp. 540-551.View/Download from: Publisher's site
© Springer International Publishing Switzerland 2016. Social media has become a valuable source of real-time information. The Transport Management Centre (TMC) of the Australian state government of New South Wales has been collaborating with us to develop TrafficWatch, a system that leverages Twitter as a channel for transport network monitoring and incident and event management. The system utilises advanced web technologies and state-of-the-art machine learning algorithms. Crawled tweets are first filtered to incidents in Australia and then divided into groups by online clustering and classification algorithms. Findings from the use of TrafficWatch at TMC demonstrate that it has strong potential to report incidents earlier than other data sources, as well as to identify unreported incidents. TrafficWatch also shows its advantages in improving TMC's network monitoring capabilities for assessing the network impacts of incidents and events.
Cheema, P, Khoa, NLD, Alamdari, MM, Liu, W, Wang, Y, Chen, F & Runcie, P 2016, 'On Structural Health Monitoring Using Tensor Analysis and Support Vector Machine with Artificial Negative Data', CIKM '16 Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, ACM International Conference on Information and Knowledge Management, ACM, Indianapolis, Indiana, USA, pp. 1813-1822.View/Download from: UTS OPUS or Publisher's site
Structural health monitoring is a condition-based technology for monitoring infrastructure using sensing systems. Since we usually only have data associated with the healthy state of a structure, one-class approaches are more practical. However, tuning the parameters of one-class techniques (such as one-class Support Vector Machines) remains a relatively open and difficult problem. Moreover, in structural health monitoring, data are usually multi-way, highly redundant and correlated, and a matrix-based two-way approach cannot capture all these relationships and correlations together. Tensor analysis allows us to analyse multi-way vibration data jointly. We propose the use of tensor learning and support vector machines with artificial negative data, generated by density estimation techniques, for damage detection, localization and estimation in a one-class manner. The artificial negative data help with tuning SVM parameters and calibrating probabilistic outputs, which is not possible with a one-class SVM. The proposed method shows promising results on data from laboratory-based structures and on data collected from the Sydney Harbour Bridge, one of the most iconic structures in Australia. The method works better than the one-class approach and the approach without tensor analysis.
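The artificial-negative generation step can be sketched as uniform sampling from a slightly inflated bounding box of the healthy data; the paper uses density estimation techniques, so this uniform box is a simplification:

```python
import random

def artificial_negatives(healthy, n, margin=0.5, seed=0):
    # Draw uniform samples from a box slightly larger than the healthy
    # data's range; a binary SVM can then be trained and tuned against
    # them where one-class SVM tuning is hard.
    rng = random.Random(seed)
    dims = list(zip(*healthy))                  # per-dimension columns
    lo = [min(d) - margin for d in dims]
    hi = [max(d) + margin for d in dims]
    return [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n)]

# Hypothetical 2-D features from the healthy state of a structure.
healthy = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.2]]
negs = artificial_negatives(healthy, n=100)
```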
Wang, S, Liu, W, Wu, J, Cao, L, Meng, Q & Kennedy, PJ 2016, 'Training deep neural networks on imbalanced data sets', Proceedings of the International Joint Conference on Neural Networks, IEEE International Joint Conference on Neural Networks, IEEE, Vancouver, Canada, pp. 4368-4374.
Deep learning has become increasingly popular in both academia and industry in recent years. Domains including pattern recognition, computer vision, and natural language processing have witnessed the power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, and its performance on imbalanced data is not well examined. Imbalanced data sets are common in the real world and pose great challenges for classification tasks. In this paper, we focus on classifying imbalanced data sets with deep networks. Specifically, we propose a novel loss function called mean false error, together with its improved version, mean squared false error, for training deep networks on imbalanced data sets. The proposed method captures classification errors from the majority class and the minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach over conventional methods in classifying imbalanced data sets with deep neural networks.
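The core of the mean false error idea, averaging the error within each class before combining, can be sketched in a few lines. This is our own illustrative implementation, not the paper's code; the mean squared false error variant would square the two per-class terms before summing.

```python
def mean_false_error(y_true, y_pred):
    """Average the squared error separately within each class,
    then sum the two class means, so mistakes on the minority
    class are not swamped by the majority class."""
    neg = [(t - p) ** 2 for t, p in zip(y_true, y_pred) if t == 0]
    pos = [(t - p) ** 2 for t, p in zip(y_true, y_pred) if t == 1]
    fpe = sum(neg) / len(neg) if neg else 0.0  # mean error on the negative class
    fne = sum(pos) / len(pos) if pos else 0.0  # mean error on the positive class
    return fpe + fne
```

On a 99:1 imbalanced batch where a network predicts everything as the majority class, the plain mean squared error is only 0.01 while the mean false error is 1.0, so the minority-class mistakes still dominate the loss.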
Shao, J, Yin, J, Liu, W & Cao, L 2015, 'Mining Actionable Combined Patterns of High Utility and Frequency', Proceedings of the IEEE International Conference on Data Science and Advanced Analytics, IEEE International Conference on Data Science and Advanced Analytics, IEEE, Paris, pp. 1-10.
In recent years, the importance of identifying actionable patterns, from which decision-support actions can be derived, has become increasingly recognized. A typical shift is towards identifying high-utility rather than highly frequent patterns. Accordingly, High Utility Itemset (HUI) mining methods have become popular, as well as faster and more reliable than before. However, current research has focused on improving efficiency while ignoring the coupling relationships between items. It is important to study the item and itemset couplings inbuilt in the data. For example, the utility of one itemset might be below a user-specified threshold until an additional itemset joins it, and vice versa: an item's utility might only become high when another item joins in. Moreover, even when some genuinely high-utility itemsets are discovered, many redundant itemsets sharing the same item are often mined alongside them (e.g., if the utility of a diamond is high enough, all of its supersets are also HUIs). Such itemsets are not actionable, and sellers cannot make higher profits from marketing strategies built on such findings. To this end, we introduce a new framework for mining actionable high-utility association rules, called Combined Utility-Association Rules (CUAR), which finds itemset combinations with both high utility and strong association, incorporating item/itemset relations. Experiments on both real and synthetic datasets show the algorithm to be efficient.
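For readers unfamiliar with HUI mining, the utility threshold mentioned above refers to the standard itemset utility: the total profit an itemset contributes across the transactions that contain it. A minimal sketch, with a hypothetical data layout mapping each item to a (quantity, unit profit) pair:

```python
def itemset_utility(transactions, itemset):
    """Total utility of an itemset: sum, over transactions containing
    every item of the itemset, of quantity * unit profit for those
    items (standard HUI definition; data layout is illustrative)."""
    total = 0
    for t in transactions:  # t maps item -> (quantity, unit_profit)
        if all(i in t for i in itemset):
            total += sum(t[i][0] * t[i][1] for i in itemset)
    return total
```

An itemset is a HUI when this value meets a user-specified threshold; the coupling problem described above is that the utility of a combination can differ sharply from the utilities of its parts.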
Luo, L, Liu, W, Koprinska, I & Chen, F 2015, 'Discovering causal structures from time series data via enhanced granger causality', AI 2015: Advances in Artificial Intelligence (LNCS), Australasian Joint Conference on Artificial Intelligence, Springer, Canberra, Australia, pp. 365-378.
Granger causality has been applied to explore predictive causal relations among multiple time series in various fields. However, the existence of nonstationary distributional changes among the time series variables poses significant challenges. By analyzing a real dataset, we observe that factors such as noise, distribution changes and shifts increase the complexity of the modelling, and large errors often occur when the underlying distribution shifts with time. Motivated by this challenge, we propose a new regression model for causal structure discovery, a Linear Model with Weighted Distribution Shift (linear WDS), which improves the prediction accuracy of the Granger causality model by taking into account the weights of the distribution-shift samples and by optimizing a quadratic-mean-based objective function. The linear WDS is integrated into the Granger causality model to improve the inference of the predictive causal structure. The performance of the enhanced Granger causality model is evaluated on synthetic datasets and real traffic datasets, and the proposed model is compared with three regression-based Granger causality models (standard linear regression, robust regression and quadratic-mean-based regression). The results show that the enhanced model outperforms the others, especially when there are distribution shifts in the data.
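The Granger test underlying this model can be sketched as follows: y is said to be Granger-caused by x if adding lagged values of x reduces the residual error of predicting y from its own lags. The plain least-squares baseline below omits the paper's distribution-shift weighting and quadratic-mean objective; it is a sketch of the standard test only.

```python
import numpy as np

def granger_improvement(x, y, lags=2):
    """Difference in residual sum of squares between predicting y
    from its own lags only, and from its own lags plus lags of x.
    A clearly positive value suggests x Granger-causes y (the
    paper's weighted variant would add per-sample weights here)."""
    rows_own, rows_full, targets = [], [], []
    for t in range(lags, len(y)):
        own = [y[t - k] for k in range(1, lags + 1)]
        cross = [x[t - k] for k in range(1, lags + 1)]
        rows_own.append(own + [1.0])            # own lags + intercept
        rows_full.append(own + cross + [1.0])   # own + cross lags + intercept
        targets.append(y[t])

    def sse(rows, b):
        A, b = np.asarray(rows), np.asarray(b)
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        r = b - A @ coef
        return float(r @ r)

    return sse(rows_own, targets) - sse(rows_full, targets)
```

Because the own-lags model is nested inside the full model, the difference is non-negative; in practice it is compared against a threshold (e.g. via an F-test) rather than simply checked for positivity.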
Shao, J, Yin, J, Liu, W & Cao, L 2015, 'Actionable Combined High Utility Itemset Mining', AAAI'15 Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Austin, Texas, USA, pp. 4206-4207.
Luo, L, Liu, W, Koprinska, I & Chen, F 2015, 'Discrimination-aware association rule mining for unbiased data analytics', Big Data Analytics and Knowledge Discovery: 17th International Conference, DaWaK 2015, Valencia, Spain, September 1-4, 2015, Proceedings, International Conference on Big Data Analytics and Knowledge Discovery, Springer International Publishing, Valencia, Spain, pp. 108-120.
A discriminatory dataset refers to a dataset with undesirable correlation between sensitive attributes and the class label, which often leads to biased decision making in data analytics processes. This paper investigates how to build discrimination-aware models even when the available training set is intrinsically discriminating based on some sensitive attributes, such as race, gender or personal status. We propose a new classification method called Discrimination-Aware Association Rule classifier (DAAR), which integrates a new discrimination-aware measure and an association rule mining algorithm. We evaluate the performance of DAAR on three real datasets from different domains and compare it with two non-discrimination-aware classifiers (a standard association rule classification algorithm and the state-of-the-art association rule algorithm SPARCCC), and also with a recently proposed discrimination-aware decision tree method. The results show that DAAR is able to effectively filter out the discriminatory rules and decrease the discrimination on all datasets with insignificant impact on the predictive accuracy.
Khoa, NLD, Zhang, B, Wang, Y, Liu, W, Chen, F, Mustapha, S & Runcie, P 2015, 'On Damage Identification in Civil Structures Using Tensor Analysis', Advances in Knowledge Discovery and Data Mining: 19th Pacific-Asia Conference Proceedings, Part 1, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Ho Chi Minh City, Vietnam, pp. 459-471.
Jiang, X, Liu, W, Cao, L & Long, G 2015, 'Coupled Collaborative Filtering for Context-aware Recommendation', AAAI Publications, Twenty-Ninth AAAI Conference on Artificial Intelligence, Student Abstracts, AAAI Conference on Artificial Intelligence, AAAI, Austin, Texas, USA, pp. 4172-4173.
Context-aware features have been widely recognized as important factors in recommender systems. However, as a major technique in recommender systems, traditional Collaborative Filtering (CF) does not provide a straightforward way of integrating context-aware information into personal recommendation. We propose a Coupled Collaborative Filtering (CCF) model to measure contextual information and use it to improve recommendations. In the proposed approach, coupled similarity is computed from inter-item, intra-context and inter-context interactions among item, user and context-aware factors. Experiments based on different types of CF models demonstrate the effectiveness of our design.
Modeling complex 3D details is a time-consuming task that requires considerable experience. In this paper, an effective algorithm is proposed to transfer and reuse 3D geometric details using cubic B-spline wavelets. First, surfaces from both the source model and the destination model are represented with cubic B-spline wavelets and decomposed into a base shape and several hierarchies of details. Since both models share the same wavelet basis, they can be combined into one model at different hierarchies, and the details are thereby transferred onto the new 3D model. Experiments show that this technique obtains the desired resultant models.
Liu, W, Zhou, XH & Niu, Q 2014, 'Effective cost estimation model for injection mold design base on time schedule & management system', Advanced Materials Research, pp. 1528-1531.
Cost estimation for mold design is a hard problem because of the complexity and variety of the products. We assume that cost is largely determined by design time, since labor costs are increasingly important. In this paper, we propose a platform that records the time consumed in all design activities, from which several statistical and analysis reports are generated. Based on these data, managers can track all process details in the design groups. Moreover, design bottlenecks can be foreseen and corresponding measures taken in advance. The accumulated data can also be reused for later quotations on similar designs. Several companies have deployed our system and confirmed that it greatly helps cost estimation and control.
Liu, W, Lee, D & Rao, K 2014, 'Using local information to significantly improve classification performance', CIKM 2014 - Proceedings of the 2014 ACM International Conference on Information and Knowledge Management, pp. 1947-1950.
In this research we propose to derive new features based on data samples' local information, with the aim of improving the performance of general supervised learning algorithms. The creation of new features is inspired by the measure of average precision, which is known to be a robust measure that is insensitive to the number of retrieved items in information retrieval. We use the idea of average precision to weight the neighbours of an instance and show that this weighting strategy is insensitive to the number of neighbours in the locality. The information captured in the new features allows a general classifier to learn additional useful peripheral knowledge that helps build effective classification models. We comprehensively evaluate our method on real datasets, and the results show substantial improvements in the performance of classifiers including SVM, Bayesian networks, random forest, and C4.5.
Liu, W, Sarda, A, Chen, F & Geers, G 2014, 'Forecasting changes of traffic flow caused by road incidents', 21st World Congress on Intelligent Transport Systems, ITSWC 2014: Reinventing Transportation in Our Connected World.
This paper explores the potential of supervised machine learning techniques for forecasting changes in traffic flow caused by road incidents, based on incident features. Data fusion approaches are applied to a high-quality SCATS dataset measuring traffic flow in a major Australian city, and to an incident log covering four months of road incidents. Based on incident features, a range of both prevalent and advanced machine learning algorithms are applied to these data, and their accuracies are evaluated. We then examine the effectiveness of such models in categorizing changes in traffic flow as either trivial or non-trivial in the extent of their response to incidents. The models are promising, correctly predicting with more than 70% accuracy whether a change in traffic flow will be major. This has significant implications for determining the optimal allocation of resources for both road traffic control and incident response units.
Chan, J, Vinh, NX, Liu, W, Bailey, J, Leckie, CA, Ramamohanarao, K & Pei, J 2014, 'Structure-aware distance measures for comparing clusterings in graphs', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 362-373.
Clustering in graphs aims to group vertices with similar patterns of connections. Applications include discovering communities and latent structures in graphs. Many algorithms have been proposed to find graph clusterings, but an open problem is the need for suitable comparison measures to quantitatively validate these algorithms, to perform consensus clustering, and to track evolving (graph) clusters across time. To date, most comparison measures have focused on comparing the vertex groupings and completely ignore differences in the structural approximations in the clusterings, which can lead to counter-intuitive comparisons. In this paper, we propose new measures that account for differences in these approximations. We focus on comparison measures for two important graph clustering approaches, community detection and blockmodelling, and propose measures that work for weighted (and unweighted) graphs.
Wang, F, Liu, W & Chawla, S 2014, 'On Sparse Feature Attacks in Adversarial Learning', Proceedings - IEEE International Conference on Data Mining, ICDM, IEEE International Conference on Data Mining, IEEE, Shenzhen, China, pp. 1013-1018.
Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include classification for detecting spam email, network intrusion detection and credit card scoring. As the gamut of application domains of machine learning grows, the possibility and opportunity for adversarial behavior will only increase. Until now, the standard assumption in modeling adversarial behavior has been to empower an adversary to change all features of the classifier at will, with the adversary paying a cost proportional to the size of the attack. We refer to this form of adversarial behavior as a dense feature attack. However, the aim of an adversary is not just to subvert a classifier but to transform the data in such a way that spam continues to appear like spam to the user as much as possible. We demonstrate that an adversary achieves this objective by carrying out a sparse feature attack. We design an algorithm showing how a classifier should be constructed to be robust against sparse adversarial attacks. Our main insight is that sparse feature attacks are best defended by classifiers which use l1 regularizers.
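As an illustration of the final insight, here is a minimal l1-regularized logistic regression trained by proximal gradient descent (soft-thresholding). This is a generic sketch of sparse classification, not the paper's algorithm; the function name, hyperparameters and training scheme are our own assumptions.

```python
import numpy as np

def l1_logistic(X, y, lam=0.05, lr=0.1, iters=500):
    """Logistic regression with an l1 penalty, fitted by proximal
    gradient descent. The l1 penalty drives irrelevant weights to
    exactly zero, the sparsity that the defence against sparse
    feature attacks relies on (illustrative sketch only)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / n                  # gradient of mean log-loss
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w
```

On data where only one feature carries the label, the fitted weight vector concentrates on that feature and zeroes out most of the rest.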
Liu, W, Wang, Z, Chen, G & Sheng, L 2013, 'Passivity-based observer design and robust output feedback control for nonlinear uncertain systems', 2013 9th Asian Control Conference, ASCC 2013.
This paper presents a new method for passivity-based observer design and robust output feedback control of a class of nonlinear uncertain systems, where the uncertainties satisfy Lipschitz-type constraints. First, the passivity condition that ensures the existence of an observer is expressed in terms of a linear matrix inequality (LMI). Then, for the output feedback control, a sufficient condition in terms of an LMI is given for input-to-state stability (ISS) with respect to the observer error. It is also shown that the observer error decays to zero, so asymptotic stability is guaranteed by ISS. The proposed method is much less conservative. Finally, a simulation example illustrates the effectiveness of the proposed results.
Yang, P, Liu, W, Zhou, BB, Chawla, S & Zomaya, AY 2013, 'Ensemble-based wrapper methods for feature selection and class imbalance learning', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 544-555.
The wrapper feature selection approach is useful in identifying informative feature subsets from high-dimensional datasets. Typically, an inductive algorithm "wrapped" in a search algorithm is used to evaluate the merit of the selected features. However, significant bias may be introduced when dealing with a highly imbalanced dataset: the selected features may favour one class while being less useful for the other. In this paper, we propose an ensemble-based wrapper approach for feature selection from data with highly imbalanced class distributions. The key idea is to create multiple balanced datasets from the original imbalanced dataset via sampling, and subsequently evaluate feature subsets using an ensemble of base classifiers, each trained on a balanced dataset. The proposed approach provides a unified framework that incorporates ensemble feature selection and multiple sampling in a mutually beneficial way. The experimental results indicate that, overall, features selected by the ensemble-based wrapper are significantly better than those selected by wrappers with a single inductive algorithm in imbalanced data classification.
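The resampling step of this framework can be sketched directly: build several balanced training sets by undersampling the majority class, then train one base classifier per set and aggregate their evaluations of each feature subset. The helper below covers only the sampling step; the function name is ours and it assumes the negative class is the majority.

```python
import random

def balanced_samples(pos, neg, n_sets, seed=0):
    """Create n_sets balanced datasets by keeping every minority
    (pos) sample and drawing an equal-sized random subset of the
    majority (neg) samples for each set (sketch of the idea)."""
    rng = random.Random(seed)
    return [pos + rng.sample(neg, len(pos)) for _ in range(n_sets)]
```

Each returned set is balanced, so a base classifier trained on it is not biased toward the majority class, and feature merits can be averaged across the ensemble.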
Chan, J, Liu, W, Kan, A, Leckie, C & Bailey, J 2013, 'Discovering latent blockmodels in sparse and noisy graphs using non-negative matrix factorisation', International Conference on Information and Knowledge Management, Proceedings, pp. 811-816.
Blockmodelling is an important technique in social network analysis for discovering the latent structure in graphs. A blockmodel partitions the set of vertices in a graph into groups, where there are either many edges or few edges between any two groups. For example, in the reply graph of a question-and-answer forum, blockmodelling can identify the group of experts by their many replies to questioners, and the group of questioners by their lack of replies among themselves but many replies from experts. Non-negative matrix factorisation has been successfully applied to many problems, including blockmodelling. However, existing approaches can fail to discover the true latent structure when the graphs have strong background noise or are sparse, which is typical of most real graphs. In this paper, we propose a new non-negative matrix factorisation approach that can discover blockmodels in sparse and noisy graphs. Using synthetic and real datasets, we show that our approach has much higher accuracy with comparable running times.
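For background, the classical multiplicative-update NMF that such approaches build on factorises a non-negative adjacency matrix A into non-negative factors W and H; blockmodelling variants interpret W as soft group memberships. The generic Lee-Seung sketch below is not the paper's noise-robust algorithm, only the standard starting point.

```python
import numpy as np

def nmf(A, k, iters=200, seed=0):
    """Plain multiplicative-update NMF: A ~= W @ H with W, H >= 0.
    On an adjacency matrix, the rows of W act as soft group
    memberships (generic sketch, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On a clean two-block adjacency matrix this recovers the block structure almost exactly; the paper's contribution is making such factorisations reliable when the graph is sparse and noisy.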
Wang, Y, Wang, Z, Ni, M & Liu, W 2012, 'Networked control for a class of singular systems with time-varying transmission period', Proceedings of the 2012 24th Chinese Control and Decision Conference, CCDC 2012, pp. 3575-3579.
In this paper, a networked control problem is addressed for a class of singular systems with time-varying transmission periods. The singular networked control system is transformed into an asynchronous dynamical system in the case of no network-induced delay and no data packet dropout, and the time-varying transmission period is treated as a time-varying uncertain parameter. The feedback stabilization of the asynchronous dynamical system is then studied using Lyapunov methods and linear matrix inequality techniques. Design methods for a controller and a sufficient condition for stabilization of the system are presented. Finally, a numerical example illustrates the effectiveness of the proposed method.
Liu, W, Chawla, S, Bailey, J, Leckie, C & Ramamohanarao, K 2012, 'An efficient adversarial learning strategy for constructing robust classification boundaries', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 649-660.
Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in some adversarial settings the test set can be deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to avoid word-based features embedded in a spam filter. Recent research has modeled interactions between a data miner and an adversary as a sequential Stackelberg game, and solved its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. In this paper, however, we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary, from which perspective a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibrium results.
Liu, W, Kan, A, Chan, J, Bailey, J, Leckie, C, Pei, J & Kotagiri, R 2012, 'On compressing weighted time-evolving graphs', ACM International Conference Proceeding Series, pp. 2319-2322.
Existing graph compression techniques mostly focus on static graphs. However, for many practical graphs, such as social networks, the edge weights change frequently over time. This raises the question of how to compress dynamic graphs while maintaining most of their intrinsic structural patterns at each time snapshot. In this paper we show that the encoding cost of a dynamic graph is proportional to the heterogeneity of a three-dimensional tensor that represents the dynamic graph. We propose an effective algorithm that compresses a dynamic graph by reducing the heterogeneity of its tensor representation, while maintaining a maximum lossy compression error at any time stamp of the dynamic graph. The bounded compression error means that compressed graphs retain good approximations of the original edge weights, and hence properties of the original graph (such as shortest paths) are well preserved. To the best of our knowledge, this is the first work that compresses weighted dynamic graphs with a bounded lossy compression error at any time snapshot.
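The bounded-error guarantee can be illustrated with the simplest possible lossy scheme: snap each edge weight to a uniform grid whose spacing is twice the allowed error. This toy quantiser is ours, not the paper's heterogeneity-reducing algorithm; it only shows what "bounded lossy compression error" means for reconstructed weights.

```python
def quantize_weights(weights, max_err):
    """Lossily compress edge weights by snapping each to a grid of
    spacing 2 * max_err, so the reconstruction error of every
    weight is bounded by max_err (toy illustration only)."""
    step = 2 * max_err
    return [round(w / step) * step for w in weights]
```

Because every reconstructed weight is within max_err of the original, derived quantities such as shortest-path lengths degrade gracefully rather than arbitrarily.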
Chan, J, Liu, W, Leckie, C, Bailey, J & Ramamohanarao, K 2012, 'SeqiBloc: Mining multi-time spanning blockmodels in dynamic graphs', Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 651-659.
Blockmodelling is an important technique for decomposing graphs into sets of roles. Vertices playing the same role have similar patterns of interaction with vertices in other roles. These roles, along with the role-to-role interactions, can succinctly summarise the underlying structure of the studied graphs. As the underlying graphs evolve over time, it is important to study how their blockmodels evolve too. This enables us to detect role changes across time, to detect different patterns of interaction, for example weekday and weekend behaviour, and to study how the structure of the underlying dynamic graph evolves. To date, there has been limited research on dynamic blockmodels, and existing work focuses on smoothing role changes between adjacent time instances. However, this approach can overfit during stationary periods, where the underlying structure does not change but there is random noise in the graph. Therefore, an approach is needed that (a) finds blockmodels across spans of time and (b) finds the stationary periods. In this paper, we propose an information-theoretic framework, SeqiBloc, combined with a change-point detection approach to achieve both. In addition, we propose new vertex equivalence definitions that include time, and show how they relate to our information-theoretic approach. We demonstrate their usefulness and superior accuracy over existing work on synthetic and real datasets.
Dong, M, Jin, H, Sun, G, Wang, X, Liu, W & Wang, X 2011, 'Non-cooperative game based social welfare maximizing bandwidth allocation in WSNs', GLOBECOM - IEEE Global Telecommunications Conference.
In this paper, we deal with possible data transmission congestion at the sink node in wireless sensor networks (WSNs). We consider a scenario in which all sensor nodes have a certain amount of storage space and acquire data from their surroundings at heterogeneous speeds. Because the receiving bandwidth of the sink node is limited, a proper bandwidth allocation mechanism should be implemented to avoid congestion or data loss due to the overflow of some sensor nodes. To address this problem, we first design a novel bandwidth allocation mechanism, SWM, that maximizes the social utility, an indicator of every sensor node's satisfaction degree and of social fairness. Furthermore, we model the allocation process under SWM as a non-cooperative game and derive its unique Nash equilibrium. The uniqueness of the equilibrium demonstrates that the network will approach a fair and stable state.
Zou, J, Liu, W, Ding, M, Luo, H & Yu, H 2011, 'Transceiver design for AF MIMO two-way relay systems with imperfect channel estimation', GLOBECOM - IEEE Global Telecommunications Conference.
We address the transceiver design problem for amplify-and-forward (AF) MIMO two-way relay systems with imperfect channel estimation. Since only estimated channel state information (CSI) is available, the self-interference (SI) cannot be completely cancelled at the destination nodes. With both channel estimation errors and residual self-interference considered, we propose two robust schemes to minimize the average sum mean squared error (MSE), averaged over the channel uncertainties. In the iterative scheme, the relay precoder and destination receivers are alternately optimized until convergence. In the constrained-structure scheme, a specific structure is imposed on the relay precoder to reduce complexity, and by solving a relaxed optimization problem we derive a closed-form solution for the constrained-structure precoder. Simulation results show that the proposed iterative scheme provides better robustness against channel uncertainties than the non-robust iterative scheme, and that the constrained-structure scheme achieves performance close to that of the proposed iterative scheme.
Wang, ZM & Liu, W 2011, 'Output feedback networked control of singular perturbation', Proceedings of the World Congress on Intelligent Control and Automation (WCICA), pp. 645-650.
In this paper, observer-based output feedback control of a plant with a singular perturbation structure is investigated, where the feedback loop is closed over a network with a fixed sampling period. In the model-based framework, if the model plant is strongly controllable and strongly observable, the singular perturbation characterization of the overall system is preserved. Based on the approximated slow and fast subsystems of the overall system on each sampling interval, a lower-order test matrix for stability on the whole time interval is obtained. The globally exponential stability of the overall system via a network is then shown for a sufficiently small perturbation parameter ε > 0. Finally, a numerical simulation illustrates the result.
Lightweighting large 3D models is a practical need in CAD development and applications. In this paper, a lightweighting algorithm based on intelligent geometric information reduction for large assemblies is proposed, generating simplified models by using the assembly structure. The algorithm reduces the size of a 3D CAD model at three levels: component, feature, and surface geometry. Key strategies such as component suppression and surface treatments based on hollow suturation and shell extraction are used in the processing. Experiments show that the algorithm achieves satisfactory results with good computing performance. The lightweighting technology can be widely used in areas such as collaborative design and movement simulation.
Liu, W, Zhou, X & Niu, Q 2011, 'Embedded graphics generation and display system on aircraft platform', 2011 International Conference on Computer Science and Service System, CSSS 2011 - Proceedings, pp. 3221-3224.
Graphics generation and display play a very important role in the functions of equipment on aircraft platforms. Effective, accurate and real-time display of graphical flight parameters is an indispensable requirement for high-performance aircraft. In this paper, a graphics function library based on the OpenGL ES specification is introduced in detail; it is designed to run on given special-purpose embedded graphics hardware. With it, developers can simply write OpenGL-like C code on embedded graphics platforms, avoiding the disadvantages of coding directly with machine instructions. Experiments show that the graphics library satisfies the various requirements of the aircraft platform, and it serves as an instructive reference for analogous embedded graphics systems.
Wang, Z & Liu, W 2011, 'Stabilization of uncertain networked control system with quantized feedback', Proceedings of the 30th Chinese Control Conference, CCC 2011, pp. 4550-4554.
This paper studies the quantized feedback stabilization problem of uncertain networked control systems. The uncertain controlled plant and its minimized system, as a model plant, are connected over a network using the model-based method. Quantization values can be adequately measured by designing an effective quantization algorithm. The paper obtains a sufficient condition for globally exponential stabilization under certain conditions. A simulation example is presented to illustrate the results.
Pang, LX, Chawla, S, Liu, W & Zheng, Y 2011, 'On mining anomalous patterns in road traffic streams', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 237-251.
A large number of taxicabs in major metropolitan cities are now equipped with GPS devices. Since taxis are on the road nearly twenty-four hours a day (with drivers changing shifts), they can act as reliable sensors to monitor the behavior of traffic. In this paper we use GPS data from taxis to monitor the emergence of unexpected behavior in the Beijing metropolitan area. We adapt likelihood ratio tests (LRT), which have previously been used mostly in epidemiological studies, to describe traffic patterns. To the best of our knowledge, the use of LRT in the traffic domain is not only novel but also results in very accurate and rapid detection of anomalous behavior.
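In its simplest form, a likelihood ratio test for count anomalies compares the likelihood of an observed count under an elevated Poisson rate (maximised at the observation itself) against a historical baseline rate. The one-region sketch below is our own reduction of the idea; the paper applies LRT over spatio-temporal regions of taxi GPS data.

```python
import math

def poisson_log_lrt(count, baseline):
    """Log-likelihood ratio statistic for 'rate > baseline' from a
    single Poisson observation; 0 when the count is not elevated.
    Larger values indicate a more anomalous traffic count."""
    if count <= baseline:
        return 0.0
    # Under the alternative, the MLE of the rate is the count itself:
    # log[P(count | rate=count) / P(count | rate=baseline)]
    return count * math.log(count / baseline) - (count - baseline)
```

Regions whose statistic exceeds a threshold (calibrated, e.g., by a chi-square approximation or by Monte Carlo) are flagged as anomalous.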
Liu, W & He, Y 2007, 'Spherical binary images matching', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 146-149.
In this paper a novel algorithm is presented to match spherical binary images by measuring the maximal degree of superposition between them. Experiments show that our method matches spherical binary images more accurately.