Luo, X, Yuan, Y, Li, Z, Zhu, M, Xu, Y, Chang, L, Sun, X & Ding, Z 2019, 'FBVA: A Flow-Based Visual Analytics Approach for Citywide Crowd Mobility', IEEE Transactions on Computational Social Systems, vol. 6, no. 2, pp. 277-288.
Analyzing the structure of crowd mobility at the city level is challenging due to the complexity of crowd movement and the dynamic changes generated by social activities over time. These structures, defined as high-dimensional mobility structures (HMSs), contain spatiotemporal information and are simultaneously influenced by the geographical distributions and daily activities of citywide crowds. However, little work has been dedicated to depicting and analyzing these structures, mainly due to the lack of effective models. In this paper, we propose to model crowd mobility as a dynamical system and characterize the irregular mobility data with a novel local coherence of sparse field (LCSF) algorithm. The proposed algorithm makes it possible to measure the separation behavior of trajectories in an irregular and sparse topology network. The detected HMSs, referred to as the local separation measure of the LCSF, divide the geographical urban areas into distinct functional regions over time. We design and implement a visual analytics system to facilitate situation-aware analysis of large volumes of crowd mobility data and the associated social behaviors. Case studies based on a real-world data set demonstrate the effectiveness of the proposed approach.
Gu, T, Zhu, M, Chen, W, Huang, Z, Maciejewski, R & Chang, L 2018, 'Structuring Mobility Transition with an Adaptive Graph Representation', IEEE Transactions on Computational Social Systems, vol. 5, no. 4, pp. 1121-1132.
Modeling human mobility is a critical task in fields such as urban planning, ecology, and epidemiology. Given the widespread use of mobile phones, there is an abundance of data that can be used to create models of high reliability. Existing techniques can reveal the macropatterns of crowd movement or analyze the trajectory of a person; however, they typically focus on geographical characteristics. This paper presents a graph-based approach for structuring crowd mobility transitions over multiple granularities in the context of social behavior. The key to our approach is an adaptive data representation, the adaptive mobility transition graph (AMTG), which is generated globally from citywide human mobility data by defining the temporal trends of human mobility and the interleaved transitions between different mobility patterns. We describe the design, creation, and manipulation of the AMTG and introduce a visual analysis system that supports the multifaceted exploration of citywide human mobility patterns.
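The core object the abstract describes, a mobility transition graph, can be sketched at its simplest as edge counts between regions visited in sequence. The function name and data shapes below are our own assumptions for illustration; the sketch deliberately omits the AMTG's adaptive, multi-granularity construction.

```python
from collections import Counter

def transition_graph(trajectories):
    """Build a basic mobility transition graph from region-ID sequences.

    trajectories: list of sequences, one per person, each a list of
    region IDs visited in temporal order. Returns a dict mapping
    (origin, destination) edges to transition counts.
    """
    edges = Counter()
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            if a != b:  # keep only inter-region moves, drop dwelling
                edges[(a, b)] += 1
    return dict(edges)

# Example: two commuters moving between home (H), transit (T), work (W)
g = transition_graph([["H", "T", "W", "W"], ["H", "T", "T", "W"]])
# g == {("H", "T"): 2, ("T", "W"): 2}
```

Edge weights from such a graph are the starting point for the richer temporal and pattern-level structure the paper builds on top.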
Chen, W, Huang, Z, Wu, F, Zhu, M, Guan, H & Maciejewski, R 2018, 'VAUD: A Visual Analysis Approach for Exploring Spatio-Temporal Urban Data', IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 9, pp. 2636-2648.
Urban data is massive, heterogeneous, and spatio-temporal, posing a substantial challenge for visualization and analysis. In this paper, we design and implement a novel visual analytics approach, Visual Analyzer for Urban Data (VAUD), that supports the visualization, querying, and exploration of urban data. Our approach allows for cross-domain correlation across multiple data sources by leveraging spatial-temporal and social interconnectedness features. With our approach, the analyst is able to select, filter, and aggregate across multiple data sources and extract information that would be hidden in any single data subset. To illustrate the effectiveness of our approach, we provide case studies on a real urban dataset that contains the cyber, physical, and social information of 14 million citizens over 22 days.
Wu, F, Zhu, M, Wang, Q, Zhao, X, Chen, W & Maciejewski, R 2017, 'Spatial-temporal visualization of city-wide crowd movement', Journal of Visualization, vol. 20, no. 2, pp. 183-194.
Zhu, M, Pan, P, Chen, W & Yang, Y, 'DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis'.
In this paper, we focus on generating realistic images from text descriptions. Current methods first generate an initial image with rough shape and color, and then refine the initial image to a high-resolution one. Most existing text-to-image synthesis methods have two main problems. (1) These methods depend heavily on the quality of the initial images; if the initial image is not well initialized, the following processes can hardly refine it to a satisfactory quality. (2) Each word contributes a different level of importance when depicting different image contents; however, existing image refinement processes use an unchanged text representation. In this paper, we propose the Dynamic Memory Generative Adversarial Network (DM-GAN) to generate high-quality images. The proposed method introduces a dynamic memory module to refine fuzzy image contents when the initial images are not well generated. A memory writing gate is designed to select the important text information based on the initial image content, which enables our method to accurately generate images from the text description. We also utilize a response gate to adaptively fuse the information read from the memories with the image features. We evaluate the DM-GAN model on the Caltech-UCSD Birds 200 dataset and the Microsoft Common Objects in Context dataset. Experimental results demonstrate that our DM-GAN model performs favorably against the state-of-the-art approaches.
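The two gates described in the abstract can be illustrated with a minimal sketch. All shapes, parameter names, and the sigmoid gating form below are assumptions for illustration only, not the paper's actual architecture, which operates on convolutional feature maps within a GAN refinement stage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_writing_gate(word_feats, img_feat, A, B):
    """Gate word importance against the initial image content.

    word_feats: (num_words, d) text features; img_feat: (d,) image feature.
    A, B: (d,) hypothetical projection vectors. Each word's gate decides
    how much of that word vs. the image is written to its memory slot.
    """
    g = sigmoid(word_feats @ A + img_feat @ B)  # (num_words,) importance
    img_rows = np.tile(img_feat, (word_feats.shape[0], 1))
    return g[:, None] * word_feats + (1 - g)[:, None] * img_rows

def response_gate(read_feat, img_feat, W, b):
    """Adaptively fuse a memory read-out with the image feature.

    read_feat, img_feat: (d,); W: (2d,), b: scalar (assumed shapes).
    A single gate blends what was read from memory with the image.
    """
    g = sigmoid(np.concatenate([read_feat, img_feat]) @ W + b)
    return g * read_feat + (1 - g) * img_feat
```

The design point the sketch captures is that both gates condition on the image: word importance is re-weighted per refinement step rather than fixed from a static text encoding.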