
The ARC Laureate Autonomous Learning project offers scholarship opportunities for prospective PhD students through the UTS PhD program.

Explore scholarship opportunities

All applications go through an assessment process, and shortlisted applicants will be interviewed. The selection criteria include:

  • A Master's (by research) or Honours (first class) degree in Statistics, Computer Science, or a similar discipline;
  • A strong background in machine learning, statistics, or optimisation;
  • Good programming skills and/or drone application experience;
  • Overseas applicants should have an IELTS score of 6.5 (with a minimum writing score of 6) or a TOEFL iBT score of 70 with a minimum writing score of 21.

Study with us

Interested applicants should begin by emailing their CV to jing.zhao@uts.edu.au.

Current PhD Students under the Laureate Project

One objective of the Laureate project is to develop Australia's AI research capacity, particularly in machine learning, by training the next generation of researchers.

Currently, seven PhD students have begun their research under the Laureate project, tackling challenging problems in the area. Their research topics and methodologies are outlined below.

Transfer Learning with Imprecise Observations: Theory and Algorithms

Abstract: The purpose of transfer learning is to provide a framework that leverages previously acquired knowledge to improve the efficiency of a new but similar task. Most existing methods share a common assumption that the observations in the source and target domains are precise. Unfortunately, precise observations are unavailable in some real-world cases. In this research, we consider a new, realistic problem called transfer learning with imprecise observations (TLIMO), in which the source and target domains contain only imprecise observations. This research aims to develop transfer learning theory that provides guarantees for the TLIMO problem, and to build models (e.g. multi-source transfer learning) for addressing it.
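
The problem setup can be illustrated with a small sketch. The code below is a hypothetical toy example, not the project's method: imprecise observations are represented as intervals, a ridge regressor is trained on an interval-valued source domain, and it is adapted to a small, shifted target domain by shrinking the target solution towards the source weights.

```python
# Toy sketch of transfer learning with imprecise (interval-valued) observations.
# Illustrative only; the featurisation and the shrinkage-based transfer step are
# assumptions for this example, not the TLIMO models proposed in the project.
import numpy as np

rng = np.random.default_rng(0)

def make_interval_data(n, shift=0.0):
    """Generate imprecise observations as intervals [lo, hi] around a latent x."""
    x = rng.normal(loc=shift, size=(n, 3))        # latent precise features
    width = rng.uniform(0.1, 0.5, size=x.shape)   # per-feature imprecision
    y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)
    return x - width, x + width, y                # only the intervals are observed

def featurise(lo, hi):
    """Represent each interval by its midpoint and half-width."""
    return np.hstack([(lo + hi) / 2, (hi - lo) / 2])

def ridge(X, y, reg=1.0, anchor=None):
    """Ridge regression; if `anchor` is given, shrink towards it (transfer step)."""
    d = X.shape[1]
    anchor = np.zeros(d) if anchor is None else anchor
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y + reg * anchor)

lo_s, hi_s, y_s = make_interval_data(500)              # large imprecise source domain
lo_t, hi_t, y_t = make_interval_data(30, shift=1.0)    # small, shifted target domain

w_src = ridge(featurise(lo_s, hi_s), y_s)
w_tgt = ridge(featurise(lo_t, hi_t), y_t, reg=5.0, anchor=w_src)
print("source weights:        ", np.round(w_src, 2))
print("adapted target weights:", np.round(w_tgt, 2))
```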

Unsupervised Spatiotemporal Sequence Prediction

Abstract: Unsupervised spatiotemporal sequence prediction aims to predict future outcomes based on the observed spatiotemporal data. This task represents a self-supervised representation learning problem with applications in intelligent decision-making systems. The intrinsic challenge of spatiotemporal sequence prediction is to effectively model the complex and often uncertain dynamics. Here the aim is to develop a series of predictive models that predict future outcomes by modelling the complex dynamics that underlie spatiotemporal systems. The proposed predictive models are expected to support a downstream decision-making application involving unmanned aerial vehicles.
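
As a rough illustration of the self-supervised training signal, the sketch below fits a linear next-frame predictor on a synthetic moving-pixel sequence; the "label" for each frame is simply the next observed frame, so no annotation is required. The linear model and synthetic data are toy stand-ins, not the predictive models proposed in the project.

```python
# Toy self-supervised next-frame prediction on a synthetic spatiotemporal sequence.
import numpy as np

H = W = 8                                   # small spatial grid
T = 200

# Synthetic sequence: a single bright pixel drifting across the grid over time.
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, t % H, (2 * t) % W] = 1.0

X = frames[:-1].reshape(T - 1, -1)          # frame t (input)
Y = frames[1:].reshape(T - 1, -1)           # frame t+1 (self-supervised target)

# Linear next-frame predictor fitted by least squares, a stand-in for a deep model.
Wmat, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ Wmat
print("mean squared next-frame prediction error:", float(np.mean((pred - Y) ** 2)))
```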

Heterogeneous Unsupervised Multiple Domain Adaptation: Theory and Algorithms

Abstract: Domain adaptation is a typical transfer learning method that enables a model from one domain to perform well in another, similar domain. Multiple unsupervised domain adaptation (MUDA) explores domain adaptation among multiple domains in which the target domain is unlabelled. Existing MUDA research generally focuses on adaptation from multiple source domains to a single target domain or a fused multi-target domain. This project aims to build an unsupervised domain-adaptation framework that can transfer knowledge from multiple source domains to independent target domains simultaneously. In particular, heterogeneous target domains designed for specific tasks will be considered in this MUDA framework.
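
One common building block of such frameworks is a discrepancy measure between domain feature distributions. The sketch below is an illustration only, not the proposed MUDA framework: it computes a kernel MMD statistic between each source/target pair of toy Gaussian domains, the kind of pairwise term an adaptation loss could minimise.

```python
# Toy pairwise domain-discrepancy computation for a multi-source, multi-target setting.
import numpy as np

rng = np.random.default_rng(0)

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Two toy source domains and two heterogeneous target domains (Gaussian blobs).
sources = [rng.normal(0.0, 1.0, (100, 5)), rng.normal(0.5, 1.0, (100, 5))]
targets = [rng.normal(0.4, 1.0, (80, 5)), rng.normal(2.0, 1.0, (80, 5))]

# A MUDA-style objective could minimise these pairwise discrepancy terms.
for i, S in enumerate(sources):
    for j, T in enumerate(targets):
        print(f"MMD^2(source {i}, target {j}) = {rbf_mmd2(S, T):.3f}")
```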

Continual Reinforcement Learning in Non-stationary Environments

Abstract: Most current approaches to reinforcement learning (RL) centre on learning in a stationary environment in which the transition and reward functions do not vary over time. In practical applications, AI agents must deal with complex non-stationary environments such as self-driving, traffic control, and robotics. The expectation is that RL agents will continuously learn in changing environments. A particular interest is drift detection in non-stationary environments, and detection-boosted methods that adapt the training of the current policy using the knowledge of previously well-trained policies and collected experiences.
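
The idea of detection-boosted adaptation can be illustrated with a toy non-stationary bandit: a sliding-window test on recent rewards flags drift, after which the agent snapshots its current value estimates into a small policy library and restarts learning. The environment, detector, and thresholds are hypothetical choices for this sketch, not the project's algorithm.

```python
# Toy continual RL loop: epsilon-greedy bandit with a crude reward-drop drift test.
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.8])   # arm reward probabilities before the drift
q = np.zeros(2)                     # running value estimates (the "policy")
counts = np.zeros(2)
policy_library = []                 # snapshots of previously well-trained policies
recent, window = [], 50

for t in range(4000):
    if t == 2000:                               # abrupt environment change (drift)
        true_means = np.array([0.9, 0.1])

    # epsilon-greedy action selection and incremental value update
    arm = int(np.argmax(q)) if rng.random() > 0.1 else int(rng.integers(2))
    r = float(rng.random() < true_means[arm])
    counts[arm] += 1
    q[arm] += (r - q[arm]) / counts[arm]

    # crude two-window drift test on the recent reward signal
    recent.append(r)
    if len(recent) > 2 * window:
        if np.mean(recent[:window]) - np.mean(recent[-window:]) > 0.3:
            policy_library.append(q.copy())     # keep the old policy for later reuse
            q, counts, recent = np.zeros(2), np.zeros(2), []
            print(f"t={t}: drift detected, policy reset "
                  f"(library size {len(policy_library)})")
        else:
            recent.pop(0)
```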

Adaptive Learning for Multiple Evolving Data Streams

Abstract: Evolving data streams arise when the data distribution in a non-stationary environment changes arbitrarily over time, giving rise to concept drift. Addressing concept drift correctly and appropriately will improve decision-making in data-stream mining. We focus on concept-drift problems across multiple evolving streams, with hybrids of labelled and unlabelled streams. The aims are to learn how to adapt to non-stationary unlabelled data streams, model drifting dependency, handle the label-evolving issue, and detect and understand drift to enable more efficient adaptation.
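
A minimal illustration of the detect-and-adapt loop for an unlabelled stream is sketched below: two consecutive windows of the stream are compared, and a large shift between their feature means signals drift and would trigger model refitting. The window size, threshold, and mean-shift test are illustrative assumptions, not the project's algorithms.

```python
# Toy two-window drift detector for an unlabelled evolving data stream.
import numpy as np

rng = np.random.default_rng(0)

def unlabelled_stream(n):
    """Unlabelled 2-D stream whose distribution shifts abruptly halfway through."""
    first = rng.normal(0.0, 1.0, (n // 2, 2))
    second = rng.normal(3.0, 1.0, (n - n // 2, 2))
    return np.vstack([first, second])

window = 100
reference, current = [], []

for i, x in enumerate(unlabelled_stream(2000)):
    (reference if len(reference) < window else current).append(x)
    if len(current) == window:
        shift = np.linalg.norm(np.mean(reference, axis=0) - np.mean(current, axis=0))
        if shift > 1.0:   # crude test on the (unlabelled) feature means
            print(f"sample {i}: drift detected (mean shift {shift:.2f}); refit model here")
        reference, current = current, []   # latest window becomes the new reference
```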

Autonomous Learning for Multiple Data Streams under Concept Drift

Abstract: Concept drift is a major challenge in data-stream mining research. Many adaptive learning frameworks for concept drift have been developed to make efficient and accurate predictions on data streams, but most focus on a single data stream. However, drift in multi-stream environments is more complicated, requiring a deep understanding of the correlations between data streams and the relationships between different types of drift. We aim to develop a set of autonomous learning algorithms for multiple data streams under concept drift that both improve real-time prediction performance and reduce computational costs.
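
The cross-stream aspect can be illustrated with a toy example in which two streams drift together: once drift is confirmed in one stream, the detector for the correlated stream temporarily becomes more sensitive, so the weaker correlated drift is caught earlier. The streams, thresholds, and sharing rule are hypothetical, a sketch of the idea rather than the proposed algorithms.

```python
# Toy example of sharing drift evidence across two correlated data streams.
import numpy as np

rng = np.random.default_rng(1)

def correlated_streams(n, drift_at):
    """Two 1-D streams that drift together; the second drift is weaker."""
    base = np.zeros(n)
    base[drift_at:] = 2.0
    s0 = base + rng.normal(0.0, 1.0, n)
    s1 = 0.5 * base + rng.normal(0.0, 1.0, n)
    return [s0, s1]

streams = correlated_streams(2000, drift_at=1000)
window, base_threshold = 100, 1.0
thresholds = [base_threshold, base_threshold]

for start in range(window, 2000 - window, window):
    for k, s in enumerate(streams):
        shift = abs(s[start:start + window].mean() - s[start - window:start].mean())
        if shift > thresholds[k]:
            print(f"stream {k}: drift detected around sample {start} (shift {shift:.2f})")
            thresholds[1 - k] = 0.3 * base_threshold   # sensitise the correlated stream
        else:
            thresholds[k] = base_threshold             # relax back while stable
```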

Graph Convolutional Neural Networks with Negative Sampling

Abstract: An interesting way to understand graph convolutional networks is as a message-passing mechanism in which each node updates its representation by accepting information from its neighbours (also known as positive samples). But beyond these neighbouring nodes, graphs have a large, all-but-forgotten world in which we find negative samples. The challenge is to learn how to select appropriate negative samples for each node and to incorporate the negative information into the representation update. We propose a generalised method for fusing negative samples into graph convolutional neural networks, and a determinantal point process-based method for efficiently obtaining such samples.
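
A rough sketch of such an update is shown below: a single graph-convolution layer aggregates messages from neighbours (positive samples) and subtracts a scaled contribution from a few sampled non-neighbours (negative samples). Uniform random sampling stands in for the determinantal point process mentioned above, and the update rule is illustrative rather than the proposed method.

```python
# Toy graph-convolution layer that mixes positive (neighbour) and negative
# (non-neighbour) messages into each node's representation update.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency matrix of a 6-node graph with two loose clusters.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
H = rng.normal(size=(6, 4))                 # initial node features
W = rng.normal(size=(4, 4)) * 0.1           # layer weights

def gcn_layer_with_negatives(A, H, W, n_neg=2, alpha=0.5):
    n = A.shape[0]
    A_hat = A + np.eye(n)                                   # add self-loops
    deg = A_hat.sum(1, keepdims=True)
    pos = (A_hat / deg) @ H                                 # mean over neighbours
    neg = np.zeros_like(H)
    for i in range(n):
        non_neighbours = np.where(A_hat[i] == 0)[0]         # the "forgotten" nodes
        sampled = rng.choice(non_neighbours,
                             size=min(n_neg, len(non_neighbours)), replace=False)
        neg[i] = H[sampled].mean(0)                         # mean over negative samples
    return np.tanh((pos - alpha * neg) @ W)                 # push away from negatives

H_new = gcn_layer_with_negatives(A, H, W)
print(H_new.round(3))
```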