- Posted on 18 Mar 2026
- 3-minute read
UTS researchers have developed the world’s first Fully Homomorphic Encryption (FHE)‑enabled Deep Reinforcement Learning (DRL) system: AI that can learn and make decisions while users' data stays encrypted.
In a milestone for the global artificial intelligence community, UTS researchers have announced a groundbreaking advance in preserving users’ privacy in the AI era, published in Nature Machine Intelligence. The framework allows AI to learn and make decisions in complex environments without ever “seeing” the sensitive data it processes.
The project, led by Associate Professor Hoang Dinh, Associate Professor Diep N. Nguyen and PhD student Hieu Nguyen (the Australia Vietnam Strategic Technologies Centre), was completed in collaboration with Dr. Kristin Lauter (Director of Research Science, North American Labs, Meta AI Research) and Associate Professor Miran Kim (Hanyang University in Korea).
Why privacy matters in the age of GenAI
DRL powers many of the advanced technologies we interact with today, from self-driving cars to the decision-making behind Generative AI. But one challenge has held the field back: DRL systems typically need access to real data to learn, which can expose personal, financial or commercially sensitive information to external platforms.
“For years, the field has struggled to reconcile AI with strict privacy requirements,” said Associate Professor Hoang Dinh.
“Our breakthrough shows AI can now learn and make decisions while keeping sensitive data completely protected.”
As AI becomes more embedded in our everyday lives, ensuring it can learn without compromising personal or organisational privacy isn’t just a technical challenge – it’s an ethical responsibility.
The breakthrough: encrypted intelligence
The UTS research team created a world-first privacy-preserving framework for DRL using FHE. Unlike traditional encryption, which requires data to be decrypted before it can be processed, FHE allows AI models to learn and make decisions while the data remains fully protected.
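To make the idea concrete, here is a minimal sketch of homomorphic computation using a toy Paillier cryptosystem. Paillier is only *additively* homomorphic, far more limited than the fully homomorphic scheme the UTS work relies on, but it shows the core property: arithmetic on ciphertexts translates into arithmetic on the hidden plaintexts, so a party holding only encrypted values can still compute with them. (The tiny primes are for illustration; real deployments use keys thousands of bits long.)

```python
import math
import random

def keygen(p=61, q=53):
    # Toy key sizes for illustration only.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification for g
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    L = (x - 1) // n               # the Paillier "L" function
    return (L * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
c_sum = (c1 * c2) % (pk[0] ** 2)   # multiplying ciphertexts adds plaintexts
print(decrypt(sk, c_sum))          # prints 42 = 17 + 25
```

Fully homomorphic schemes extend this property to both addition and multiplication on ciphertexts, which is what makes running a learning algorithm on encrypted data possible at all.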
The core technical innovation is the development of an HE-compatible Adam optimiser. Modern AI training depends heavily on non-linear mathematical operations, including functions such as inverse square roots, which are extremely difficult to compute on encrypted data and typically break under homomorphic encryption.
The team successfully redesigned these learning algorithms to bypass the need for high-degree polynomial approximations of inverse square roots on encrypted data, and maximised learning efficiency for the DRL agent.
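The obstacle is easy to see in the standard, plaintext Adam update. The sketch below is textbook Adam, not the team's HE-compatible redesign (whose details are in the paper); the commented line containing the inverse square root is the non-polynomial operation that is prohibitively expensive to approximate on encrypted data.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One standard (plaintext) Adam update for a single parameter."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    # The 1/sqrt(v_hat) term is the HE-hostile, non-polynomial step:
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 from theta = 1.0
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    grad = 2 * theta                       # gradient of theta^2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Under homomorphic encryption, operations like this inverse square root would normally have to be replaced by high-degree polynomial approximations, which are slow and error-prone; the team's contribution is an optimiser redesign that sidesteps that approximation entirely.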
Early results are promising. The encrypted DRL model performs within 10% of the accuracy of standard, unencrypted techniques – all while maintaining full data confidentiality.
In the proposed system, users protect their privacy by encrypting all system information before sharing it with an external AI agent. The AI learns how to make decisions directly from this encrypted data, without ever seeing the original information. The decisions it produces also remain encrypted and are only decrypted and applied by the user locally. This process repeats until the system achieves the desired performance, ensuring strong privacy throughout the entire learning and decision-making process.
“This could be a cornerstone for the next generation of AI, including Generative AI,” said Associate Professor Diep Nguyen.
A global collaboration
The breakthrough was made possible through a global collaboration between UTS, Meta and Hanyang University, brought together through the Australia Vietnam Strategic Technologies Centre. By uniting UTS’s theoretical and algorithmic strengths with international academic and industry expertise, the partnership is tackling a long-standing challenge in privacy-preserving AI.
“By collaborating with partners like Meta and Hanyang University, we were able to combine theoretical innovation with real-world insights. It’s this kind of international partnership that positions UTS at the forefront of privacy‑preserving AI research,” said Hoang.
“That’s why researchers are focusing not just on what AI can do, but how it can do it responsibly,” he said.
