• Posted on 10 Sep 2021
  • 47-minute read

The tech revolution is underway. AI is a firm fixture in our lives.

AI could be a powerful tool in disrupting disadvantage, but can equally be used to further systemic discrimination or harm communities.

In this session, Ed Santow, Mikaela Jade, Distinguished Professor Fang Chen, and Verity Firth discuss whether Australia is ready to embrace the opportunities of technological innovations in a way that keeps human rights and dignities at the core.


Descriptive transcript

Thank you, everybody, for joining us for today's event. I know there will probably be more people entering the virtual room as I speak, but I will kick off our event so that we start on time.

Firstly, I would like to acknowledge that wherever we are in Australia, wherever you are joining us from, we are all on the traditional lands of First Nations people. These were lands that were never ceded. I want to pay particular respect to Elders past and present of the Gadigal people of the Eora Nation, which is where I currently am, in my home in Glebe, but it's also the land upon which University of Technology Sydney is built. So a special respect to the traditional owners and custodians of knowledge for the land on which our university is built.

I also want to pay respect to the land that all of you are sitting on at the moment and hope that you, too, are paying that respect. My name is Verity Firth, I'm the Executive Director of Social Justice here at UTS, and I also head up our Centre for Social Justice and Inclusion. It's my great pleasure today to be joined by some of UTS's brightest minds, our newly appointed Industry Professor, Ed Santow, our Distinguished Professor Fang Chen, and award-winning alumna Mikaela Jade.

But a couple of pieces of housekeeping first. We do live caption our events, and if you want to view the captions, you can click on the 'CC' button in the bottom of your screen in the Zoom control panel. We're also going to post a link in the chat which will open the captions in a separate window if you prefer to view them that way.

There will be an opportunity to ask questions, so if you do have any questions, please type them into the Q&A box, which you can also find in the Zoom control panel. This also gives you an opportunity to upvote other people's questions, so I then moderate the questions, and I do tend to ask the questions that have the most interest and they also tend to be the questions that are most relevant to the topic, so please do keep your questions short and relevant to what we're talking about today.

As I mentioned a moment ago, UTS is delighted to be joined by Edward Santow. He is a former, just recently former, Australian Human Rights Commissioner, and he's joining UTS in the role of Industry Professor, Responsible Technology. We have previously partnered with the Commission on a three-year-long project around human rights and technology, with recommendations from academics and practitioners across the whole spectrum of UTS's disciplinary fields, feeding into the final report released in May this year.

Emerging technologies, including AI, are already a firm fixture in our lives. They're quietly reshaping our world. These advancements hold immense potential to improve lives and connect people, but are equally fraught with risk. We recognise that technology development and deployment doesn't happen within a social vacuum. In fact, social, political, and economic inequality can be reproduced and indeed amplified in tech innovation, if we're not careful.

Human rights must therefore be the bedrock for anyone involved in the development and deployment of these technologies. At UTS, we are striving to equip society with the literacy required to thrive alongside AI as its partners, to demystify the technology, ensure that nothing is taken for granted, and that there is transparency and accountability built into the fabric of the tech.

We're at a critical point as use of AI grows exponentially in the government, private sector, global security, and in our schooling system. Are we, in Australia, equipped to embrace the opportunities of technological innovations in a way that keeps humans and their rights and dignities at the core?

So it's my great pleasure to welcome Ed Santow to offer some brief opening remarks to start us off, and then we're going to move to a panel discussion with Fang Chen and Mikaela Jade. Edward Santow commenced last week as UTS's Industry Professor. He is leading a major UTS initiative to build Australia's strategic capability in artificial intelligence and new technologies. This initiative will support Australian business and government to be leaders in responsible innovation by developing and using AI that is powerful, effective, and fair.

Ed was Australia's Human Rights Commissioner from 2016 until July this year, and led the most influential project worldwide on the human rights and social implications of AI. Before that, he was Chief Executive of the Public Interest Advocacy Centre, a leading non-profit organisation that promotes human rights through strategic litigation, policy development, and education. He was also previously a Senior Lecturer at UNSW Law School, a research director at the Gilbert and Tobin Centre of Public Law, and a solicitor in private practice. Welcome, Ed.

Thank you so much, Verity, for that warm introduction. It's a great pleasure to be here. I too am beaming into your lounge rooms from Gadigal land, and I pay tribute to their Elders past, present, and emerging. It's also a real honour to share this virtual podium with two other people whom I really admire, Professor Fang Chen and Mikaela Jade, and I'm sure we'll hear more from them as the event goes on.

I have a relatively limited role here at the start to set the scene. In a moment, I'm going to share with you some slides. They'll mostly be pictures, and for anyone who has a vision impairment, I will describe what's on the screen.

So I am genuinely excited about the rise of new technologies, including artificial intelligence, and it's certainly true that they are, as Verity said, reshaping our lives and our world. We can measure this in lots of different ways. We certainly know that AI, or artificial intelligence, is growing exponentially. We can see the global market for machine learning, which is one of the key technologies that underpin AI, is growing very rapidly, as you can see on your screens now. And we're seeing, particularly in the private sector, that businesses are really starting to grasp how AI can be used in their operations. Nearly half of all businesses are now using AI in at least one function of what they do. But there are some risks, and we are really wanting to focus in on what some of those risks are, in order to make sure that AI gives us the future that we want and need, and not one that we fear.

So in the work that Verity referred to a moment ago, the partnership between the Human Rights Commission, UTS, and a couple of other core partners, we really went deep on how AI can not only be a force for good, but it can also cause harm. We've seen, for example, with the rise of the problem of algorithmic bias, that you can have a machine learning system that spits out decisions that are at least sometimes unfair. A machine learning system, by its very nature, learns from previous decisions that it's been trained on. And if there are problems with those previous decisions, then that will be baked into the new system. As I say, that can produce unfairness. In more extreme situations, that can actually be unlawful. It can result in unlawful discrimination. We've seen, especially overseas, how banks and others have used machine learning systems to make decisions that unfairly and unlawfully disadvantage people of colour, women, people with disability, and other groups. And that obviously presents a regulatory risk for companies and government agencies who use AI or machine learning the wrong way.

And then finally, there's just a risk of making the wrong decision. If your AI system spits out a decision that happens to be unlawful and unfair, it's probably also wrong. So to give a practical example, if the bank uses a machine learning system that thinks that women will be bad at paying off home loans and makes decisions accordingly, then they're going to lose a lot of really good customers who happen to be women. And they'll probably lend more than they should to customers like me, white, middle-aged men who tend to be privileged in these sorts of systems. So those are the sorts of risks that we are seeing with first-generation AI, and that's something that we think has not been properly taken into account in the rise of AI.
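
To make the mechanism concrete, here is a minimal sketch in Python of how bias in historical decisions gets "baked in". Everything in it is hypothetical: the data, the 30% prejudice rate, and the loan setting are invented for illustration, not drawn from any real lender.

```python
# Hypothetical sketch: a model trained on biased historical loan
# decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(60, 15, n)    # annual income in $k: the real signal
is_woman = rng.integers(0, 2, n)  # protected attribute

# Historical approvals: mostly income-driven, but women were sometimes
# refused regardless of income -- the 'problems with previous decisions'.
approved = (income > 55) & ~((is_woman == 1) & (rng.random(n) < 0.3))

# Training on data that encodes the protected attribute bakes the old
# prejudice into the new system.
X = np.column_stack([income, is_woman])
model = LogisticRegression().fit(X, approved)

p_man, p_woman = model.predict_proba([[65, 0], [65, 1]])[:, 1]
print(f"P(approve | man, $65k income):   {p_man:.2f}")
print(f"P(approve | woman, $65k income): {p_woman:.2f}")
# Same income, lower approval probability for the woman: the bank loses
# good customers, and the decision may amount to unlawful discrimination.
```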

But there's another problem as well. This is my second asterisk. There's a skills shortage here in Australia and in lots of other countries when it comes to artificial intelligence. In fact, there are two skill shortages. There's one that we already know about and talk about all the time. There's a technical skills shortage. So people who graduate with STEM backgrounds, particularly data science, you can see on the grey line there that we are kind of increasing the number of people who have that technical AI expertise in their back pocket when they come out of university or technical institutions. And that is increasing, but it's not increasing quickly enough. The dotted red line shows that there is a gap of over 70,000 graduates with those technical skills that we're just not going to have by the end of this decade.

But anyway, we already know about that technical skills drought. That's well understood. And there's a whole bunch of strategies in place that are designed to address that problem. There's another skills drought as well that is much less talked about and we consider to be incredibly dangerous and needs to be addressed. And that other skills drought is in strategic expertise. So you'll see on the right of your screen that in a recent piece of research, the vast majority of executives in companies who were surveyed said that they know that their company needs artificial intelligence. 84% said that. But about three quarters of those people basically said, we've got no idea how to use AI. And so that is a really dangerous combination of statistics.

Because what it shows is that companies feel this enormous pressure, and government agencies do as well, to invest in AI and use AI, but they don't feel that they have the skills that they need in order to do that well. And so what happens when those two things collide? You have problems, which in Australia are perhaps best exemplified by the Robodebt disaster. And we could have a pretty arid debate about whether Robodebt truly involved artificial intelligence. The short version is it was an algorithm with automation, which are technologies that are associated with AI. But let's put that debate to one side. The crucial issue is this. You have a government agency that was trying to recover debts that they considered were owed by people who received welfare in Australia. They wanted to use new technology to do that efficiently and quickly, but they weren't able to work with the private sector effectively to develop a system that was fair, accurate, and accountable.
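
The core technical flaw, as widely reported, was income averaging: annual income data was smeared evenly across the year's fortnights and compared against what people had reported fortnight by fortnight. A simplified sketch of that logic, with an invented payment threshold, shows how averaging manufactures debts for people with irregular earnings:

```python
# Simplified, hypothetical sketch of the income-averaging flaw:
# annual income divided evenly across 26 fortnights, then compared
# with the income actually reported in each fortnight.
FORTNIGHTS = 26
CUTOFF = 500.0  # assumed fortnightly income limit for the payment

def debt_flagged(annual_income: float, reported: list[float]) -> bool:
    """Raise a 'debt' if averaged income exceeds the cut-off in any
    fortnight where the person reported earning less than it."""
    averaged = annual_income / FORTNIGHTS
    return any(r < CUTOFF <= averaged for r in reported)

# A casual worker who earned $14,300 in one intense half-year, then
# nothing while correctly receiving a payment, is falsely flagged:
reported = [1100.0] * 13 + [0.0] * 13
print(debt_flagged(14_300, reported))  # True -> a debt that never existed
```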

And those three principles, fairness, accuracy, and accountability, are absolutely crucial whenever a government agency or a company is making a significant decision that can really affect people. So that's the big problem, right? And that's where this new initiative that I'll be leading at UTS comes in. We really want to be at the forefront of building Australia's AI capability so that companies and government agencies can use artificial intelligence smartly, responsibly, and in accordance with our liberal democratic values. And that means respecting people's basic human rights.

So from the outset, we're going to be working with companies at the kind of C-suite, board of directors level, and the equivalent senior leaders in government to help them better understand what are some of the risks and opportunities that AI brings, to help them set good strategy for their organisation so that they can say, look, this is a really smart area where we can perhaps invest in AI. And if we put in place appropriate governance and other mechanisms, we can do so safely and respecting our citizens or our customers' basic rights.

And then if you go one layer down, we've seen time and again that you have some senior person in an organisation make the decree almost like a monarch, we are going to use AI in this particular area, but that then becomes the task or the job of a bunch of people who are generally in middle management type positions. Now, if they also lack the basic understanding of how to do that safely and effectively, then it's highly likely that the project will go off the rails. There's a really interesting piece of research that I came across recently, where a large number of companies were surveyed about how they've used AI. And fully a quarter of those companies said that their AI projects were a failure. You think about the psychology behind that, right? Most people in companies don't like to admit that, but a quarter of them made that acknowledgement. Which shows that even from the company perspective, if you get these things wrong, if you either get your strategy wrong, or your implementation or operationalisation of AI wrong, then you're going to have very serious problems: not only are you not going to be able to do the right thing by your organisation, but you're also much more likely to cause harm.

So I'm going to leave you just with my contact details. As Verity said, this is literally my first week as Industry Professor of Responsible Technology at UTS. I'm very excited to be in this role. And I know that there are a lot of people in this session who may well be interested in reaching out and being in contact with me. And so you've got my contact details on the screen. For anyone who can't read it, it's edward.santow@uts.edu.au. And with that, I'm going to stop sharing my screen and hand back to Verity.

Thank you, Ed. Thank you for that brilliant introduction to today's discussion. So I'm now going to introduce our panellists for this next part of our session.

Firstly, it's my honour to introduce Distinguished Professor Fang Chen and Mikaela Jade to the discussion. Distinguished Professor Fang Chen is a prominent leader in AI and data science with an international reputation and industry recognition. She has created and deployed many innovative solutions using AI and data science to transform industries worldwide. She won the 'Oscar' of Australian science, the Australian Museum Eureka Prize, in 2018, and is also a 2021 winner of the Women in AI Australia and New Zealand Award.

She's been appointed to the inaugural NSW Government AI Advisory Committee and serves as the co-chair of the National Transport Data Community of Practice at ITS Australia. Professor Chen has more than 300 publications and 30 patents in eight countries and has delivered many public speeches, including TEDx. At UTS, she is Executive Director, Data Science. Welcome, Fang.

Thank you so much, Verity.

Mikaela Jade is the founder and CEO of Indigital, Australia's first Indigenous edutech company. She has a background in environmental biology from UTS and a Master of Applied Cybernetics from ANU, as well as having spent most of her career as a National Parks Ranger, which actually is fantastic. I want to talk to you about that later.

Mikaela's company Indigital provides development and delivery of digital skills training platforms and programs that specialise in fourth industrial revolution technologies, including artificial intelligence, machine learning, Internet of Things and augmented and mixed realities. Indigital's programs are designed through an Indigenous cultural lens, using cutting-edge digital technologies to translate cultural knowledge within Indigenous communities, showcase their cultural heritage in compelling ways and sustainably create jobs from the digital economy. Welcome, Mikaela.

Thank you, Verity. Thanks for having me.

Now, I'm going to come first to you, Mikaela, because, really, I think it's a great opportunity to tell people about Indigital. It's quite an incredible company. It's doing really interesting work in demystifying and offering access to cutting-edge technology and skills to the very young and also, of course, to the very remote. So how important is it that we develop these abilities in our society across all age groups and generations?

Super important, Verity. And I'd just like to say I'm coming to you from Ngunnawal country today, and I'd just like to pay my respects to Elders past and present who walk with us on this journey. I'm very privileged to work with Ngunnawal community here and my own community, Dharug.

So it's super important because at its foundation, the suite of technologies known as artificial intelligence is really about economic, political, cultural and historical power. And as a First Nations woman, I'm not particularly satisfied with the system that we have right now, and I wouldn't like to see it scale as it is, without the input of First Peoples in the design of artificial intelligence systems, and in fact of the systems they live within, because we have to remember that AI is a logic in a system that's much broader than just computers. It's about communities and people and the planet and all the things that we're surrounded with as humans. So not having our voices in the design of these systems is inherently dangerous, because we're leaning into economic, political, cultural and historical power structures that exist that don't always benefit First Peoples, and don't always benefit young people, or even people in rural and remote communities.

So being able to be involved in understanding what they are and having a degree of literacy around technologies like AI is integral to us being able to design our future because these systems will underpin a lot of our life. And I think when I'm thinking about justice, education systems in particular, I don't know how our people are going to fare if we're not involved in the design because only we can speak from lived experience of our communities.

I think it matters for entrepreneurship too, and for opportunities for wealth creation, and even for caring for country: AI will extend to managing conservation across large parts of our estate as well. So, yeah, I think it's important.

Yes, that sums it up pretty well, Mikaela. I like your line about economic, political and cultural power vested in AI and making sure that there's actual equal access to that power. To what extent, now this is actually a question for all the panellists, but I might start with you, Fang. To what extent is AI already being used in Australia and are the public equipped to comprehend its use? How do we fare against the rest of the world?

This comes to my favourite subject. I've been working in AI and data science for more than 20 years. Looking at Australia, we see a lot of applications already in place. Just to give some examples, the general public may not be aware, but it has already happened. Take the projects our UTS team has done: predicting water quality so that we can better manage chemical dosing in drinking water or sewer water, and predicting leaks and breaks in water pipes to reduce service interruptions, and so on.

From the most recent data we collected, the work the UTS team did has saved more than 5,000 megalitres of water since December 2019. That's thousands of Olympic-sized swimming pools. And that's not to mention other things, like the Harbour Bridge, where we monitor the safety and integrity of the structure, or how to predict fruit growth, how to manage traffic, how to understand air traffic controllers' workload. The list can go on and on.

However, on one side, the success stories probably haven't been publicised well enough. That would all help the general public to understand the successes, understand what's in and what's out, and how the technology has been used in those success stories. And this is an issue worldwide, not only an Australian issue. I think we've heard a lot of stories from the States and from Europe about misuse, or about concerns around data and bias that haven't been properly addressed. On the other side, just my opinion, we haven't heard enough stories about the good uses. If, with the good and the bad, we then come up with an approach for how to achieve the good ones and how to avoid the bad ones, that would be really ideal.

For you, Ed: so Fang outlined there all these applications of both technology and AI being used across our systems, in predicting leaks and traffic monitoring and all of this sort of stuff. How aware are the public that this is even happening in the first place? And does it matter?

I mean, the short version is often we're not very aware at all. And I think it does matter. So just take something that many of us take for granted, like using an AI-powered maps application on our smartphones, like Google Maps or Apple Maps or whatever. What that does is it provides us with, usually, a much better, more efficient way of getting from A to B. And I can say this as someone who's quite dyslexic; I was one of those people that would be constantly turning a paper map upside down and so on. On my journey from A to B, I would actually go to C, D and E and often never get to B. And so there's real, real benefit from that.

But it does change our brains. And I'm not overstating this, it literally changes our brain. So when we're relying on one of those kind of maps applications where we basically follow a little blue dot to get from A to B, we engage our active brains at least 30% less than if we were kind of self-navigating, even using a kind of conventional map. And what that means is if you take the application away and then ask us tomorrow to try and get our journey from A to B just from memory, we're 30% less likely to be able to do it. So it is having a really fundamental effect on the way in which we live our lives and the way in which our brains work.

Now, if we don't know that, then we are not well placed to take advantage of it. And as I said at the outset with that example, there are real advantages, particularly for constantly lost people like me, in being able to use AI, but we're not able to take the protective action that we need in order to guard against the risks.

Mikaela, the work that you're doing, I mean, presumably part of what you're doing in schools is equipping the next generation to properly comprehend the use of AI and have that capacity to properly engage with the technology, is that correct?

Yeah, and we do that for a couple of reasons, and one of the reasons, I know, is close to Ed's heart in particular: we consider AI an extractive industry. Part of that is modern slavery, and the people who are most at risk of modern slavery practices in every sector are First Nations peoples. So we're already seeing AI-based companies approaching First Nations communities and wanting to upskill us in AI, with a view to us participating in what can be classified as modern slavery practices.

So being aware of the labour and skills markets around AI is really important for young people in schools, so they can understand what they're participating in, and also be able to see the future as well. There are some really exciting career opportunities for young people in Australia, particularly around the space sector, and it's about being able to understand what those opportunities are from a technical perspective, not just going to young people and saying, hey, do you want to have a career in space? It's awesome. And kids start thinking about being an astronaut when there are myriad other jobs they could find really fulfilling and satisfying, working on country in rural and remote Australia.

So really helping them understand what those opportunities are and then also the opportunity to create their own businesses from country like I've been able to do. So, yeah, and what the future looks like, like if you haven't seen mixed reality, you don't really know what it is. So you can't start thinking about a future that you might have in that if that's what really floats your boat and you certainly wouldn't be able to determine the pathway to get there. So that's what we help students understand when we're working with them.

Yeah, that makes total sense. So, Fang, the need for ethical frameworks around AI is pretty well recognised. In practice, are these ethical principles translating into ethical actions? You know, is it actually working, and can more be done to make it easier for people?

That's a great question. I think you already alluded to it: the frameworks are well recognised. We've also done a research analysis of hundreds of different documents, from standards, government policies, guidelines, and academic papers. Universally, the consensus is there on the top ethical principles. Just to mention a few of them: transparency, accountability, fairness. Those are the top ones. However, the implementation framework, the practice, is still in its infancy. It's definitely an area that needs to boom, in terms of how to take principles to practice, how to clearly define processes and even tools to help people clarify whether or not they have followed those principles.

So give them a pathway to do the assessment, to say yes, they have followed the principles, or they haven't, or here are the areas they need to improve, whether they are doing AI development, procurement, or use. And this is not a magic wand or rocket science anymore. As I said, take the areas I mentioned, like fairness and transparency: quite detailed assessment measures have been researched and published. The question is how to take those into active practice, not just debate on the principles. I think we have done that exercise well on the debate; now it's about how to implement, and what the necessary steps are to take things forward.
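
As one small illustration of what taking a principle to practice can look like, a published fairness measure such as the demographic parity difference can be computed directly over a system's decisions. This is a minimal sketch with hypothetical audit numbers, not a complete assessment framework:

```python
# Minimal fairness check: gap in positive-decision rates between groups.
import numpy as np

def demographic_parity_difference(decisions, group):
    """decisions: 0/1 outcomes; group: 0/1 group membership.
    Returns the absolute gap in approval rates (0.0 means parity)."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Hypothetical audit: 70% approvals in group 0, 50% in group 1.
decisions = [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50
group = [0] * 100 + [1] * 100
print(demographic_parity_difference(decisions, group))  # 0.2
```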

Yes, and that leads quite nicely to my next question to Ed, which was going to be, OK, so we've got the principles, Fang is talking about the challenge of implementation. You've called for Australia's government and its institutions to be model users of AI, almost like to model what best practice looks like. So what does it look like?

There are a couple of things in particular. The first is acknowledging that this is often pretty experimental technology. Historically, not just Australia but a lot of countries have a very bad record of beta testing new technology on literally the most vulnerable citizens in our community. And frankly, that's what seems to have happened with Robodebt. You know, it's hard to identify a group of people in our community who are more vulnerable than people who have at some point in the last five to seven years received a welfare payment. And so to try this new technology on that group and not put in place adequate safeguards, that's not what we should be doing. We should learn the lesson from Robodebt.

And one of the key lessons is: trial it in a safe way, and make sure that when you go live, you do so in respect of people who are able to protect themselves reasonably well. The second thing comes back to something I said right at the outset: those three principles of fairness, accuracy and accountability are critically important. What we saw with Robodebt was that there was a very high error rate. People were being sent debt notices, they were being told they owed the government money, and many of them didn't actually owe the government a cent. And so it's crucially important, whenever the government or a company is making a decision, that they do so accurately and that they not have a high error rate.

When we talk about accountability, what we mean is making sure that if there is an error, if there's a problem with the decision-making process, people are able to get redress simply: you don't need to pay a high-priced lawyer to untie the Gordian knot that you find yourself in, but rather you can have a simple process of getting the problem fixed. And fairness is an overarching principle that's really important. That means that, for example, sometimes when you're claiming money back from someone that you may have overpaid a hundred dollars five, six, seven years ago, maybe it's actually not really fair to claim that money back. So you need to take an overarching look at the system you're creating and make sure that it really works fairly for people.

And I mean, Robodebt's a particularly bad example. My next question was going to be around how easy is it for people to interrogate the process of AI or the decisions made by AI? In your experience, I mean, how easy is it? Not very easy from what you're saying, but are there other better examples of where it is easier to interrogate those decisions?

Yeah. So, I mean, for momentous decisions in people's lives, there is usually a requirement that the decision maker give you some reasons. And we found this at the Human Rights Commission in the way in which we asked questions of the community, with our polling but also our other consultation processes. People say: we know we're not always going to get decisions that we like, but we want to make sure at the very least that we weren't treated unfairly, that we weren't the victims of discrimination. And when you are given reasons for your decision, that's where you're able to determine whether the decision is one that you may just have to suck up and accept, or one you may want to challenge because it wasn't the right one. So if you can't get those reasons, that's a crucial problem.

Now, it's a design question. It may often be easier to design AI-powered systems that don't provide reasons, but there's no technical reason why that has to be the case. I mean, frankly, for a human, it's easier just to give you a decision. I find this as a parent, not infrequently. One of my kids says, I'd like the third ice cream today. It's easier for me to say no than to say no, and here are the reasons why. But those reasons are very important. And particularly if we're talking about, I don't want to be glib about it, much more momentous decisions like welfare decisions, bank loan decisions, those sorts of things that really affect people, then those reasons are crucially important. And decision-making systems need to be designed to accommodate that requirement, because without wanting to go too highfalutin about it, not only is it practically really useful, it's a principle on which our legal system and our entire liberal democracy depends. And that is the rule of law: that when someone makes a decision, they can be held to account that they have followed the law in making that decision. So an opaque decision, a black-box decision using AI, just doesn't cut the mustard.
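
To sketch the design point, a decision system can be built to return its reasons alongside the verdict, for example by reporting each factor's contribution to the score. The features, weights, and cut-off below are invented for illustration, not any real lender's model:

```python
# Illustrative decision function that returns reasons with its verdict.
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    reasons: list[str]

WEIGHTS = {"income_k": 0.8, "existing_debt_k": -1.2, "years_employed": 0.5}
CUTOFF = 40.0  # assumed approval cut-off

def decide(applicant: dict) -> Decision:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Reasons: each factor's signed contribution, largest effect first.
    reasons = [
        f"{f} contributed {c:+.1f} to a score of {score:.1f} (cut-off {CUTOFF})"
        for f, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return Decision(approved=score >= CUTOFF, reasons=reasons)

result = decide({"income_k": 65, "existing_debt_k": 10, "years_employed": 4})
print(result.approved)          # True
for reason in result.reasons:   # auditable grounds for the decision
    print(reason)
```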

Fang, what do you think about how ordinary people interrogate the process of AI, or decisions made by AI? How difficult is that?

I want to say that it's easy. However, there may be a way to open it up a bit. I like an analogy with a child. AI is actually like a child: we teach the AI system to do something, we influence it, we give it some principles, and then it follows how we design the system. Many years ago, when my daughter was only about 12 months old, she knew where to sit, where to put things, right? She knew what to call the table, what to call the chairs. But even nowadays, it's not easy for an AI system to recognise all the different chairs and tables, what sort of surface you can put things on, what sort of surface you can sit on. On this point, humans are far more advanced than current AI systems.

Having said that, it's about how we design and set the boundaries, and let the AI system perform with the learning examples we give it, so that the system can keep learning and keep improving. And while we're doing that, we can set certain expectations. Setting expectations means, first, that the system is not going to be 100% correct, because it's a probability-based system.
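
One practical way to act on that expectation is to decide in advance what the system may do on its own and when a human must step in. A minimal sketch, with an assumed confidence threshold as the policy choice:

```python
# Sketch: accept the model's answer only when it is confident enough;
# otherwise route the case to a human reviewer. The 0.9 threshold is
# an assumed policy setting, not a recommended value.
def triage(prob_positive: float, threshold: float = 0.9) -> str:
    if prob_positive >= threshold:
        return "approve automatically"
    if prob_positive <= 1 - threshold:
        return "decline automatically"
    return "refer to human reviewer"

for p in (0.97, 0.55, 0.04):
    print(f"{p:.2f} -> {triage(p)}")
```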

If you are interested in hearing about future events, please contact events.socialjustice@uts.edu.au.

We really want to be at the forefront of building Australia's AI capability so that companies and government agencies can use artificial intelligence smartly, responsibly and in accordance with our liberal democratic values and that means respecting people's basic human rights. Ed Santow

Being aware of the labour and skills markets around AI is really important for young people in schools, so they can understand what they're participating in, and also be able to see the future. Mikaela Jade

AI is like a child: we teach the AI system to do something, we influence it, we give it some principles, and then it follows how we design the system. We can set certain expectations, and setting expectations means the system is not going to be 100% correct, because it's a probability-based system. Fang Chen

Speakers

Edward Santow is Industry Professor – Responsible Technology at UTS, and works with the business, financial and government sectors to address technical, legal and human rights challenges in the area of AI. He was Australia’s Human Rights Commissioner from 2016–2021. He has led the most influential project worldwide on the human rights and social implications of AI, involving extensive public and expert consultation.

Mikaela Jade is the Founder & CEO of award-winning company Indigital – Australia’s first Indigenous Edu-tech company. As part of their work, Indigital delivers Indigenous-designed digital skills training for primary and high school students. It enables Indigenous and non-Indigenous kids to connect with and learn from Indigenous Elders about cultural knowledge, history and language, while learning digital skills in cutting-edge technologies.

Distinguished Professor Fang Chen is a prominent international leader in AI/data science. She has created and deployed AI/data science solutions to transform industries worldwide. She won the ‘Oscar’ of Australian science, the Australian Museum Eureka Prize in 2018 for Excellence in Data Science, and is the 2021 Winner of Women in AI Australia and New Zealand Award in AI in Infrastructure. She has been appointed to the inaugural NSW Government AI Advisory Committee.
