• Posted on 2 Jun 2023
  • 86-minute read

Today, Professor Nicholas Davis and Sophie Farthing discussed the key trends in corporate governance and the existing obligations that apply to organisations using AI today. This discussion took place following the launch of HTI’s The State of AI Governance in Australia report.

The human implications of corporate use of AI

Around two-thirds of Australian businesses report using or actively planning to use AI systems in their business operations. While a range of existing laws of general application pertain to the design, development and use of AI systems, Australia does not yet have AI-specific laws or regulations. Without proper governance, the rapid deployment of AI systems exposes organisations, employees, consumers and the broader community to severe harms and significant risks.

[Embedded video: K8k7ZrJhpYY]

Descriptive transcript

Welcome, everyone. For those who don't know me, I'm Sophie Farthing. I'm head of the Policy Lab here at the Human Technology Institute.

Before we kick off, I would like to acknowledge the Gadigal people of the Eora Nation, which is where I am joining this call from. I appreciate everyone is possibly all over Australia, maybe even beyond, but I would like to acknowledge the Gadigal people and pay my respects to their Elders, past and present, as well as acknowledging emerging leaders and Elders.

Thank you. We have a jam-packed hour today to talk about a very exciting report that HTI published this week on the state of AI governance in Australia. Just one housekeeping thing to note is that we are recording this session, so just bear that in mind.

I will hand over now to our Co-Director, Professor Nicholas Davis, to start us off and give us the framework for what we're discussing today. Over to you, Nick.

Thank you, Sophie, and welcome, everyone. I'm joining from Ngunnawal land down here in Canberra, and it's great to be with you all. It's amazing how literally just putting a link on LinkedIn can get so many people interested and engaged in AI governance, but it is a bit of a topic du jour.

Today, we're really going to do only two things together. First of all, I'm going to have a chat with Sophie about some of the elements in the report. We thought we'd do that in a bit more of a conversational style than death by PowerPoint for 15 minutes or so. And then by the time we get to the half hour, and maybe even before, we'd love to start engaging you all in conversation about this, because you are all experts in this area from different perspectives—as users of technology, as leaders in your organisations, as non-executive directors or senior executives, or as members of the media and commentators in this space. So we'd love for you to really push us and one another, particularly as to where all this goes.

We've got a lot of talk around risk and harms today, around the shortfalls in corporate governance. But it's quite good to think about—so what, where next, how does this get taken forward? So I will have a couple of slides running along as Sophie and I talk. I'll stop that when we finish.

There are a couple of ways that you can contribute directly in this session. You can throw your hand up if you're interested to jump in and speak. I think we'll leave it so that everyone can unmute themselves at the moment. But if for any reason we end up getting 500 people on the call, we might go to a little bit higher level of moderation. But I think we're pretty good with this group.

Second, there is the chat function, and I can see that there's already a bit of chat coming in. So please do feel free to ask questions or exchange. And then as well, if you want to ask a question that gets kind of recorded and moderated, there's the Q&A function as well on your panel there in Zoom.

Before I pass back to Sophie, I will say that the work that we're doing here in the Corporate Governance Program is a three-year program that's supported by the Minderoo Foundation. We're also incredibly grateful to our HTI advisory partners, Atlassian, KPMG and Gilbert & Tobin. I know that some of you are here today, so thank you for joining.

But really, this is just the first phase of a longer conversation around the corporate governance of AI. And I think the word conversation is our trigger then, Sophie, for me to pass back to you for a chat.

Thanks, Nick. So, yeah, as Nick mentioned, this is a chat about the content of this report, and we are really hoping you will join us in this conversation about halfway through. So my job is to keep Nick to time. But, Nick, can we start off—just the timing of this project and this report this week is pretty incredible. I think we're all keenly aware of the kind of week we've had.

So on Tuesday, we had an open letter signed by AI experts from around the world, which was pretty alarming in terms of talking about human extinction and the risk of AI. HTI published our report on Wednesday. And yesterday, of course, we had the Australian government open up a pretty wide-ranging public consultation on what Australia needs to do to regulate AI. So briefly, Nick, can you tell us, you know, why this project and why now?

Yeah, thanks, Sophie. Well, first, I guess it's important to recognise that we are really focused on the promise of AI as much as we are on the risks. This is really the Human Technology Institute taking a human-centred approach to looking at these issues. And it's good timing in the respect that the launch coincided with those other announcements you mentioned. But we've been working on this since September.

The reason why we focused on corporate governance here is partly in recognition that about 90% of all major research work in AI and more than 95% of applications that you and I would encounter in day-to-day life are managed and delivered, dreamt up and invested in by the private sector. So we have this wealth of engagement and design power, marketing power and technical understanding that rests in the private sector. Given that, we really wanted to take a hard look at how authority, management and governance worked inside private organisations as the key players that are actually using and deploying these systems.

So that's kind of the first answer to the question. The second answer is we do know and we're keenly aware that there's a couple of key gaps going on at the moment with corporate leaders. One is a general sense of a lack of awareness on current obligations and even on the actual use of these systems. And the second is, as you say, this increasing set of calls for regulation, many of which are untethered from actual policy or legal experience in these areas. So we're hoping to bring those things together through this work.

Certainly the calls can be pretty alarming. And when you get into the detail of them, there's a lot of nuance in these discussions and this report fills in a lot of those gaps.

So in terms of looking at risks, we've heard for the last few years that there's a range of risks posed by AI, and we keep hearing of them. We've got evidence of algorithmic bias entrenching inequality and discrimination, alarming things around social media algorithms undermining our democratic processes. And here in Australia, we've seen the pretty horrific impact of robodebt, which is obviously less about machine learning and more about what effective oversight of automated systems looks like.

So going back to the report we published this week, what new ideas does this report contribute to our view of risk and AI?

Yeah, this is a really important and interesting part of shaping the narrative around artificial intelligence—what we really mean when we say risk.

One of the key and critical distinctions that we drew—and I should, by the way, here recognise a person who isn't able to be on the webinar, but is the lead author of this work, our colleague Lauren Solomon—this is really her deep insight. Unless you are really clear about distinguishing between a harm and a risk, you can use the word risk in ways that really take away the human being or take away the kind of irreversible damage that AI systems can do.

So the first key thing that we do is really take that point to heart and draw a clear distinction between a harm to an individual or a group, which is in many cases irreversible and hard to compensate for, versus a risk, in the sense of something perhaps financially quantified, potentially far in the future, and potentially falling more on an organisation or a group, which kind of dehumanises what could happen.

So that's the first thing—really focusing, like many others have done in this space, on the fact that there are real people involved when systems like the robodebt system go wrong.

The second big movement for us was that we did find that when people talk about risk and harm in AI systems, they generally provide a laundry list of a variety of different things that can go wrong or do go wrong or how people get harmed.

We found that talking to policymakers and particularly the corporate leaders that we engaged in this project—so the corporate leaders we defined as non-executive directors and senior executives leading organisations using or planning to use AI systems—they didn't have an organising concept for how AI risks develop and what those components look like in terms of both the harms to individuals and the risks to organisations.

So we spent a lot of time coming up with essentially a typology, which looks a little bit like this. The first is that a lot of AI risks and harms flow from an AI system failing: it doesn't do what it's supposed to do.

A completely different source of risk that can lead to similar harms is when an AI system is used in malicious or misleading ways. But obviously, the intention and the failure points of those two sources of risk are quite different.

The third is thinking about overuse, inappropriate or reckless use, and the downstream impacts of AI systems as externalities.

By stretching these out into different harm categories, you can see in that second top box there—biased performance—that's where your algorithmic bias comes from. It's when the errors of an AI system are distributed in such a way as to harm a certain group of people and not others.

If that harm is distributed across a protected characteristic, that is in breach of discrimination law in Australia, and so we produce examples there.

The same can be done for malicious or misleading systems. The Trivago case was an example of a misleading algorithmic system that produced really bad financial outcomes for consumers—so, dark patterns. But you also have other sources in here.

And then there are a lot of broader society-wide and environmental impacts from overuse. But there's also just the use of AI systems when it's not warranted.

I think we're seeing a lot of that where systems are not being used maliciously and they're not failing—they're working as intended. But do you really need to capture everyone's licence plate entering your car park and match that up against a store card and gather and hold that information in order to do the job your store is doing? That's a really important question that currently isn't really covered by our law.

Obviously, in the report, there are so many risks that organisations are grappling with. So can we get practical? That's something this report does incredibly well.

Thinking about organisations, how can a generative AI system pose risks to an organisation? And how are companies governing those risks at the moment?

Yeah, maybe I'll just jump back a few slides to show the data we've gathered on where organisations are using AI, because I think that gives a good insight into where those risks come from with an example like generative AI.

We found that really two-thirds of Australian organisations say they are using or planning to use AI in the coming year.

Now, when we dive deeper into this, I could not find a single organisation that wasn't using AI when you think about how employees are actually going about their day-to-day business.

We found that about half of the people that I spoke to who said they were using generative AI at work had not told their bosses about it.

That stacks up, it's about the same order of magnitude as recent research from February this year, which shows that about 30% of professionals who report using generative AI at work haven't told their boss.

You can kind of understand why—when people stumble across something that feels magic and saves them a lot of time, it's uncomfortable to tell your supervisor or manager, "Gosh, I actually spent a third of the time that I used to spend on that task," because often efficiency gets eaten up in other tasks in different ways and you're not quite sure whether or not that's allowed or appropriate at the moment.

The second thing I'll say is that when we think about risk at the organisational level, you are often trading off your risk appetite—what you want to gain out of it—versus what risks you're incurring.

You can see here on the right-hand side of this diagram that business leaders—senior executives in the dark blue versus non-executive directors—had quite different perspectives on what they expected the benefits to be.

The non-executive directors there were really focused on customer experience, whereas the senior executives were really focused on business process efficiencies.

So it's interesting to think about what risks you might take in different areas in order to get that upside.

Just in terms of the data here, if you look across the top five uses that our survey revealed in terms of where AI was being applied at an organisational level more than the individual employee level, three of the top five are really touching important stakeholders.

So customer service, marketing and sales, and human resources are all systems that can make really critical decisions on behalf of the individuals that interact with your company.

It was in that kind of view that we started to think, well, how do organisational risks in this area evolve?

I'll just step forward to this example here for generative AI that I think illustrates really nicely the risks of using this in your organisation. About two weeks ago, one of the national organisations that did telephone help for suicide prevention in the US, called Helpline, introduced a bot called Tessa.

They'd been testing it since February, but about two weeks ago, they said, "Actually, we're going to fire 120 volunteers and let go all our Helpline staff because Tessa can replace them."

Now, what was terrible about that, for someone like me that's worked in other areas, is that this wasn't just about whether Tessa was better than the Helpline staff. It was that unionisation of those employees was a threat to the wage bill of the company, so they said, "No, we're going to replace them with a bot."

About a week, 10 days later, they shut down the bot because it just could not perform. It was producing really terrible outputs that put people at risk.

Obviously, when you're talking about suicide prevention, that kind of edge is just incredible to think about as a live example.

The other thing I want to mention is that we do see that examples like that amplify risks in three different ways.

So that Tessa example, first of all, it just provided a terrible service. That's bad for your commercials, it's bad for your operational efficiency. If the AI system just doesn't perform as intended, you're going to lose business, you're going to be less efficacious just because it's a worse product.

Second, you're going to expose yourself to some pretty big regulatory risks, particularly if you get into any of those areas where there are duties on your business to perform at a certain level, or if those errors are distributed in ways that result in unlawful discrimination.

Third, you get a reputational hit because those headlines look pretty bad. And of course, these risks often co-occur, all three of them.

Just to mention, Sophie, to finish up on this point, a really interesting finding of our report was that when you ask organisations and executives in general what they think about the risk of these types of things going wrong, people with less knowledge who are just coming to the party with AI will tend to cluster their answers in the middle of the spectrum. It'll be a bell curve where most of the answers are low to moderate risk.

But once you start speaking to organisations and executives, corporate leaders who have spent a lot of time or more time with AI systems, the distribution goes bimodal. You get quite a lot of people clustering in the very low area because these are systems either that they're very used to and maybe they're a bit complacent, or on the other hand, they are genuinely low-risk systems like mapping, optimisation, etc., that are not touching stakeholders in a way or they're not core to the business.

But at the other end of the spectrum, you actually see quite a big spike in people thinking, "Oh, no, there are a whole bunch of systems here that I view as very high risk in our organisation."

For people like us at HTI who also work heavily in the policy space, that's a really interesting and positive finding because it means that risk-based regulation could work really well, because you can separate out those different use cases and create a clear line between those systems which are lower risk and those which are higher risk. It also shows that there is that recognition evolving in the market.

And that, of course, as the head of the Policy Lab at HTI, is something I'm thinking a lot about—this question about regulation.

So, Nick, as I've mentioned before, and I'm sure it's at the front of a lot of people's minds on this call, we've got all these calls for regulation and what that might look like. Can you talk to us a little bit about how the report deals with these calls for regulation and what regulation might look like?

Yeah. I think it's first important to recognise that despite a lot of chat about the risks and awareness of the risks, there's been very little action actually going on in terms of organisations changing their behaviour and investing in governance systems to deal with these risks.

That's a key finding of the report—that essentially corporate governance of AI is unsystematic, unstrategic, and unequal to the risks that have emerged.

This is from some McKinsey data, global data, but it's entirely backed up by what we've been looking at as well.

I'll take it to the point about existing obligations. The fear that we have and that I think really crystallised during our interviews and workshops was when people hear public calls for regulation of AI, their assumption is that there is currently no regulation of AI. So there's this kind of unstated sense of, "Well, if it's needed in the future, there must not be much there today."

While it is true that in Australia, we don't have AI-specific laws, we do have a range of existing laws of general application that span a huge range of areas that are directly applicable to how organisations should be managing and governing their AI systems.

From a corporate governance perspective, of course, the most important of those are the duties that apply directly to the director under sections 180 and 181 of the Corporations Act, as well as the common law fiduciary duties that are owed to the company. Those are due care and diligence, good faith and proper purpose, and they're about the kind of reasonable awareness, skill and capability that you bring as a director.

A lot of non-executive directors we spoke to hadn't really thought about their directors' duties as starting to encompass these kinds of critical risks and what could go wrong with AI, in the way they might have with, say, cybersecurity.

AI system use is not just growing, it's also becoming more core to organisational business models. So it's not just about saying, "Oh, do directors need to be aware of the fact that we're using AI in recruitment?" It's the fact that our company is using AI in multiple areas, often mission-critical areas that become very strategic, very important and expose us to a whole range of risks, which could engage those duties.

Beyond the directors' duties and those Corporations Act duties, we have a number of areas that we present in the report where there are really important specific legal obligations to look out for because they are particularly pertinent to AI systems. So consumer protection, particularly around misleading information and unfair dealing, cybersecurity, anti-discrimination, duty of care, work health and safety, and of course privacy and data use.

That's because, first, AI systems tend to engage data sources from a wider variety of areas than your traditional IT system, and also because as soon as you are using an AI system to solve problems that are directly consumer facing, you are almost by definition pulling in personal information and sometimes sensitive information, and you are exposed to a higher burden of data management.

If those systems are not yours, they're not in your control, but you're still responsible for them, you need to be really careful.

Another thing we found on existing obligations is that both senior executives and company directors weren't really on top of how their organisation was using third-party services that deployed AI, and they were not at all aware of how the risks and legal obligations applied through those third-party services as well.

Did we lose Sophie?

Right, well, in that case, Sophie might have had some internet problems. I might just take us down to one final question that I know she's going to ask, which is around what can we actually do with all of this.

It's really important that we give people a way forward out of this kind of, "Gosh, people aren't doing enough." It seems that corporate Australia is really at the early end of the maturity curve in dealing with the risks and harms we can already see. And yet, the explosion of use is opening up a big gap that presents commercial, regulatory and reputational risks.

We came down to these four actions that we thought were particularly pertinent. The first one really comes out of the fact that directors and senior executives repeatedly told us that they didn't think their organisation had strategic expertise in artificial intelligence.

Many of them said, "Look, we do have quite a good data and analytics group, we have some technical assets, some of which we borrow, some of which we outsource, but internally across teams like procurement or HR, particularly across senior management and definitely on the board, we're not sure how to leverage this, what it means, where the risks are, how it fits into our strategy."

Only 10% of the people we surveyed or spoke to even had AI in their strategy or an AI strategy at all. So these first two points are basically around skilling up on the strategic side of artificial intelligence—not becoming data scientists and knowing how to specifically deploy or operate it, but knowing how to decide when it's appropriate and how to govern it well internally.

Literally just having an AI strategy which sets out your risk tolerance, your risk appetite, was critical for board members. That was the biggest request from boards—we don't see an AI strategy in our organisations and we want that present to help us do our job as company directors.

The third thing is that when we asked the people we surveyed, about 268 people, those who were currently using AI systems, how they governed those systems, about a third of them said they had some form of assessment or governance system in place.

But when we dived into what those systems actually looked like, we found that they were hugely diverse, fragmented and really unsystematic. The number one answer was "we don't have a system in place", which covered two-thirds of the organisations we spoke to, and of the one-third that did have a governance system in place, a lot were using just an Excel spreadsheet to record risks.

This includes some of Australia's biggest corporations using a spreadsheet just to record risk controls. Many organisations reported that they used a form of governance that I've named guru-led governance, which is where there's one person in the organisation who everyone points to and says, "You know about AI, should we do this project or not?" That's remarkably common, and it was a big feature of one of the world's biggest tech companies that we spoke to.

A lot of companies reported that they were sending AI plans through either IT processes that weren't suited to the specifics of AI, or to legal teams and privacy teams for sign-off where those legal teams and privacy teams had no training and no real experience with these systems.

So that action three is really around starting to cut away at those deficiencies in terms of getting something that's more integrated and fit for purpose.

Finally, as we might pivot towards discussion with you all, we know from parallels in other areas of governance and management, particularly work health and safety and financial services, that at the end of the day, it really does come back to how people behave, even when they're not being watched and even when they don't have to fill out a compliance checklist.

Having a really human-centred AI culture where your frontline staff are trained and know when an error affecting a customer is actually a big deal that should be investigated and could be systematic, as opposed to just an unfortunate "computer says no" outcome—those things are really important to be inculcated and part of the way that organisations work.

Unfortunately, in some of the big tech-driven companies that we spoke to and work with on this, business model drivers are currently at odds with a lot of the kind of human-centred AI approach that we'd like to see. That doesn't have to be the case here in Australia for companies using and deploying AI.

I had some really encouraging conversations, particularly with startups who were basing their organisation around AI-driven processes and platforms, and their first question was, "How do we make this human-centred? How do we build an organisation around these platforms that is completely safe, inclusive and protects people, particularly marginalised people and people with less voice?"

So I'm encouraged by this, but we are at the very beginning of this journey.

So I promised, maybe even only tacitly, that we'd move over at half past.

Sophie, are you back with us?

I am back. My apologies for dropping out. But yeah, we do want to switch in and we want to hear what you have to say. I've already got a couple of questions in the Q&A, so I might just draw everyone's attention to that. We've got a pretty good big group now. So perhaps if people can pop their questions into the chat and we'll do our best to get to them all.

So Chris has put a question in that I think relates to the point you were just making, Nick, that you were encouraged by some of the feedback you'd had from organisations. Chris is, I guess, not so encouraged, given we've just had this consultation. The question in the chat, Nick, which I think is a really pressing one for us, is that we've had some Australian federal government consultations already. Chris pointed to the one last year, which a lot of people contributed to and to which we haven't had a government response yet.

So Chris has commented that he feels like we're starting this conversation again. Can you reflect a bit on that, especially because you have been talking really practically, and what this report does is give a really practical day-to-day view of that regulatory framework and some of the gaps. Can you speak a little bit about what you think this latest government consultation will do to contribute and move this conversation along?

Yeah, I think the current consultation is, first of all, an indication that the minister and the department are finally moving into a bit more of a policy focus rather than an industry focus in this area. This seems to me to be a signal that perhaps, unlike previous consultations, this is feeling like now there's the resources and the focus to move on this.

Chris, you probably might say, well, that's a little bit belated and not using the work that's already been done. I'm sure much of that will be rolled in. But I have been encouraged by the fact that the department has certainly been much more proactive in reaching out and engaged in this.

I might also ask Ed to comment on this because I want to make sure that we put this regulatory discussion also in the frame of another project that we're working on, which is the future of AI regulation, which goes hand in hand with corporate governance.

Ed, how do you view this recent consultation? And is it meaningful? Is it a signal of actual change?

Look, I mean, I think it's positive. I think the federal government has not said anything really significant up until this point about what it wants to do by way of reform in AI. So this is the first, I think, major marker they've put. So that's a good thing.

But I also think I share some of the frustration of Chris and others that there are some really important reform processes that have set out clear, actionable reforms, and they're just sort of sitting on a shelf somewhere.

I say that with the bruises of having worked at the Human Rights Commission, with Sophie Farthing and Lauren Perry and others, to deliver the Human Rights Commission's report there, which has some really clear reforms. But there's also the privacy reform and others.

So I think what the government should do is two things. One, it should do a clear audit about what it wants to take forward in terms of really carefully considered reform that is already on its plate. And two, I think it should identify where those gaps are.

One of the things I really like in the new discussion paper is that it kind of acknowledges, if tacitly, that we need to move from high-level ethics principles to practice. That's something that's really exciting.

We've seen, for example, here in New South Wales, the AI Assurance Framework, which applies to government agencies, is designed to do just that. It's not perfect, but it is, I think, a really good first effort in that regard.

Do you mind if I just bounce straight into Charles's question in the chat from there?

So, Charles wrote in the chat that he leads a government legal working group in the New South Wales government, and asked a question about the role of government in legislating codes of practice, etc.

I think we're firmly on the side here, or at least I can speak for myself here, that it is time for government to step in and create a series of quite firm guidelines that still allow organisations to do all the great innovation we want, but provide really positive rights and positive guardrails for systems that can go awry.

Even if that regulation is just forced reflection and transparency, as is the case with most of the AI Assurance Framework, really.

So basically forcing organisations or saying that in order to deploy a system which is of a certain risk, you need to have done a review, you need to have thought about it, you need to have registered it and gone through this process.

In the model law, the facial recognition model law that Lauren Perry and Ed and I worked on and published last September, we proposed a model law for one subset of AI—so facial recognition technologies—that would be risk-based and would clearly prohibit a certain set of system uses. Not the actual underlying technology, but the use in certain cases, so for mass surveillance and public surveillance, etc., and biometric facial analysis and drawing characteristics for anything other than entertainment.

So I certainly think there's a case for government to put those kinds of high-risk guardrails in place, but also to promote the use of instruments like the AI Assurance Framework and, even where instruments are not mandatory, to support the use of international standards that really firmly encourage organisations, whether through the market or through legislation that makes those standards mandatory, to put in place governance systems that are fit for purpose.

Towards the end of this year, we will see the publication of the ISO 42001 standard, the Artificial Intelligence Management System standard, and that is a set of guardrails and acti...

Professor Nicholas Davis and Sophie Farthing led a thought-provoking discussion on the human implications of corporate use of AI. The webinar presented the findings of HTI's ground-breaking report: The State of AI Governance in Australia by Lauren Solomon and Professor Nicholas Davis.

Sophie Farthing, Head of Policy Lab

Professor Nicholas Davis, HTI Co-Director

Professor Edward Santow, HTI Co-Director

Byline: Neda Dowling
