• Posted on 2 Sep 2022
  • 48-minute read

Artificial intelligence is transforming our world. It’s revolutionising how governments and companies make decisions.  

AI aims to remove human prejudice and produce better, data-driven decisions. But too often, the reality is far from this vision, with horrifying consequences. We've seen algorithms make it harder for women and people of colour to get a home loan or a job. And 'Robodebt' involved a faulty system of government debt collection that pushed thousands of the most vulnerable people in our country into poverty or worse. 

In this session, Dr Alondra Nelson (head of the White House Office of Science and Technology Policy) joins Prof Edward Santow and Prof Nick Davis (co-directors of the Human Technology Institute) to discuss how we can ensure human values are at the heart of how new technology is designed, used, and overseen. 

Watch the recording: https://www.youtube.com/watch?v=ykzDnpz86Pg

Descriptive transcript

Hello, everyone who's joining us. We'll just wait for about another 30 seconds while a few more people enter the virtual room and then we'll begin.

All right, we have around 120 participants already in the virtual room and I know that's going to keep climbing, so I will begin today's event because we are really very excited about our special guest that we have with us today.

Before I begin, I'd like to acknowledge that for those of us in Australia, we are all on the traditional lands of First Nations peoples. This land was never ceded, and I want to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands the UTS City campus now stands, which is also, of course, where I am joining from today. I want to pay respect to Elders past and present and acknowledge them as the First Nations owners, with an ongoing connection to this land, its waterways, and culture. They're the traditional custodians of knowledge upon which this university is built. I further acknowledge the traditional owners of the country where all of you are joining us from and pay respect to their Elders.

I'm Professor Verity Firth and I'm the Pro Vice-Chancellor, Social Justice and Inclusion at the University of Technology, Sydney, where I lead our social impact and engagement. It is my pleasure to be joined today by a world-leading expert on the human impact of technology, Dr Alondra Nelson. She's also the head of the White House's Office of Science and Technology Policy. This webinar will also feature the co-directors of the Human Technology Institute, Professor Ed Santow and Professor Nicholas Davis.

But there are a couple of housekeeping items I need to let you know about first. Today's event is being live captioned. To view the captions, click on the "CC" closed caption button at the bottom of your screen in the Zoom control panel. We're also posting a link in the chat now, which will open the captions in a separate browser window if you would prefer. If you have any questions during today's event, please type them into the Q&A box, which you'll also find in the Zoom control panel. You can like questions that others have asked, which will push them to the top of the list, but please do try to keep them short and relevant to the topics we're discussing here today.

Artificial intelligence is transforming our world. It is revolutionising how governments and companies make decisions. AI is increasingly everywhere, from banking, recruitment, law enforcement to social welfare. The promise of AI is that it will remove human prejudice and produce better, more data-driven decisions. Sometimes this is true, but too often the reality is far from this vision. We've seen in all the areas I just mentioned how AI can replicate and even worsen existing inequality. The consequences can be horrifying, especially for our human rights. We've seen algorithms make it harder for women and people of colour to get a home loan or a job, and in Australia, Robodebt involved a faulty system of government debt collection that pushed thousands of the most vulnerable people in our country into poverty or worse.

At this crucial moment, UTS has established the Human Technology Institute. The HTI is working with leaders from civil society, government and the private sector to build a future that applies human values to new technology. I especially want to acknowledge some key collaborators who have been with us from day one, or even day zero, as we've been building this new institute. They are Gilbert + Tobin, KPMG, Atlassian, LexisNexis, Humanitech, Transport for NSW, and Microsoft. We'll have a lot more to say about our wonderful partners at our formal launch event in October.

Today, we will be talking about humanising technology and how we can ensure that human values are at the heart of how new technology is designed, used and overseen. It is now my honour to introduce the founders of the Human Technology Institute, Ed Santow and Nick Davis.

Ed Santow is Industry Professor, Responsible Technology at UTS, where he leads our initiative on building Australia's capability on ethical AI. From 2016 to 2021, Ed was Australia's Human Rights Commissioner, where he led the Commission's new work on AI and new technology, among other areas of responsibility. Welcome, Ed.

Nicholas Davis is Industry Professor, Emerging Technology at UTS. From 2015 to 2019, Nick was Head of Society and Innovation and a member of the Executive Committee at the World Economic Forum in Geneva, Switzerland. More than anyone else, he has developed the idea of the Fourth Industrial Revolution and how we as a world community should respond. Welcome, Nick.

And now I'm very excited to introduce our guest of honour, Dr Alondra Nelson. Dr Nelson leads the White House Office of Science and Technology Policy and is Deputy Assistant to President Joe Biden. As a scholar of science, technology, medicine and social inequality, she has contributed to US national policy discussions on inequality and the social implications of new technologies, including artificial intelligence, big data and human gene editing. Welcome, Alondra. I'm now going to hand the proceedings over to Ed.

Thank you so much, Verity. Gosh, this is such an honour to have you, Dr Alondra Nelson. I can't hide my enthusiasm and excitement. But I'm going to start with kind of a mixture of the personal and the professional because most careers take a winding path, but very few of us end up anywhere near, let alone in, the White House. So many people I know take enormous inspiration from you and your role leading the OSTP. Can you tell us a little bit about your path to the White House?

Yes, it's been a winding path indeed, but first let me say thank you to you and Verity and Nick for the invitation to be here. It's really a pleasure to be with you all and I have learned so much from your work, Ed, about these exact topics and so it's a real honour to be here with you all today. So let me just say at the top, the headline here is that I never expected to be working in the White House and so I pinch myself most days as I go to the White House campus and find myself there, but it is a great privilege and an honour to be doing public service and to be doing it in this extraordinary Biden-Harris Administration.

So, you know, it's not a total accident. I mean, as Verity said in her very kind introduction, as a scholar, as a researcher, most of my work has been around, you know, effectively science and technology policy, but really thinking about the sort of social implications of science and technology. And, you know, more recently I have been working on a book about the White House Office of Science and Technology Policy during the Obama years, so the office that I came to lead, at least temporarily, right now is an office that I knew very well as a kind of historical and sort of organisational structure. And now I'm there every day working with an incredible team of about 140 people on everything from AI to quantum science to climate innovation and energy science to thinking about, you know, how we, you know, get more diverse and innovative STEM fields.

You know, I think I'm the second woman ever to lead the office, on an interim basis, and certainly the only person of colour ever to lead the office. So I do understand that my appointment by President Biden in this role is historic, and I also bring to the work of science and technology policy an appreciation both for the ways that technology can be so net positive, very much productive and generative in people's lives, and for the ways that technology, science and innovation can cause harm, historically and in the present, for certain communities, particularly disadvantaged and underrepresented communities like the African American community, of which I'm a member.

So the great thing about this administration, which on day one issued an executive order on equity and on the work of government being used to drive equity in American society, is that there doesn't have to be any daylight between thinking about science and technology policy and thinking about issues of equity and inclusion and democracy. There's a real understanding and an attempt to draw these things together. So I think it's really the particular vision of this Administration that made it possible for someone with my particular interests and trajectory to be a part of science and technology policy making in this really wonderful moment.

I think that's a wonderful way of setting up the conversation and I think what you've done is you've highlighted some of the things that we're all really excited about when it comes to the rise of AI and other new and emerging tech, but also some of the things that we should be fearful of. So I'm going to just lean in more to that secondary category first. For people who are new to this area, how can unfairness or even discrimination arise when artificial intelligence is being used to make decisions?

Yeah, that's such a great question because, you know, we see—I know probably many folks here have been following or have used DALL·E 2, so we see these examples of the use of data science brought into machine learning and artificial intelligence that are magical, that are entertaining and, in the case of some DALL·E 2 outputs, even beautiful. But, you know, AI as we use it in day-to-day life and as it really impacts people's lived experience is often a lot more brittle, it's not as elegant and as beautiful as something like DALL·E 2, and so we have a long way to go, and part of how that manifests itself is in forms of discrimination.

I mean, part of how we're using AI is as a kind of robot gatekeeper to various kinds of resources and services in, you know, Australian society and in US society and, you know, it is often the case that because the data that we use to train machine learning and AI is often historical data, that this kind of historical precedent can embed past prejudice into these technologies and enable present-day discrimination. So as much as we would like to think that there's a kind of ultimate objectivity that comes with artificial intelligence that frees us from that, we're finding that it increasingly bakes it in—bakes in discrimination and bakes in discriminatory patterns from the past.

Verity referenced a few of these in her introduction, you know, that there are hiring tools that learn from a company's prior employees. So we might think about a very gender-segregated field like computer science, in which past successful employees in a place like the United States or a place like Australia have almost always been men and, in many instances, men of European descent. So if you're trying to train an algorithm on that historical data about what success looks like for this particular field because you want to have "objective recruiting", that means that women computer programmers, for example, may fall outside what that algorithm predicts a well-qualified candidate for this particular role to look like.
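The dynamic described here can be sketched in a few lines of Python. The data and the scoring rule below are entirely hypothetical; the point is only that when the historical hires are overwhelmingly men, a naive model that scores candidates by resemblance to past hires will rank an equally experienced woman lower, purely on gender frequency.

```python
# Illustrative sketch with hypothetical data: a naive "hiring model" that
# scores candidates by similarity to past successful hires. Because the
# historical hires are almost all men, gender acts as a predictive feature.
from collections import Counter

# Historical "successful hires": (gender, years_experience) — hypothetical.
past_hires = [("M", 5), ("M", 7), ("M", 4), ("M", 6),
              ("M", 5), ("M", 8), ("M", 6), ("F", 6)]

def score(candidate):
    """Score = how often candidates 'like this one' were hired before."""
    gender, years = candidate
    gender_freq = Counter(g for g, _ in past_hires)[gender] / len(past_hires)
    exp_match = sum(1 for _, y in past_hires if abs(y - years) <= 1) / len(past_hires)
    return gender_freq * exp_match  # gender frequency dominates the score

# Two candidates with identical experience:
alice = ("F", 6)
bob = ("M", 6)
print(score(alice), score(bob))  # Bob outscores Alice on gender frequency alone
```

Nothing in the code mentions discrimination; the skew is inherited entirely from the training data, which is exactly the pattern being described.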

Obviously, there are issues around housing. You know, mortgage approval algorithms are used to determine creditworthiness and in the United States they use home zip codes often and on the face of it, a census tract or a zip code should be a kind of neutral data point that we can place into an algorithm to help us sort of know more or better, but because of the extensive generation upon generation of housing discrimination in the United States, part of what zip codes do is they can be correlated with race, they can be correlated with poverty, they can be correlated with forms of historic racial segregation and ethnic segregation and they really extend decades of housing discrimination into the digital age.

So there's been other examples of this, but I think that, you know, the challenge we face—and I think the opportunity for innovation actually—is facing head on these challenges and understanding that the technology alone is never going to solve some of the big problems and challenges that we need it to solve and that we can also think about innovation in a way that leans into equity and democracy, so that if we truly think that this technology, this technique, is innovative, there are things that should come with that and that should include being maximally beneficial and minimising harms to folks.

That's an incredibly useful description. I want to just kind of zero in on a point you just made. I think what you said was technology alone won't solve all of our society's problems and part of the issue we have to wrestle with there is that technology exists as part of decision-making systems.

So very briefly I want to give a personal story about how I as a human rights lawyer first saw how artificial intelligence can threaten human rights and I saw this as a lawyer. The situation arose over 10 years ago. The state police here in New South Wales were using an algorithmic tool to create a list of young people who might be at risk of, to use their terminology, "descending into a life of crime". So, the police targeted the kids on this list. Police officers would come to their homes between midnight and 6am, officers would check on these kids at school and at work and, understandably, they hated it. The kids hated being on this police list.

Let's put to one side for a moment whether that is even an acceptable approach to policing, but I want to focus on which kids were on that police list because over time we noticed that literally all of our clients had dark skin, every single one of them. Later it emerged that 55% of the young people on that list were Indigenous, even though less than 3% of the population here is Indigenous. So that seemed a clear case of precisely the phenomenon that you've just described, where the police technology reflected and then entrenched an historical injustice.
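A quick back-of-envelope calculation makes the scale of the disparity concrete. Using the two figures quoted above:

```python
# Disparity check using the figures quoted above (approximate, since
# "less than 3%" is an upper bound on the population share).
share_on_list = 0.55        # Indigenous share of young people on the police list
share_of_population = 0.03  # Indigenous share of the NSW population (upper bound)

overrepresentation = share_on_list / share_of_population
print(f"Overrepresentation factor: at least {overrepresentation:.1f}x")
```

Indigenous young people were at least eighteen times more likely to appear on the list than their share of the population would suggest.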

Again, being very personal and not particularly professional, a decade on when I think about that, I don't really think about it as a lawyer, I think about it as a human. I'm sickened. These were kids. Some were as young as 11 or 12. Many had never been convicted of anything serious and yet this system, this algorithmic system, resulted in really significant intrusions in their basic rights. To some of them it would have been the defining experience of their life in the worst possible way. They were traumatised, it was terrible.

Have you come across a particular situation in the US that keeps you awake at night? Is there something like this that is your kind of origin story about these concerns?

Yeah. You know, sadly, there's been too many and I think, you know, thank you for sharing that story and also thank you so much for the work that you did at the Australian Human Rights Commission. I'm just such an admirer of you and your work, that work and this new work as well.

You know, sadly, in the United States, particularly with regards to black and brown communities, we know that there is this history often of disproportionate negative impacts of policing and, you know, the challenge that we face in this moment is that it's carried into the digital age.

There are many examples and we are trying to think about these in our policy making. For example, in Chicago there was an algorithm used by police that reused previous arrest data and the outcome of this was that it repeatedly sent police to the same neighbourhoods again and again, predominantly black and brown neighbourhoods, even when those neighbourhoods didn't have at the moment the highest crime rates. So again, this is that kind of historical into the present challenge.

Some of the challenges are around the historical data, but others are about privacy and consent. The challenge we are facing now is growing kinds of surveillance in communities that may not feel empowered to be vocal about asking questions.

We've had a few instances in the United States in which facial recognition systems have been installed at entrances of housing complexes, to assist law enforcement, to monitor when people are coming and going. But the outcome of that is a kind of continuous surveillance of certain kinds of communities, in this case in a public housing authority—so already poor, under-resourced communities really subject to that kind of persistent surveillance.

And sometimes, because people are poor, we think we don't have to ask their permission. Like, we would never think of doing that kind of automated surveillance in more well-off communities without consent. So there are consent issues that we need to think about as well.

There's been so-called predictive policing systems that claim to identify or "predict" people who could be aggressors and in these cases, we have the privacy challenges, the historical precedent as a proxy for the present or the future, and in some instances we just have the black box in which the inability for communities to be able to ask for redress or to ask questions or to ask for an explanation about how a certain system reached its conclusion.

So I think there are a few—you know, we talk about AI and civil rights in the United States context or human rights internationally or democracy issues. It is this kind of Gordian Knot of lots of issues that we care about in government, including issues of consent, of surveillance, of privacy, and of equality.

I think that's a great description, a Gordian Knot. So now you and your colleagues in the Biden Administration are responsible for solving these problems. Can you tell us a little bit about how the Biden Administration is approaching these problems of AI?

Well, I would say we and the world, including you and your colleagues at the new Human Technology Institute, are going to have to wrestle with this, right? These are big governance challenges.

I think government can do a few things. When I came into the Biden-Harris Administration, so this was late January of 2021, the prior Administration had just stood up what's called the National AI Initiative Office, and so it fell to me and my colleagues to stand that office up and to implement the framework that had been passed by Congress. That framework included a lot of work for government around responsible AI, tasking departments and agencies in the Federal Government with working together to define what responsible AI is in practice and to come up with discrete ways that different agencies and departments would move that forward.

So that National AI Initiative Office sits within the Office of Science and Technology Policy that I lead right now. So we've got that project, which is really trying to coordinate and understand and map and also leverage the uses of or potential uses—current and potential uses—of artificial intelligence for government.

There's another piece of that work which is what's called a research resource taskforce, which is trying to broaden access to resources for automation.

Certainly in industry, part of the challenge that we face is that there's very often a homogeneous group of folks who are making algorithms, who are designing automated systems, and who have access to the kind of compute and data resources that are really driving the AI turn, and this is an attempt to democratise those resources and make them available to researchers at smaller institutions, emerging institutions.

The theory of the case here is that we can do a better job with all sorts of technologies if we have more people at the innovation table, people who think about some of the challenges we face around discrimination, who maybe even have experienced it firsthand, as part of the process of creating design parameters and creating visions for what technology looks like in the world.

So that's part of what we're doing. I think government can also be a bully pulpit—we can show leadership by offering a vision of what we want technology and science to do and be in the world.

We've been really excited over the last few weeks because we've had some historic legislation pass in the United States, including something called the CHIPS and Science Act, and there's also been what's called the Inflation Reduction Act, which has the biggest US investments in climate science and energy innovation, energy technology, ever in the history of the United States.

There's been a few other pieces of legislation as well. But taken all together, what they do so powerfully is say that the Biden-Harris Administration has a vision for how science and technology and innovation can be used in the world and how it can create jobs, how it can be used to support institutions that aren't typically supported at the same levels as other, larger institutions—be those small businesses versus large businesses or minority serving institutions as we call them in the United States or historically black colleges and universities—and making sure they're a part of this new science and technology innovation ecosystem as it's being built out.

So part of that leadership and vision piece is different from a regulatory piece and, at its best, government can offer us—often through legislation, but not exclusively—these sorts of visions for what technology might look like at its best.

Part of what we've been trying to do at OSTP is develop what we've been calling, or what's become called, the AI Bill of Rights. Actually, we picked another name for it, but through extensive consultation with the interagency in government, with the American public and with folks in industry, that's the name it has come to be known by.

We're really trying to, over the last year, think about a way to design and develop the use of automated systems and ways that ensure that technologies really promote and reflect and respect democratic values.

What's great about the Bill of Rights framework, which is one of the foundational documents of the United States, is that it is these high-level aspirations—that we should expect and we can envision through our aspirations a world where systems are safe and not harmful, where algorithms aren't developed and used and deployed in ways that place the American public and other publics at risk, that algorithms are used in a way that preserves our privacy, preserves our data and are used—our data is used in accordance with our wishes.

Those are hard things, but I think, as we're building out these new systems—and let's be very clear, DALL·E 2 notwithstanding, a lot of automated technologies and AI machine learning are very much in their nascent stage—what a tremendous opportunity to be able to work upstream to create systems, parameters, conversations and ideals for the technologies and for how people should be treated and engage with them, as opposed to waiting for, as the kind of examples that you talked about, Ed, and I shared, these kind of downstream challenges and poor, bad outcomes for certain communities then to be our response.

So, I like to think that we could take this as an opportunity to really be transformative in how we think about the governance of technology.

That's really informative and it provides a really interesting segue and I'm going to draw on some of the questions that are starting to come through the chat here.

It's sometimes said that there's a global arms race in artificial intelligence. Each country I think naturally brings its values to bear.

So you described initiatives like the US AI Bill of Rights and the desire to promote democratic values and to bake those values into the way in which AI is developed and used and regulated in the US.

As Jess Wyndham has pointed out in the questions, the United States is taking a related but different approach to the EU.

What do you think is the role of countries like the US and Australian Governments in cooperating in this really competitive environment—as I say, this global arms race in AI—and if you think that there is a role, what does good cooperation look like?

So there has to be cooperation, and I think let's be positive and look at this webinar as a kind of example of that, and of course we need to—a lot of where the big, powerful automation comes from are organisations that are multinational, multinational technology innovation companies, and the impacts of them are global and so none of these issues really abide nation state borders, and so we really can't have our kind of cooperation be within those borders as well.

Of course there will be pieces that are very distinct to particular historical communities or particular countries. A concept like the Bill of Rights is very much about harking back to the founding ideals of US American society and so we all will have those particularities.

But I think in the space of AI there are a couple of really great examples in which neither the US nor Australia nor the EU are really driving everything.

One of these is the OECD, which is a coalition of nearly 40 nations that are committed to democratic principles and are trying to work through a few initiatives focused on AI.

Our office, people from our office, have been really proud to participate in the OECD work as part of its network of experts, working together with other countries, other democracies, to think about how, both in our collaboration and in the technical design of technologies and use cases, values like fairness and transparency and safety and accountability can be agreed upon and deployed.

We've been really pleased to be involved in that work, including the standing up of a new framework that was launched a couple of months ago at the International Conference on AI in Work, Innovation, Productivity and Skills, a kind of risk framework for AI systems.

It provides us opportunities to see what's working in other countries or not working, think about use cases and think about ways to collaborate where we can even while having to abide by one's particular national laws, policies and politics.

Another example of the collaboration in the AI space in particular is the Global Partnership on AI, which Australia is part of, and these are cross-sector—this is a multinational but a multistakeholder initiative that is philanthropy and academia and industry trying to think about applied activities and research with regards to AI priorities that really builds out of this larger OECD space.

So those kinds of collaborations, for people here who are international lawyers or political scientists or the like, there's a lot of shifting happening right now in our multinational organisations, but it's also good to see, even as things like the UN continue to try to innovate, that there are these new multinational formations that are also at the same time helping us to think through things.

I would just offer one collaboration that we're doing with the UK because it's open and people can apply for it—a grand challenge on what we're calling democracy-affirming technologies.

Last winter, President Biden held a Summit for Democracy, and part of that was standing up this challenge, and so right now there is a prize—I think up to a million dollars—for innovation around developing safe and effective and equitable systems that really preserve privacy in the use of technology, so this can be everything from differential privacy to other kinds of technical or even theoretical systems that might work.
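Differential privacy, mentioned here in passing, is worth a concrete illustration: the core idea is to publish aggregate statistics with calibrated random noise so that no individual's presence in the data can be confidently inferred. A minimal sketch of the standard Laplace mechanism (all parameters illustrative):

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) noise, drawn as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# E.g. releasing how many of 100 residents are under 30, without any one
# resident's record being decisive in the published number:
ages = [random.randint(18, 80) for _ in range(100)]
noisy = dp_count(ages, lambda age: age < 30, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; real deployments, such as the 2020 US Census, tune that trade-off carefully.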

There's a lot of talk more and more these days about science and technology diplomacy and I think it's a pretty important tool for collaboration and that a lot of the things that we want to do with innovation can be competitive, but also some of the big problems that we want to use technology and innovation to solve are global problems—climate change, for example, really having clean and green energy—and that will require healthy competition, but also quite a lot of cooperation.

That's terrific. So when we talk about cooperation, we've been focusing for a moment at that high level cooperation, but I want to circle back to something you talked about before about people often being excluded from the room, so to speak.

In a moment you'll have a sneak peek from Nick Davis, the co-founder of our new Human Technology Institute, and as Verity said, the institute applies a sociotechnical approach and at its core what we mean by that is bringing together technical experts, people responsible for decision-making systems that use AI and the communities affected by the use of AI and looking really carefully at those groups and making sure that there's no demographic groups that are being excluded.

Is that the kind of approach that you would support and have you seen it being done well?

Yeah, absolutely. I think that's brilliant and I think it's the right approach, so you can tell by my smile, I'm really excited to hear that that's the approach that you all are taking and look forward to following and staying in conversation with you.

I think the sociotechnical approach for dynamic evolving systems is how we have to think about policy making, about governance, about ethical frameworks for how we think about new and emerging technologies.

I've been really encouraged—the US Department of Commerce has this body probably known to many of you here called the National Institute for Standards and Technology, or NIST, and NIST goes back to the 19th century, I think, at least in the United States and it was the sort of measures like how many bags of grain equals a pound. It was this organisation that created the kinds of measurements and standards that allowed us to have commerce and allowed us to agree as a nation and as a world about weights and measures and allowed trade and all sorts of other things that we take for granted in modern society to take place.

So this is a historic organisation that has dealt with almost binary, one-to-one, like we measure something and we create a standard around it.

So AI has been this really wonderful challenge for NIST and they've really risen to the occasion and right now they're in the last stages of creating what they call an AI Risk Management Framework, but for them—and this was a big leap for the organisation—they really moved into the sociotechnical framework, so creating standards for technology is not just about the data and the algorithms, it's also about human and societal factors or how AI systems are used by people in the real world and how the development of these systems can amplify biases that are historical in the data, that are societal, that are personal, and an understanding that the challenges or biases or potential problems of automated systems really require that we pay attention to specific use cases, to design parameters, but also to human and social and organisational behaviour.

But it's really hard to do. It's easier to say, you know, this bag filled with grain equals this many pounds or this many kilograms than it is to think about how we should create the best possible, most rigorous standards for technologies that are really complex and computational and often have humans in the loop.

So I think we will find that—there's a quote from John Lewis, who was a civil rights leader and also a legislator in the United States who died just fairly recently, but he would say democracy is a practice, that it's not a static thing, that democracy is a practice and I think as we want dynamic technologies to be democratic that we need to think about it as a process and a practice, as opposed to something that we will achieve and that we can stop.

So I think a sociotechnical approach is one that is going to be essential for solving the challenges that AI might pose, will be essential for allowing us to leverage the benefits that AI may offer, but also will be essential for thinking about how humans matter and how human organisations and human behaviours are always a part of the technologies we create and use.

I think that's fantastic, and what it also raises is the question of what preconditions help people engage with this massive technological change that is happening all around us.


If you are interested in hearing about future events, please contact events.socialjustice@uts.edu.au.

Find out more about the Human Technology Institute

Democracy is a practice. And we want dynamic technologies to be democratic, so we need to think about it as a process and practice, as opposed to something that we will achieve. – Dr Alondra Nelson

Those who experience the worst effects of algorithmic bias and discrimination are not necessarily engaged in the design of automated systems. These folks and their diverse concerns and experiences should inform the design and governance of these systems. There should be a participatory democracy around technology assessment. – Dr Alondra Nelson

Speakers 

Dr Alondra Nelson leads the White House Office of Science and Technology Policy and is a Deputy Assistant to President Joe Biden. As a scholar of science, technology, medicine, and social inequality, Alondra has contributed to national policy discussions on inequality and the social implications of new technologies, including artificial intelligence, big data, and human gene-editing. 

Prof Edward Santow is Industry Professor – Responsible Technology at the University of Technology Sydney and Co-Director of the Human Technology Institute. Ed leads UTS's new initiative on building Australia's capability on ethical artificial intelligence. From 2016-2021, Ed was Australia's Human Rights Commissioner, where he led the Commission's work on artificial intelligence and new technology, among other areas of responsibility. His areas of expertise include human rights, technology and regulation, public law, and discrimination law. 

Prof Nicholas Davis is Industry Professor – Emerging Technology at the University of Technology Sydney (UTS) and Co-Director of the Human Technology Institute. From 2015-2019, Nick was Head of Society and Innovation and a member of the Executive Committee at the World Economic Forum in Geneva, Switzerland, responsible for developing the theme of the Fourth Industrial Revolution and overseeing the development of cooperative emerging technology policy efforts around the world.

 
