• Posted on 13 Dec 2021
  • 45-minute read

New technology can improve our lives – but there are also profound risks and threats.

This phenomenon is exemplified by the rise of facial recognition technology. For example, using this tech to unlock your smartphone is relatively low risk, but its use in policing could cause significant harm to marginalised groups. 

Research has shown this technology tends to be far less accurate in identifying people with dark skin, women and people with a physical disability.  

The risk of overuse is also significant, because this can result in our sliding into a society with the infrastructure for mass surveillance—a profound challenge to our right to privacy. 

In this session, Aaina Agarwal, Dr Niels Wouters, Amanda Robinson and Duncan Anderson joined Edward Santow to discuss whether the potential benefits of facial recognition technology could outweigh the risks.


Descriptive transcript

Hello, everyone. Thank you for joining us for today's event. All of us beaming into this webinar from Australia are doing so from First Nations land. I acknowledge the Gadigal people of the Eora Nation upon whose ancestral lands the UTS City Campus now stands. I pay respect to the Elders past, present and emerging, acknowledging them as the First Nation owners and their ongoing connection to this land, waterways and culture. I particularly want to acknowledge the Gadigal people as the traditional custodians of knowledge for the land on which UTS stands.

My name is Ed Santow and I am the Industry Professor, Responsible Technology at UTS, where I'm leading an initiative to support Australian business and government to be leaders in responsible innovation by developing and using artificial intelligence that is powerful, effective and fair. It's my great pleasure to be joined today by a distinguished group of speakers: Aaina Agarwal, Dr Niels Wouters, Amanda Robinson and Duncan Anderson.

I want to give a bit of background to this webinar. In my previous role as Australia's Human Rights Commissioner, I led the Human Rights and Technology Project, which explored the human rights and broader social implications of artificial intelligence, or AI. We said really clearly that new technology can improve our lives. We've seen AI enable extraordinary progress in important and diverse areas from healthcare to service delivery. But there are also profound risks and threats.

That phenomenon, that idea that new technology is double-edged, bringing opportunities and risks, is exemplified perhaps most by the rise of facial recognition technology. Many of us now take this tech for granted because we use it to unlock our smartphones and other devices, something that carries some risk but relatively low risk. The uses of facial recognition, however, are limited only by our imagination and there are some more risky areas of facial recognition. For example, when that tech is used by the police to identify someone suspected of committing a crime.

When I was Human Rights Commissioner, I had two particular concerns about this technology. The first is misuse and the second is overuse. Research has shown how facial recognition tends to be far less accurate in identifying people with dark skin, in identifying women and people with a physical disability. If you apply that to a high-risk context, then that problem can become very serious, even catastrophic.

To return to the example I gave before, if the police wrongly identify someone as a criminal suspect, that can lead to very significant violations of human rights. But even if facial recognition never made errors, the risk of overuse is also really significant because that can result in our sliding into a society that permits mass surveillance, a profound challenge to our right to privacy.

So this is the big question, I think, for us: how can we encourage positive innovation that benefits our community while guarding against the risks? I'm leading a research project at UTS to outline a model law on facial recognition. Our aim is to achieve that balance, to set red lines where facial recognition technology should be prohibited or subject to very strict safeguards, but also to encourage positive, safe innovation in other areas because the law should do both of those things.

To begin our discussion today, Dr Niels Wouters will give a short demonstration of his creation, Biometric Mirror, which some of you may have already had a chance to play with. You'll know if you've had a chance to look at it already that Biometric Mirror aims to provoke debate about the legal and ethical implications of facial recognition specifically and perhaps AI more broadly.

The Biometric Mirror application works by taking a photo of your face for psychometric analysis and then presenting you with an analysis of your personality, including characteristics such as weirdness and emotional instability. We think this is a provocative but hopefully a really useful introduction to some of the issues we'll be discussing today.

Before I hand over to Niels, some very brief background. Niels Wouters is a world-renowned designer, researcher, innovator and co-creator of Biometric Mirror. Niels is a Senior Design Researcher at Paper Giant and a sought-after expert on the societal risks and opportunities of emerging technologies. So over to you, Niels.

Excellent. Thank you so much for that introduction, Ed. I can only echo your point that you introduced so eloquently. As a technologist myself, as someone who is really interested in human-computer interaction but also as a trained architect, I really think that innovation can only be responsible and can only be positive if at some point we include the public in those conversations.

When I started looking into facial recognition technologies a couple of years ago, there was an enormous debate emerging in the Western world where some academics took it upon themselves to develop fairly controversial facial recognition systems and models. I really identified that as an opportunity to bring the discussion around the challenges and the opportunities into the public realm. So really what we did was set out and develop our own controversial facial recognition system that we conveniently called Biometric Mirror.

Now, I'm going to share my screen with you and, if I'm not mistaken, you might have all received a link to try out the Biometric Mirror analysis yourself. If you haven't, don't be concerned. I'll share the link at the end of my session as well.

Again, as Ed points out, first of all, this is research; second, any assumption and any analysis or conclusion that Biometric Mirror presents you with, please take that with a significant, indeed very large, grain of salt. We know very well what the data set is that feeds into our system. We also understand and acknowledge that our data set is inherently flawed in so many ways.

What does Biometric Mirror do? As this web page says, it's a tool that can be used to assess your personality by simply looking at your face. Many of us walk down city streets and we like to look at other people's faces and very often we make immediate assumptions about who these people are. So this in itself is not a new thing. We are just automating that process.

Once I click Agree, you'll get a second view into my home office and you'll see that my face is already identified. Once I'm happy with a certain posture, and I'll do a bit of a smile, I press the button at the bottom of the screen. Once you're in position, once you're happy with how you appear on the screen, you can take a photo of yourself. Definitely smile. That's what we all do.

The Biometric Mirror then takes a couple of seconds to upload your face to our facial recognition model and you then see some of these assumptions appear straight away. Age isn't too far off. I think it's about two years off. But you also see that Biometric Mirror very quickly turns nasty and starts to analyse traits that, first of all, I wouldn't necessarily want to be shared with computers, with systems.

Secondly, I'm also very conscious that a lot of these traits have nothing to do with my face. Aggressiveness – apparently I'm average aggressive. I don't even know what 'average aggressive' means. I'm very humble. I did not know that my face could tell that. Unfortunately, I'm only average attractive, but again, as an academic, I can live with that assumption.

What Biometric Mirror is really interesting for as well is that it's not just these assumptions that it makes, but it also ties them to a speculative scenario. This is actually an interesting one in the context of the discussion we'll be having today. But I'm indeed perceived to be quite aggressive and 39 years old. Not sure how that matters. But imagine that this information is automatically fed to police forces to monitor my movements or to monitor some of my movements. Of course, that is very far from a desired and wanted scenario.

But really, Biometric Mirror is a tool to have that conversation with members of the public and take a conversation that is otherwise very technical in nature or very easily influenced by legal conversations, policy conversations, take these conversations into the public realm and make sure that every single one of us, regardless of technical proficiency, can participate in that discussion.

After having run this study for the last three or four years, I can tell you, everybody has an opinion about this technology and everyone feels really included in the conversations that they can have with us about where this technology should take us as a society. As I said at the start, if you haven't had the chance to try it out yourself, head to biometricmirror.com/webinar. However, if you are accessing this panel from your mobile phone, I would suggest you do it after the panel has concluded. Ed, over to you.

Thank you, Niels. I'm inclined to take a beat to process some of that information, because it can be quite something, right? It gives us a bit of a window into a potential future and lots to chew over there. Before we do that chewing, it's a great pleasure to introduce the other three members of the panel.

First, Aaina Agarwal is a business and human rights lawyer based in the United States. She works as counsel at BNH.AI, the very model of a modern law firm, which joins lawyers with data scientists to advise clients on AI. She's the producer and host of Indivisible, a podcast that explores AI, crypto, and human rights. Formerly, she served as the Director of Policy at the Algorithmic Justice League with Joy Buolamwini and others, a leading not-for-profit organisation focused on the impact of facial recognition. Welcome, Aaina.

Secondly, we have Amanda Robinson, who is the Co-Founder and Director of Humanitech at Australian Red Cross, a think and do tank which seeks to ensure that technology serves humanity by putting people and society at the centre. Before the Red Cross, Amanda held senior strategic roles in innovation, digital product development and marketing. Amanda's work focuses on how social innovation and frontier tech can help solve complex social problems. She's also a member of the Industry Advisory Board for the College of Business and Law at RMIT, and Chair of the Trust Alliance Pilots and Programs Working Group. Welcome, Amanda.

And last but certainly not least, we have Duncan Anderson, who is the Executive Director, Strategic Priorities and Identity at the New South Wales Police Force. Duncan co-chairs the New South Wales Identity Security Council, which works across New South Wales to promote security, privacy and accessibility of identity products and services. Duncan previously held senior roles on national security and law enforcement with the Australian Federal Government, particularly PM&C, the Attorney-General's Department and Home Affairs. While he was at Home Affairs, Duncan was responsible for the National Identity Security Strategy, including managing the National Document Verification Service and implementing the new face-matching services. So, welcome also, Duncan.

So, I'm going to start with some questions. As we saw with the demo from Niels, discussion about facial recognition tends to move pretty quickly towards some dystopian visions of the world. We're going to get there, don't worry. But before we do, I'd like to invite each of you to give an example of how you see facial recognition technology being used well, either now or into the future. Aaina, as you're beaming in from the US, I might start with you to give us a quick example.

Yeah, so thanks for the question, Ed. I'm not quite sure that I have the positive answer that you might be looking for. I think that, as you mentioned, there is a pretty low risk with the one-to-one security access applications that you had mentioned, so being able to get into your phone and perhaps being able to access a building or an area that you frequent. I think the risk there, from a risk perspective, is pretty low – debatable as to whether the convenience and potential advantages of security merit the overall use of the technology there, but I don't see too many risks.

But when it comes to positive applications in the broader social context, I am of the opinion that I can't really think of any. I think that the risks of having the infrastructure of a surveillance state in place are really too great to justify the use or the potential use, even in very limited circumstances. For example, you might have some checks whereby police would be required to obtain a warrant, and there would be thresholds for a level of criminality. Only in instances of very serious crimes, terrorism, child abduction, sex trafficking, could facial recognition potentially be used. And that's all well and good, and then obviously systems would have to be vetted and made sure that they're fit for purpose, and there would also be checks of human review. However, that doesn't get around the fact that you still would require a very robust infrastructure of surveillance in place for that exception to even be there, and I think that the risks of that are really just too strong for me to play out how that could be justified. So I'll leave it there. I don't think that that was a positive answer that everyone was looking for there, but that's where I'm at with how I feel about that.

No, I think you've given us a good lead there. Maybe, Duncan, then I could pass the baton to you. What do you feel most positive about in terms of facial recognition?

Thank you, Ed, and good afternoon, everyone. I'd also like to start by acknowledging that I'm joining you from the lands of the Gadigal people of the Eora Nation. I think that some terminology is really important in this space, and in particular I see a distinction between face recognition and face classification. The demonstration that Niels provided, which is really interesting and concerning at the same time, is about technology which seems to make judgments about a person's gender or ethnicity or whatever it might be, which, as I understand it, works quite differently to technology around face recognition, which is based around seeking to determine whether two or more photos are of the same person.

So I think that's an important distinction to make. Then within face recognition, there's the different use cases, but I would also say that the verification use case where people can use facial recognition to help prove their identity when accessing online services, I can see a lot of benefits in that. There's some stats – I won't reel them off now – but identity crime continues to be a significant issue in Australia and elsewhere. It's not been helped by the pandemic, and I think the appropriate and responsible use of face verification there can help people protect their information and their identities from compromise and still deliver privacy benefits.

Thank you, Duncan. I think there's some really important distinctions there that you're drawing. I'll go to Amanda now. Amanda, is there a particular use case for facial recognition that you feel most positive about?

Yes, thank you, Ed, and hello to everyone. I just acknowledge that I'm joining from the Wurundjeri lands of the people of the Kulin Nation here in Melbourne. The International Committee of the Red Cross has been running a program called Trace the Face for a number of years now and has been using biometric systems in conjunction with refugee databases to better match refugees with loved ones in times of conflict. As you can imagine, it's not without its inherent risks and is managed very carefully to ensure that we can still deliver that service without biometric data if need be, but we are seeing these technologies provide significant benefit and efficiencies in terms of reuniting people who have been separated in times of conflict more quickly. I think the opportunity there in the humanitarian sector to be able to deliver services to people in need quickly and more efficiently is certainly there, but we do need to step into it very mindfully.

Thank you, Amanda. And Niels, you're working with this technology a lot. What do you feel is a positive use case?

Yes, I think I'm echoing largely what Aaina and Amanda have said. It is hard for me to find positive use cases, even though the work you are doing in that realm is really exciting. What is interesting is you are combining it with other technologies, so you're not just relying on the face to make assumptions. I think that's something we always have to keep in mind. This is a technology that is fairly young. There are false positives, but I think we should also acknowledge the false negatives that often appear. When we start talking about crime, for instance, a false positive might have a pretty significant impact on an individual, and so might false negatives that are not identified by a system or are not connected to a certain case that people are trying to solve. If anything, I think there are positive developments in the medical field where facial recognition – in a broader sense, I should say machine learning – are being used, but what is really interesting is that the medical field itself, they always have a human in the loop at some point. It will never be a computer or a machine or an algorithm making a decision. It'll make an assumption, but it'll always be a trained professional ultimately that sees that assumption and turns that into a procedure, a medical procedure, for instance, or a treatment plan. I think that is something that we can learn a lot from as well.

Thank you, Niels. There are a couple of things that you touched on there which I think are really important, particularly about false negatives and false positives. When we were consulting the Australian community about AI and specifically about facial recognition, that was really important to them. People wanted to be safe and when they talk about safety, they particularly talk about accuracy, but they also wanted it to be fair and accountable. Amanda, really a question for you. When we talk about safety, fairness and accountability, is that something that developers and users of this technology should do out of the goodness of their hearts, or are there laws in place right now that require that?

There are already a range of legislative provisions when it comes to collection and use of biometrics, and we see that through GDPR and current Australian privacy laws, which include things around consent and notice, consideration of purpose around collecting of information and storage of data. But I guess the question really comes down to how enforceable these provisions are. We're also increasingly seeing calls for a ban on biometric recognition technologies from people who believe that technical and legal safeguards actually could never fully eradicate or eliminate the threat that they pose. What we do feel is that this notion of goodness of the heart doesn't tend to work out so well in the real world, even with the best of intentions. So we have experienced and we've all experienced or read about well-intentioned technology that has gone wrong and unintended consequences that have caused harm, particularly to vulnerable groups.

So the humanitarian sector operates under a principle of 'do no harm', which really puts people at the centre, and we are constantly assessing whether risks are too great and whether those risks outweigh the benefits of anything that we do, and that includes technologies. The growing concerns about implications of new and emerging technologies on society are real and we need to address those. We're seeing, particularly through some of the work that you led, Ed, with the Human Rights Commission around development of ethical guidelines, and this cuts across private, public and for-purpose sectors. So these types of ethical frameworks, which are supported with proper guidance and training, can be a really valuable addition to the regulatory system.

Within the Red Cross movement globally, we have implemented different policies and processes around biometrics to help facilitate responsible use and to address data protection challenges in particular, and so we're starting to implement our own guidelines and processes to manage ourselves. I think introducing things like frameworks alongside laws and regulations is going to be really key in helping us be able to move forward. So I think best of intentions, absolutely, but we need to do more than that to ensure that we protect people, and particularly the most vulnerable.

Thank you, Amanda. I'm going to ask you a question in a moment, Duncan, but just a reminder for everyone listening in, I'm going to ask about another 10 or 15 minutes of questions. People have already started putting questions in the Q&A, which is fantastic, so feel free to continue to do that. I'll come to those in about 15 minutes. Duncan, clearly you're at the cutting edge in this area. The police have already been noted by me and a couple of others as an area of use of facial recognition that may well be pregnant with possibility, but it is also an area of concern. Do you think that there are extra obligations on a government body like the police to make sure that they're safe in how they use this sort of technology?

Well, the short answer is yes, Ed. This is a really important discussion to have, I think, because building community confidence in the use of technology by police is certainly part of that – an important part of the relationship the police have with the community. Amanda mentioned there are privacy laws which cover the use of personal information, including biometrics, but in New South Wales, the privacy law doesn't apply to police in this way, and that's because the Parliament has made the decision that given the nature of some police functions, where you're dealing with people who don't always cooperate, it's not always feasible to seek consent for the collection of information. But even though privacy law doesn't apply in the same way, there still is quite an established legal framework around police activities which applies to the use of facial recognition. It wasn't necessarily specifically designed for that but it has more general application.

So there's a Police Act which sets out the functions of the organisation which is around providing policing services to prevent and detect crime and prevent injury, et cetera. That Act also sets out some values that the Police Force has to abide by and they cover things like preserving rights and freedoms and exercising authority responsibly, and also the efficient and economical use of resources, which can come into play in this sense as well. There's other legislation. There's the Law Enforcement Powers and Responsibilities Act, which sets out some procedural matters, some of which are to do with, for example, how police collect photos when people are being charged with offences. There is a Law Enforcement Conduct Commission Act, which sets out various things including provisions around what is called agency maladministration, so police need to make sure that whatever is being done isn't unreasonable or unjust or improperly discriminatory, even though they might otherwise be lawful. There's anti-discrimination laws. There's the other laws – GIPA, which is access to government information – which covers the explainability of decisions. There's the State Records Act and then there's also policy around the Government's AI strategy and policy and the ethical principles around that, and we have been doing some work in that space, on top of internal policing policies and procedures as well.

So there is an established framework there. It applies to facial recognition and other things, and, as I said, I think it is an important discussion to have about whether that is adequate, whether it might need to be looked at in future. I think we can all agree that nobody wants to see the irresponsible or inappropriate use of facial recognition, but I suppose the question I have is: is it the nature of the technology per se that means it can't be used responsibly, or is it more about the way it is being used and whether there's sufficient human involvement and oversight?

That's a really crucial question. In a sense, we'll come back to that in a moment when we talk in a bit more detail about police use of facial recognition. I'll come back to you, Duncan. Before we do that, I want to zoom out a little bit and ask a bit more of a philosophical question of you, Aaina.

Last year you wrote, I'm quoting here, "The idea of privacy is meant to provide people with a space to determine their own identities. When this space is intruded upon by algorithms that use a profile to determine what we see, it limits our cognitive autonomy to construct how we think and feel." How might biometric information specifically, which is really things like our face – how might that sort of information be used to build profiles and what are the implications of this, especially in what you have seen from the United States?

So that is a great question. I think that we are fortunate that we live in democracies where we aren't seeing a lot of the potential ramifications of facial recognition in the hands of government and a surveillance state play out, but I think that it's important to recognise that we don't get from zero to the CCP overnight, and that's the reference there to China's government, for those who don't know. It kind of starts with civil liberties and how they're eroded through lesser applications and through just the feeling and the sense of living in a society where there are cameras and where there is surveillance and what that does. That comment is really meant to say when you live somewhere where you feel that you're being surveilled, where you feel that your movements are being tracked and then being able to reconstruct a sort of identity of who you are, it kind of limits how you're able to show up and express yourself because you don't know whether there's a certain pattern of movements or somebody that you're associating with or some conversations that you're having, how that might be used against you. Effectively, every time you go outside and your face is captured in a public place, the state can gather points of data that can then be configured and constructed in a way that can potentially be used against you and you don't really know on what basis that might be, based on how you are associating or identifying or even not. Maybe it's something to do with your neighbours or your family members. So I think that on a philosophical level, there is just this notion of surveillance and what it does to kind of erode the ability for people to show up and express themselves in their lives.

So that's kind of on one level that I just wanted to say more generally. And then to speak about it more specifically, in China, the CCP does have hundreds of millions of cameras that are overseeing society and these cameras can distinguish and sort you instantly. In their case, in the Xinjiang province, that is being used for them to quickly sort and effectively commit genocide. So there are discriminatory implications when it is being used in the hands of a government that is vested in repressing a minority or minorities. So that's obviously at one extreme end but I think on the way towards that, you create a society of fear and when people live under a sense of fear and being watched and reprimanded, as I was mentioning before, they really aren't quite sure how they might show up in their own lives. So they just start to shut down and then what does that mean for human rights? What does that mean for a human life?

Thank you, Aaina. I think that's a really fascinating tour of what it means to be surveilled. I think probably most of us have experienced the discomfort of being watched without any kind of consent, but the vast majority of us, I suspect, on this webinar have not experienced the kind of worst of that form of surveillance, and I guess what you are really exploring there is how that can be incredibly chilling on a person's ability just to go through life and do the normal things, go shopping, meet with friends, all of those things that many of us are lucky to take for granted.

So moving from that discussion about surveillance back to the police here in New South Wales, Duncan, I wonder if you can just answer a factual question for us to start with, which is how is the NSW Police using facial recognition and other similar biometric technology right now?

So the New South Wales Police, we're committed to the responsible use of facial recognition. In broad terms, it's used as an aid to human decisions about identifying people, rather than an automated decision-making tool. So we are not using live facial recognition. As Aaina pointed out, that's in use in China and even in countries such as the UK, where they're using CCTV and facial recognition to monitor public places. That is not what we're doing in New South Wales.

What we do do, though, is use facial recognition as part of investigating crimes after they have occurred – what is sometimes called retrospective facial recognition. That can also be used to help identify or locate missing persons as well. That's primarily about taking images, which can come from various sources, and then matching them against holdings that police would already have access to, such as photos collected when people are arrested and charged. In some cases, the images can also come from CCTV footage. For example, if police investigate an armed robbery and there was CCTV footage of the location, the police would collect that and then try to identify the perpetrators using those images.

We're also in the process of trialling some national face-matching services which provide, under certain conditions, the ability to match against the image holdings of other government agencies to help police when they're seeking to identify people as part of investigations. But, as I said, in all those cases, it's the machine assisting with the job of filtering and then providing assessments which are then reviewed by trained facial recognition examiners who then can make assessments about whether these two photos are in fact of the same person. Even then, that information is passed to investigators. So it is a combination of automated matching and human review. There is some research done in recent years which indicates that that type of approach is actually more accurate than using either of those two methods in isolation. But even then, the combination of automated matching and human review is only used to generate leads for further investigation.
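The workflow Duncan describes, automated filtering of image holdings to generate ranked candidates, which are then passed to trained human examiners, can be sketched in outline. This is a purely hypothetical illustration, not a description of any actual police system: the use of face embeddings, the similarity threshold and the function names are all assumptions. Typically, a face image is reduced to a numeric embedding, and candidates are ranked by similarity to the probe image, with everything above a threshold forwarded as a lead for human review.

```python
import numpy as np

def candidate_matches(probe, gallery, threshold=0.6, top_k=5):
    """Rank gallery face embeddings by cosine similarity to the probe
    embedding and return (index, score) pairs above the threshold,
    most similar first.

    Every returned candidate is only a lead: in the workflow described
    above, it would still be reviewed by a trained human examiner
    before any investigative action is taken.
    """
    # Normalise so the dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe

    # Keep only the top_k highest-scoring candidates above the threshold.
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in order if scores[i] >= threshold]
```

The key design point is the one made in the discussion: the system only filters and ranks. The threshold and `top_k` cap limit how many leads reach the examiner, and no match is treated as an identification by the machine alone.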

Thank you, Duncan. Essentially, Duncan was talking about a number of uses of facial recognition by the State police here in New South Wales. A crucial point that he was emphasising was that the technology is primarily used to generate leads, and at that point a human comes in and will assess the strength of that lead and may say, "No, this person is not who the machine thinks they are," or "Yes, it probably is," and then take whatever action the individual police officer sees fit.

How do you feel about it when I pose this question generally? Do any of the panellists want to comment on that? Amanda, from a Humanitech perspective, did you want to make any observations?

Yes, thanks, Ed. I guess a couple of things. One is, as Duncan said, context is really important, so the way and the specific use cases in which this technology is being used and in conjunction with human oversight is really important. The other is this risk-benefit and how we weigh up the risk of the misuse and overuse of these technologies versus the benefits that it can provide to society. But I guess more broadly, as we know, these systems come with inherent bias and the potential to discriminate is very real, and so we have to be really careful that it doesn't further entrench marginalisation or harm on vulnerable people when we do use these technologies in these contexts.

Maybe on that point I can bring you in, Aaina. We're fast running out of time, but maybe ju

If you are interested in hearing about future events, please contact events.socialjustice@uts.edu.au.

The risks of having the infrastructure of a surveillance state in place are simply too great to justify the use or the potential use [of FRT] even in very limited circumstances Aaina Agarwal

Discussions around the ethics of facial recognition are very often and too much led by ethicists in their ivory tower. What we really need is that close connection with members of the public. Dr Niels Wouters

The growing concerns about implications of new and emerging technologies on societies are real, and we need to address those. Amanda Robinson

Building community confidence in the use of technology by police is certainly... an important part of the relationship police have with community. Duncan Anderson

The idea that new technology is double-edged bringing opportunities and risks is exemplified perhaps most by the rise of facial recognition technology. Ed Santow

Speakers 

Aaina Agarwal is a business and human rights lawyer and media voice focused on the impact of disruptive technologies. She works as Counsel at BNH.AI, and is the Producer & Host of podcast Indivisible.

Dr Niels Wouters is a senior design researcher at Paper Giant. He is the co-creator of Biometric Mirror – an online tool that demonstrates facial recognition usage in psychometric analysis.

Amanda Robinson is Co-founder & Director of Humanitech at Australian Red Cross, a think + do tank, which seeks to ensure that technology serves humanity by putting people and society at the centre.

Duncan Anderson is the Executive Director, Strategic Priorities and Identity within the NSW Police Force. He co-chairs the NSW Identity Security Council which works to promote security, privacy and accessibility of identity products and services.

Edward Santow is Industry Professor – Responsible Technology at UTS. He was Australia’s Human Rights Commissioner from 2016–2021, where he led the most influential project worldwide on the human rights and social implications of AI.
