Human Rights, technology & the Fourth Industrial Revolution
In the world of intelligent machines, we need smart, adaptive governance models to ensure that machines benefit humans.
My expertise lies not in technology, but rather in governance, both the theory and the cold, hard practice of politics.
In the current era of rapidly advancing, disruptive technologies, my concern naturally turns to the fundamental principles of governance that will manage and regulate the societal changes this era will precipitate.
On 24 July 2018, the Australian Human Rights Commission launched an international conference on human rights and technology. The conference forms part of the Commission's major project on human rights and technology, which explores how life-changing technological progress will affect our human rights.
As the AHRC Issues Paper points out, there are already government and parliamentary processes underway in this space, including the Australian government’s commitment of almost $30 million to develop a technology roadmap, a standards framework and a national AI Ethics Framework to identify global opportunities and guide future investments.
But I feel that further exploration is necessary for us as a society, facing a juncture in history, to reflect on and agree upon the fundamental principles that should be applied across these processes.
Should government be involved at all?
First, this old chestnut.
We’re currently undergoing what is being referred to as the Fourth Industrial Revolution – the three phases that have preceded our current wave of technological advancement were: first, mechanised production with steam and water; second, mass production with electricity; and third, automated production with electronics and information technology.
We are now experiencing a fusion of technologies that blurs the lines between the physical, digital and biological spheres. This is perhaps best understood by considering the technologies fuelling the revolution – think AI, blockchain, nanotechnology, the Internet of Things and autonomous vehicles.
What marks each of the industrial revolutions is that society experiences a struggle over how the benefits of new productive technologies are distributed. It was from this power struggle that trade unions were born and, after WWII, that the welfare state arose.
Each mechanism was relevant to the historical context and technological forces of the time. We need to ensure that a similar creativity and collective responsibility are adopted in seeking solutions to the challenges we now face.
There are already structures in place for government to decide how companies are allowed to use data. Governments decide how to invest public funds in AI development, and governments decide how they want to harness AI for policing, healthcare, education and social security – systems which touch all of our lives. Yet in the current political climate, distrust in politicians and the state is high – witness the public reaction to My Health Record, which revealed how low public trust in data security has fallen.
Trust in government springs from transparency and accountability, but both of these come up against difficulties in the world of big data and AI. The challenge inherent in AI systems is that they learn and change; you can only “lift the lid” to a certain point. So how do you make transparent a process that will always, to some degree, be invisible?
The Australian Government’s response to this question has been expressed by Michael Pezzullo, Secretary of the Department of Home Affairs, who has said that government must apply the golden rule that:
No robot or artificial intelligence system should ever take away someone’s right, privilege or entitlement in a way that can’t ultimately be linked back to an accountable human decision-maker.
At least in government, clearly there is an assumption that human beings need to be involved.
I would add that the talents present in the human beings in question need to be broad.
The humans behind the robots
As Marina Gorbis, Executive Director of the Institute for the Future, highlighted in a recent speech, technologists are no longer just developing apps; they are developing political and economic systems.
We need multidisciplinary, collaborative approaches – with historians, policy makers, economists, lawyers, financiers and philosophers, as well as technologists, involved in the ideation and development of our technology.
We need involvement from business people, civil society representatives, people with disabilities, people whose jobs are affected by automation, young people, old people, people from various countries, backgrounds and cultures, women and men – and most of all we need the general public.
At the UTS Faculty of Transdisciplinary Innovation, Assoc. Professor Theresa Anderson proposes an analogy: just as it takes a village to raise a child, we have reached a level of technological complexity where it effectively takes an infrastructure to raise an AI. This infrastructure is technical as well as human, physical as well as social, visible as well as invisible.
The organisational structures needed to reset power inequalities – as trade unions and the welfare state did in previous eras of social disruption – must be adapted to fit the era we are in. In the area of AI and big data, government must drive the establishment of participatory, co-designed frameworks for ongoing conversations. This is fundamental to citizen trust.
Likewise, citizens need to be empowered to stand up for their own interests.
But we do not need to completely reinvent the wheel. We do not need to innovate simply for innovation’s sake. There can be value in using existing norms and frameworks.
This is the beauty of a human rights approach: it is established, global and largely uncontested.
Fundamental principles of a Human Rights approach
The UN International Covenants provide us with an international framing to push for new norms – one that is still highly relevant. A human rights approach applies five basic principles to assessing the technology of the Fourth Industrial Revolution:
- Participation
- Accountability
- Non-discrimination and equality
- Empowerment
- Legality
While the circumstances which we face change, these principles remain relevant and sound.
Another reason we should not try to reinvent the wheel is the cold, hard reality of political capacity, where having an agreed-upon set of ‘boundary documents’ is critical – especially given the additional practical constraints of limited funding, a shrinking government sector and an increasingly powerful tech sector.
The human rights framework was created as the world emerged from one of the most terrible periods of human conflict and genocide, and its fundamentals remain true to this day.
Questions for the future
In the early days of this Fourth Industrial Revolution there are going to be more questions posed than answers provided. Part of the human challenge is finding the right questions to ask.
Some that I suggest we might begin with include:
- How do we sufficiently rebuild trust in the state so that it in turn can be trusted to play a part in governance in the era of AI?
- What mechanisms can we use to achieve this in a world where technology is changing so fast that just keeping up with its capacities is hard enough?
- How do we create a framework with enough flexibility to ensure that governance doesn’t stifle creativity or innovation while at the same time maintaining the confidence and trust of civil society?
- How do we ensure that the voices and interests of all in our democracy are heard?
- How do we ensure that there is sufficient digital literacy amongst the population to empower people to stand up for their own interests?
We are only beginning the process of questioning required to tackle the big issues we face at the onset of the 21st century.
The impacts of, and opportunities presented by, new technologies are being explored in a structured three-year AHRC project. UTS is partnering on the project, and you will no doubt hear more from us on this topic in the near future.
At the forefront of our minds is ensuring that the process continues to be shaped by human values, and that our consultation is inclusive.
The AHRC is currently accepting submissions until 2 October 2018 on views and questions around human rights and technology. If you would like to participate in this democratic process and make your views heard, please refer to page 46 of their Conference Issues Paper to make a submission.