Encounters of the Tech Kind: Digital Ethics Summit
With the emergence of complex technologies, particularly artificial intelligence, the debate around digital ethics has become even more complicated and even more crucial.
As Tim Gardam, Chief Executive of the Nuffield Foundation, puts it: “What is the future of human agency in a world of manufactured intelligence?” We went to TechUK’s Digital Ethics Summit to find out.
The Summit drew together some of the leading minds in politics, philosophy, science, technology and academia, from across the private, public and third sectors.
One thing was very clear. People don’t think the technology sector is trustworthy or acting responsibly with their data. People don’t know how their data is being used, but they increasingly know that there are algorithms at work behind the scenes. Big Data is transforming human interactions and producing new types of manipulation.
But it’s not just the public and commentators who are concerned – as Martha Lane-Fox pointed out in her opening address, the tech titans themselves are starting to question the technologies they have unleashed on the world. Some intelligent technologies are not necessarily doing what their creators originally intended – an image straight out of a dystopian sci-fi movie.
The design of AI systems has made explicit certain judgements in human decision-making that have, to date, always been implicit. As these systems have started to show bias and make discriminatory decisions, we need to have honest discussions not about what technology can do, but about what it should do. Such ethical flaws in AI have also shone a spotlight on the lack of diversity of perspective in many development teams; it is critical that when we’re programming these systems with our intelligence, we ensure that we bring all of society into the mix.
How can we embed digital ethics in business, investment, academia and government communities?
Thankfully, we are not starting from scratch. Governments and organisations around the world have begun producing ethics reports and tackling some of these challenges. One example, much discussed at the Summit, is the new Centre for Data Ethics and Innovation. The Centre is due to open following the UK Government’s pledge of £75m in investment, after an independent review on AI assessed some of the ethical and social impacts of the technology. Elizabeth Denham, from the ICO, pointed to three important aims of the new Centre: encouraging public dialogue and consultation, futurology, and linking different regulators.
The Summit also heard the Nuffield Foundation’s proposals for a third-party convention to bridge the gap between academics and policy makers and to foster ethical decision making. The convention will be made up of 12–15 members, drawn from an array of industries and backgrounds, who will work to anticipate, share and evaluate emerging ethical issues.
For innovation, we need reliable and secure access to data and the ability to share data in a fair and transparent way. However, today machines are making opaque decisions in their use of data and are, in some instances, used for political causes that impact trust in our political system. This is exactly why we have started to speak about “Ethics by Design”, a take on the Privacy by Design principle of the GDPR. We need ethics to be considered at every turn, embedded in frameworks adopted and endorsed at the highest levels of organisations. We also need to be realistic in our expectations of technology and examine whether AI is necessary in each situation. Will it deliver “data for good” and social benefits, or will it further harm the relationship between people, the tech sector and governments? This should be a careful consideration for any developer.
TechUK’s Digital Ethics Summit was a brave and inspiring attempt to take on some of the most complex philosophical, political and socio-economic challenges of modern-day society. Issues such as homelessness, climate change, political turmoil and challenges in education and healthcare are extremely difficult to tackle. AI can be part of the solution, but regulation needs to keep pace. Moreover, reigniting philosophical debates around ethics and human integrity will be essential to its success.
If 2017 has been the year for talking, 2018 is the year for doing.