Law, Ethics & Accountability in AI
I was honoured to be invited to advise the UK All Party Parliamentary Group for AI, and on Monday we met at the House of Lords to debate the ethical and legal framework for decision-making and moral issues in AI.
There were panellists from the academic world, philanthropists, blue-chip corporates, and tech start-ups, all of whom gave hotly debated presentations. Much of the discussion was genuinely useful and current; some of it was highly complex.
Here are the top 5 outcomes from that debate!
1. Artificial intelligence (and complex algorithms in general, fuelled by big data and deep-learning systems) has a huge influence on how we live now: from the news we see and how we finance our dream home to the jobs we apply for. Yet it operates without global standards or a governance framework.
2. It’s crucial that AI R&D is shaped by a broad range of voices—not just by engineers and corporates, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers.
3. Algorithms have parents, and those parents have values that they build into their algorithmic progeny. This Group wants to influence the outcome by ensuring ethical behaviours and governance that include the interests of the diverse communities that will be affected.
4. Advancing accountable, fair and transparent AI. What controls do we need to minimize AI’s potential harm to society and maximize its benefits?
5. Communicating complexity in plain English. How do we best communicate the nuances and diversity of a complex industry like AI?
What if we leave it to industry to define what is right for us?
AI must have ethics, be accountable and advance the public interest. Governments, acting collectively, must therefore take the lead in defining laws, definitions and direction. We should not leave it to industry to set the agenda, as that would inevitably steer outcomes towards self-interest and shareholder value.
What is the future risk if we do nothing?
Jonathan Zittrain, co-founder of the Berkman Klein Center and Professor of Law and Computer Science at Harvard University, advises: “…the thread running through these otherwise-disparate phenomena is a shift of reasoning and judgment away from people. Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”
In 90 minutes we only scratched the surface, and a huge amount of debate and work remains before we arrive even at a high-level governance structure, one that must be truly global.