Algorithms that block or promote online content must be explainable

Statement to the Christchurch Call second anniversary summit

I’m a member of the Christchurch Call Advisory Network, representing Civil Society. Jacinda Ardern and Emmanuel Macron recently convened a summit on the second anniversary of the establishment of the Call. Over 35 heads of state attended, along with senior executives from online service providers. I was one of four speakers from Civil Society invited to comment, for a maximum of two minutes, on the work of the Call. I made the following contribution on the subject of how algorithms block or promote content online.

Algorithms must be explainable and subject to human review and audit, through an ongoing multistakeholder process funded by those profiting from online activities.

Shalom aleichem, salaam aleikum, Kia ora koutou!

I’d like to start by remembering the victims of the Christchurch mosque attacks – may their memory be a blessing. I’m here today representing the Christchurch Call Advisory Network, and the Wellington Abrahamic Council of Jews, Christians, and Muslims. I am also a software developer and a professional director of companies applying AI in a number of industries.

We are grateful that online services can now prevent the distribution of most terrorist and violent extremist content. They are using the only real tool we have to do this at scale: algorithms.

But the Advisory Network is also concerned about algorithmic overreach. Algorithms regularly misclassify legitimate free speech as offensive. Algorithms are not perfect, and they never will be.

I’d like you to remember four things today: First and most important, algorithms must be explainable. If you can’t explain why your algorithms make the decisions they do, you shouldn’t be using them.

Second, we need human review of any algorithmic decision on request. Humans must be able to override algorithmic decisions or we become slaves to the machines.

Third, there must be regular, open, independent audits of algorithmic outcomes, and sharing of anonymised data with researchers.

Finally, we must all work together in a sustained multistakeholder approach to ensure that we maintain public safety whilst protecting human rights. This process must include members of the affected communities, and be funded by those profiting from online activities.

Explainable AI, human review, independent audits, and multistakeholder engagement are key to maintaining both our human rights and our security.

Let’s work together to make our online future safe, free, open, innovative, and rewarding for all.