A UK Parliamentary report on the fast-moving artificial intelligence sector, "AI in the UK: ready, willing and able?", sees a role for the UK in steering the technology's development in an ethical direction through a new AI Code. But it also proposes a range of practical measures and encourages policy-makers to take a hands-on, proactive approach rather than leaving the industry to develop in its own way. Committee chair Lord Clement-Jones commented:
“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences”.
Practical steps to boost AI
AI has been developing for years, but recent advances in areas such as deep learning, growth in computer processing power, and the expansion of big data have led to rapid progress. The report downplays fears about superintelligent machines taking over society and instead focuses on more immediate, practical concerns. Among the report’s proposals, the following stand out:
- Large companies with control over vast quantities of data should not be allowed to dominate. Competition authorities should be tasked with a proactive review of the potential monopolisation of data. Innovative companies of all sizes, and research organisations, should be able to access data on fair and reasonable terms. This has similarities with the approach taken by competition regulators on access to standard-essential patents in areas such as mobile communications.
- Greater visibility and control for individuals and better protection for privacy. Individuals should know when and how AI is being used to make decisions about them. Datasets should be audited to reduce the risk of prejudice against groups in society.
- Support measures such as a growth fund to help SMEs in the AI field scale up, co-funding of PhD positions, and standardised mechanisms for spin-outs from universities.
- Increased access to visas for international recruitment.
- Targeted procurement by public sector organisations to promote the use of AI.
- Clarity around legal liability if AI systems malfunction or cause harm through poor decision-making, ideally through a Law Commission review.
Engaging with the ethical issues
The Committee heard from experts across industry and academia. The active approach of businesses like DeepMind, Microsoft and Prowler.io in engaging with the ethical issues is recognised in the report.
A core recommendation is for a cross-sector AI Code embodying five principles, which could form the basis of an international consensus:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
The importance of ethics in this context should not be underestimated; it has been emphasised by various experts during discussions about AI in which we have been involved (for example, at the pro-manchester annual business conference).