Last week the UK’s Science and Technology Committee published a report into the use of Robotics and Artificial Intelligence in the UK. The report raises some interesting questions about the implications of using Artificial Intelligence (AI), particularly within driverless vehicles and other autonomous technologies.
What is AI?
It’s difficult to define. The report describes AI as:
“a set of statistical tools and algorithms that combine to form, in part, intelligent software that specializes in a single area or task. This type of software is an evolving assemblage of technologies that enable computers to simulate elements of human behaviour such as learning, reasoning and classification.”
Rather wordy, but in brief: the key difference between AI technologies and more traditional tech is AI’s capability to evolve and learn through information flows.
Day-to-day technologies already use AI in a narrow fashion – spam filters, for instance, or the voice-recognition software in digital assistants such as Siri and Cortana.
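To make the "learning through information flows" idea concrete, here is a minimal sketch (a toy, not any real product's code) of a naive Bayes-style spam filter: its behaviour comes entirely from the labelled examples it is shown, not from hand-written rules.

```python
from collections import Counter

class SpamFilter:
    """Toy word-frequency spam filter that learns from examples."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Each labelled message it sees shifts its future behaviour.
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text):
        # Compare smoothed per-class likelihoods of the message's words.
        scores = {}
        total_messages = sum(self.message_counts.values()) or 1
        for label in ("spam", "ham"):
            total_words = sum(self.word_counts[label].values())
            score = self.message_counts[label] / total_messages
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero the score.
                score *= (self.word_counts[label][word] + 1) / (total_words + 2)
            scores[label] = score
        return scores["spam"] > scores["ham"]

f = SpamFilter()
f.train("win free prize money now", "spam")
f.train("free cash win win", "spam")
f.train("meeting agenda for tomorrow", "ham")
f.train("lunch tomorrow with the team", "ham")
print(f.is_spam("win free money"))  # spam-like vocabulary → True
```

Feed it more examples and its judgements change accordingly – which is precisely why, as the report notes, such systems are harder to specify and audit than conventional software.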
On a grander scale, autonomous cars provide a good example of the potential for AI systems. Autonomous systems can learn, respond and adapt to situations that are not pre-programmed or anticipated in the design, such as unexpected weather or traffic conditions and unknown environments. For example, Tesla Model S cars are connected to the cloud and constantly contributing data to a shared database as part of a "fleet learning network". The idea, according to Tesla's Elon Musk, is that
“when one car learns something, all learn”.
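The fleet-learning idea can be sketched in a few lines (names and structure here are hypothetical, purely for illustration – this is not Tesla's actual system): each vehicle uploads what it observes to a shared store, and every other vehicle reads from that same store.

```python
class Fleet:
    """Toy model of fleet learning: one shared store, many vehicles."""

    def __init__(self):
        self.shared_map = {}  # location -> learned annotation

    def report(self, car_id, location, observation):
        # One car "learns something" and uploads it to the fleet.
        self.shared_map[location] = observation

    def query(self, car_id, location):
        # Any other car benefits from what the first car learned.
        return self.shared_map.get(location, "no data")

fleet = Fleet()
fleet.report("car_A", "junction_42", "temporary roadworks, lane closed")
# car_B has never driven through junction_42, yet already "knows":
print(fleet.query("car_B", "junction_42"))
```

The design choice worth noting is that the knowledge lives in the shared store, not in any single vehicle – which is what makes "when one car learns something, all learn" possible, and what raises the data-collection and privacy questions discussed below.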
Key findings of the Science & Technology Committee Report
The report has three key areas of inquiry, namely:
- Economic and social implications of AI and Robotics – how might growth in AI and robotics affect human employment and well-being? While noting the potential for gains in productivity and efficiency, the report suggests these may be coupled with losses in well-established occupations (for example, professional driving services). Equally, we may see algorithms developed that can substitute for human cognitive work (for example, that done by – er – lawyers). Digital education, re-skilling and up-skilling are identified as priority areas.
- Ethical and legal issues – Autonomous car technology has helped to highlight the ethical and legal issues with AI (see our microsite here), because driverless technology is close to entering the mainstream, and because of the life-and-death road safety questions it raises. Public trust and safety come into sharp focus where (in true sci-fi style) machine behaviour might change or adapt over time, so there is a need to ensure AI stays predictable and operates as intended. In addition, the use of AI involves weighty privacy challenges, with user data and environmental data being collected, stored and transferred constantly.
- Research, funding and innovation – the report highlights the current lack of government strategy to develop skills and secure critical investment and funding in Robotics and Autonomous Systems (RAS). It recommends establishing a Leadership Council, with membership drawn from academia, industry and government, to pursue a national RAS strategy.
A legal framework for AI
The legal implications are manifold, with one of the first questions being
“How do we ensure accountability if an AI operated machine goes wrong?”
The UK government plans to use its proposed Modern Transport Bill to put Britain “at the forefront of autonomous driverless vehicles ownership and use”. This includes a plan to address liability by extending compulsory motor insurance “to cover product liability to give motorists cover when they have handed full control over to the vehicle”, as well as relying on the existing Consumer Protection Act and common law principles of negligence to determine responsibility.
But other branches of AI do not have a handy pre-existing compulsory insurance framework to hitch onto. Should a new compulsory insurance scheme be introduced? Something to think about – or should we leave that to an AI?