In the context of this presentation, emerging risks are risks where the underlying exposure is evolving and may not be adequately reflected in the historical experience on which risk assessment and pricing are currently based. The focus on technological risks means that three notable areas of emerging risk are not covered here: pandemic, cyber and climate change.
Autonomous vehicles
Traditionally, in the event of a motor accident, the operator of the vehicle responsible for the accident is liable, and motor insurance responds accordingly. If there is a failure of the vehicle itself, then product liability insurance responds. This split has worked for many years, but the rise of autonomous vehicles is changing how it will work in future.
Autonomous vehicles sit on a spectrum, known as the SAE (Society of Automotive Engineers) automation levels, ranging from SAE 0 (no automation) to SAE 5 (full automation). Currently we are somewhere between SAE 2 and SAE 3, where the driver needs to be at the wheel and/or be prepared to intervene if requested. Sensors and processing are the key to moving past SAE 2, as these allow the vehicle to properly monitor its environment and react. There have been issues with the sensor technologies, which have caused several fatal crashes since their introduction. Another problem is the expectation gap regarding self-driving capability, with some drivers mistakenly believing that the vehicle is operating at a higher level of automation than it really is.
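To make the taxonomy concrete, the minimal sketch below encodes the SAE levels as a simple data structure. The short level names are the commonly used ones; the "driver must be ready" flags are simplifying assumptions for illustration, not a restatement of the SAE standard.

```python
# Illustrative only: a rough encoding of the SAE automation levels described above.
# The "driver_must_be_ready" flags are simplifying assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    level: int
    name: str
    driver_must_be_ready: bool  # must a human be at the wheel / ready to intervene?

SAE_LEVELS = [
    SaeLevel(0, "No automation", True),
    SaeLevel(1, "Driver assistance", True),
    SaeLevel(2, "Partial automation", True),
    SaeLevel(3, "Conditional automation", True),  # intervene when requested
    SaeLevel(4, "High automation", False),
    SaeLevel(5, "Full automation", False),
]

def driver_responsibility(level: int) -> bool:
    """Return True if the human driver is still expected to be ready to intervene."""
    return SAE_LEVELS[level].driver_must_be_ready

# Today's vehicles sit roughly between levels 2 and 3, so the human remains responsible.
print(driver_responsibility(2), driver_responsibility(5))  # True False
```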
An increased number of self-driving cars should lead to a move from motor insurance towards product liability, because as more driving is delegated to the vehicle the liability shifts to the manufacturer. From an insurance perspective, this raises the questions of how much product liability cover will have to be bought and what impact the shift will have on insurers with large motor portfolios. There is also a change in how product liability works: historically it has dealt with damage or injury caused by the failure or incorrect operation of a product, whereas self-driving cars will potentially lead to situations where damage or injury is caused by a product operating exactly as designed.
In summary, it is a very complicated situation with many technical issues to overcome before failure rates reach a level that is societally acceptable. Product liability is going to change dramatically over time, and motor insurance will diminish as the exposure moves to the product side.
Additive manufacturing/3D printing
3D printing allows someone to build three-dimensional objects out of metals and plastics based on a downloaded plan. This will lead to home/distributed manufacturing, which raises the question of who the manufacturer is from a product liability perspective. Is it the person who makes the item, the manufacturer of the 3D printer, or the creator of the file used to build the object? What does this mean for homeowners’ insurance and for product liability?
There are also crime implications of additive manufacturing, as it is possible to use a 3D printer to build a gun, or a replica of a face accurate enough to unlock a phone or defeat other biometric identification. There are issues around manufacturing patented or copyrighted products, and items that require testing or certification, such as bike helmets and medical products. There are also health issues arising from the use of additively manufactured products.
Artificial intelligence
Artificial intelligence (AI) is the ability of a machine to mimic human-like intelligence to perform functions a human might otherwise perform. Three major features of AI impact insurance: learning ability, where behaviour may not be entirely preconceived by the programmer; the use of robotics, which can cause injury or damage without human action; and connectivity, where all these things can be linked together through the Internet of Things (IoT). Most product liability laws are designed for products that do not change after they are sold and whose capability remains the same. This is no longer the case, as we now have hybrid products that are a combination of hardware, software and services.
Regulatory intervention is in its very early stages. The EU published a communication, “Fostering a European Approach to Artificial Intelligence”, in April 2021; the US is also reviewing the area but is at an earlier stage of development. The desire is to harmonise the legal framework around AI, splitting it into four categories of risk, ranging from unacceptable risks, which will be banned, to limited risks, where no legal changes are proposed. Legal proposals for high-risk AI, which includes critical infrastructure, transportation, and hiring and recruitment, will have implications for product liability. These proposals include bringing software into the scope of product regulation, moving to a strict liability approach where the injurer pays, and reversing the burden of proof on causation, so that it will be assumed unless disproved.
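As a rough aid to reading the tiered approach, the sketch below encodes it as a simple lookup. The presentation names only the unacceptable, high-risk and limited categories, so the treatments shown, and the omission of the fourth tier's detail, are simplifying assumptions rather than a summary of the legal text.

```python
# A minimal illustrative sketch of the proposed tiered approach described above.
# Treatments are paraphrased from the presentation; this is not the EU proposal itself.
from enum import Enum, auto

class AiRiskCategory(Enum):
    UNACCEPTABLE = auto()  # to be banned outright
    HIGH = auto()          # e.g. critical infrastructure, transport, recruitment
    LIMITED = auto()       # no legal changes proposed, per the presentation
    # The proposal contains a fourth category not detailed in the presentation.

PROPOSED_TREATMENT = {
    AiRiskCategory.UNACCEPTABLE: "prohibited",
    AiRiskCategory.HIGH: (
        "new legal requirements: software within product regulation, "
        "strict liability, reversed burden of proof on causation"
    ),
    AiRiskCategory.LIMITED: "no legal changes proposed",
}

def proposed_treatment(category: AiRiskCategory) -> str:
    """Look up the (simplified) regulatory treatment for a risk category."""
    return PROPOSED_TREATMENT[category]

print(proposed_treatment(AiRiskCategory.HIGH))
```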