Artificial Intelligence (AI) is a rapidly growing market, valued globally at USD 450 billion in 2022 and expected to grow by 20% annually. By 2030, the AI market is projected to be worth USD 15 trillion, three times the size of the mobile market. However, adoption of AI technology by businesses varies by location, with 54% of Canadian companies using AI compared to 15% in the UK. The discrepancy in adoption rates may partly reflect who is surveyed: Chief Information Officers tend to give better-informed answers than non-technical respondents, which is why it is crucial for non-technical leaders to also be aware of AI’s potential.
AI is attracting particular regulatory attention in the area of recruitment and discrimination. The EU’s AI Act, set to become the first horizontal legislation covering high-risk AI applications, identifies recruitment as a high-risk area. Businesses are allowed to use AI in recruitment, but must have risk management frameworks in place to guard against biased data and to track the system’s performance. The legislation is expected to be finalised by the end of 2023, with a two-year implementation period. Canada’s Artificial Intelligence and Data Act follows a similar pattern, classifying AI applications as high-impact based on their anticipated effects. Existing legislation, such as GDPR, also restricts the use of personal data and requires transparency in automated decision-making. The AI Act leaves some room for systems that only make recommendations, provided a human remains responsible for the final decision.
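To make “tracking the system’s performance” more concrete, the sketch below shows one simple check an organisation might run on a recruitment AI: comparing selection rates across demographic groups using the four-fifths (adverse impact) rule from the US EEOC guidelines. This is a minimal, hypothetical illustration, not part of the presentation; the group names, figures and 0.8 threshold are assumptions for demonstration, and a real risk management framework would involve far more than a single metric.

```python
# Illustrative only: a minimal adverse-impact check for a recruitment
# AI system, using the US EEOC "four-fifths" rule as a proxy metric.
# Group names, figures and the 0.8 threshold are assumptions; a real
# risk management framework would be much broader than this.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total_applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Return each group's impact ratio and whether it needs review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag any group whose selection rate is below 80% of the best rate.
    return {group: (rate / best, rate / best < threshold)
            for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical monthly snapshot of applicants screened by the AI.
    snapshot = {
        "group_a": (48, 120),   # 40% selected
        "group_b": (30, 110),   # roughly 27% selected
    }
    for group, (ratio, flagged) in adverse_impact_ratios(snapshot).items():
        status = "REVIEW" if flagged else "ok"
        print(f"{group}: impact ratio {ratio:.2f} [{status}]")
```

Even a basic check like this illustrates the kind of ongoing monitoring that the AI Act’s risk management requirements point towards.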
There are potential legal risks associated with the use of AI systems, particularly those classified as high risk. Liability is a growing concern: proposed rules would introduce a presumption of harm in cases involving AI systems, meaning that the user of the system would need to prove that their AI did not cause the harm suffered by the affected party. There is also concern about worker autonomy and the potential for AI to be used to micromanage workers and infringe their rights.
Organisations also need a sound management process in place: they should ask the right questions about how a system operates and ensure that the vendor has completed the necessary assessments. Transparency with employees, both about how the system works and about how they can engage with it, is equally important. Organisations must also take individual differences, such as disabilities, into account when using AI to track how employees engage with their computers.
While most legislation regarding AI liability is still at an early stage, it is important for organisations to stay informed and ask the right questions to ensure they are using these services responsibly.
This presentation was delivered during the monthly ICMIF Member-to-Member Discussions for HR Directors. Please contact Mike Ashurst, Vice-President of Reinsurance and Professional Development, to find out more about the HR discussions.