As regulatory thinking evolves, firms must ensure that any current or planned use of AI complies with regulatory expectations.

By Fiona M. Maclean, Becky Critchley, Gabriel Lakeman, Gary Whitehead, and Charlotte Collins

As financial services firms digest FS2/23, the joint Feedback Statement on Artificial Intelligence and Machine Learning issued by the FCA, Bank of England, and PRA (the regulators), and as the UK government hosts the AI Safety Summit, we take stock of the government's and the regulators' thinking on AI to date, discuss what compliance considerations firms should be taking into account now, and look at what is coming next.

The FCA recently highlighted that we are reaching a tipping point at which the UK government and sectoral regulators need to decide how to regulate and oversee the use of AI. Financial services firms will need to track developments closely to understand the impact these decisions may have. However, the regulators have already set out how numerous areas of existing regulation are relevant to firms' use of AI, so firms also need to ensure that any current use of AI complies with the existing regulatory framework.

A new publication from the UK’s financial regulator signals to firms that they should take steps to manage risks in the use of AI.

By Stuart Davis, Fiona M. Maclean, Gabriel Lakeman, and Imaan Nazir

The UK’s Financial Conduct Authority (FCA) has published its latest board minutes highlighting its increasing focus on artificial intelligence (AI), in which it “raised the question of how one could ‘foresee harm’ (under the new Consumer Duty), and also give customers appropriate disclosure, in the context of the operation of AI”. This publication indicates that AI continues to be a key area of attention within the FCA. It also demonstrates that the FCA believes its existing powers and rules already impose substantive requirements on regulated firms considering deploying AI in their services.

Latham lawyers explore the latest insurtech trends and regulatory developments impacting the sector in Europe and Asia.

Disruptive technology is revolutionizing insurance, enabling insurers to achieve growth by leveraging big data and creating innovative solutions to enhance customers’ digital experience. We are pleased to launch Insurtech Insights, a series of webcasts to discuss the most recent trends in the insurtech space and how to navigate regulatory developments.

If adopted effectively, the PCPD's Ethical Accountability Framework should help organizations to demonstrate accountability and enhance trust with individuals.

By Kieran Donovan

In October 2018, Hong Kong's Privacy Commissioner for Personal Data (PCPD) presented the findings of an inquiry into the ethics of data processing, commissioned by the PCPD with the help of the Information Accountability Foundation (IAF). The result of the inquiry, published as the Ethical Accountability Framework, provides an "instruction manual" for processing data in an ethical and accountable manner.

Following on the heels of the PCPD’s report, the Hong Kong Monetary Authority (HKMA) issued a Circular titled Use of Personal Data in Fintech Development, encouraging authorized institutions (AIs) to adopt the PCPD’s Ethical Accountability Framework.

By Andrew Moyle and Stuart Davis

The UK government’s 2017 Autumn Budget included some measures of particular interest for fintech firms, demonstrating the government’s continued commitment to making the UK a world-leading fintech hub.

The government has provided only scant detail on these measures so far, but firms will no doubt be watching closely to see how they develop and what benefits they may bring for the fintech sector.