AI and Algorithms in Financial Services—Future Areas of Focus

Artificial intelligence (AI) and algorithmic models are used extensively in the financial services sector across a broad range of business areas. Two-thirds of respondents to a survey conducted jointly by the UK’s Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) in 2019 said that they were already using machine learning (a subcategory of AI) in some form. At that time, its principal use was in the detection of money laundering and fraud, along with some customer-facing applications (customer services and marketing). However, it was also used in ways that had a more direct impact on the products and services provided to customers, such as credit rating, trade pricing and insurance. One can fairly assume that the use of AI has proliferated considerably since then and that the systems have become far more sophisticated. Rightly, their regulatory implications are starting to come into greater focus.

In its “2022/23 Business Plan,” published in April this year, the FCA expressed its commitment to become a “data-led regulator.” In addition to exploring how it can use AI in discharging its own supervisory and enforcement duties, the FCA intends to explore how the use of AI is changing UK financial markets and to better understand the risks such systems pose to customers, particularly through the production of biased outcomes.1 More fundamentally, these statements reveal how nascent the regulators’ thinking is on these issues. That will have to change. In time, the regulators will need to clarify what standards and governance practices they expect firms to adopt when using AI.

The FCA will publish a discussion paper on AI later this year; in the meantime, three relevant reports have been published this year to date. In February, the FCA and PRA published their final report on the AI Public-Private Forum (AIPPF).2 In April, two papers were published by the Digital Regulation Cooperation Forum (DRCF), whose members are the FCA, the Information Commissioner’s Office, the Office of Communications and the Competition & Markets Authority. The first considered the benefits and harms of algorithms.3 The second considered algorithmic auditing.4 These three papers identify the core issues that the UK’s financial services industry will need to grapple with in connection with AI and algorithms, as well as the areas where further regulatory requirements and guidance are likely. Six of those issues are considered below.

1) Data standards

Both the AIPPF and the DRCF acknowledge that the quality5 of the underlying data is a “major determinant” of how any AI system performs and may be the principal source of its risk.6 The controls and governance systems that firms adopt to monitor and verify data quality will therefore be critical. However, as the AIPPF acknowledges, there is a lack of established standards and consensus on good practice for data quality, both generally and specifically in respect of AI.7 While the report encourages firms to develop their own internal standards and systems,8 a regulatory steer on data quality appears to be on the agenda.
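
By way of illustration only, internal controls of the kind the AIPPF describes could include automated checks against quality attributes such as completeness, timeliness and consistency (see footnote 5). The sketch below is hypothetical; the thresholds, column names and pandas-based checks are assumptions, not anything the reports prescribe.

```python
# Hypothetical sketch of automated data-quality checks on an AI input feed.
# The attributes tracked (completeness, timeliness, consistency) follow the
# AIPPF's examples; the thresholds are illustrative, not regulatory.
import pandas as pd

def quality_report(df: pd.DataFrame, timestamp_col: str, max_age_days: int = 30) -> dict:
    report = {}
    # Completeness: share of non-null values per column.
    report["completeness"] = (1 - df.isna().mean()).to_dict()
    # Timeliness: age of the most recent record in the feed.
    latest = pd.to_datetime(df[timestamp_col]).max()
    report["staleness_days"] = (pd.Timestamp.now() - latest).days
    # Consistency: exact duplicate rows suggest an ingestion problem.
    report["duplicate_rows"] = int(df.duplicated().sum())
    report["passes"] = (
        all(v >= 0.95 for v in report["completeness"].values())
        and report["staleness_days"] <= max_age_days
        and report["duplicate_rows"] == 0
    )
    return report

# Example: screen a vendor feed before it reaches a model.
feed = pd.DataFrame({
    "income": [52_000, None, 41_000],
    "updated_at": ["2022-01-05", "2022-01-06", "2022-01-06"],
})
print(quality_report(feed, "updated_at"))
```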

The risks presented by data quality are particularly acute in relation to alternative data, which financial institutions are increasingly leveraging through AI systems. Alternative data is typically obtained from third parties, which increases the risk that it is of insufficient quality or was obtained unlawfully. It also raises a question of accountability that could, in part, be addressed by the regulators clarifying their expectation that firms verify and conduct due diligence on the data. The AIPPF report advances a potential solution: a labeling system whereby data vendors supply information on the attributes of the data’s quality, which buyers would be entitled to rely on.9
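
If a labeling regime of that kind emerged, one could imagine vendors shipping a machine-readable quality label alongside each data set. The schema below is purely hypothetical and is not a proposed standard; the field names simply echo the quality attributes listed in footnote 5.

```python
# Purely hypothetical sketch of a vendor-supplied "data label" carrying the
# quality attributes a buyer might be entitled to rely on.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DataQualityLabel:
    vendor: str
    dataset: str
    collected_through: date       # how recent the underlying data is
    completeness: float           # share of non-null fields, 0.0 to 1.0
    representativeness_note: str  # how the sample was drawn
    lawful_basis: str             # basis on which the data was obtained

label = DataQualityLabel(
    vendor="ExampleData Ltd",     # fictitious vendor
    dataset="retail-footfall-2022",
    collected_through=date(2022, 6, 30),
    completeness=0.98,
    representativeness_note="UK-wide sample, weighted by region",
    lawful_basis="consented location data",
)
assert label.completeness >= 0.95, "buyer-side acceptance check fails"
```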

2) Customer fairness and the avoidance of bias

Regulated firms are required to pay due regard to the interests of their customers and treat them fairly.10 This obligation will be expanded when the overarching Consumer Duty, which requires firms to act to deliver good outcomes for retail customers, formally comes into effect.11 Given its focus on outcomes, it will be especially significant for algorithmic and AI systems. The fairness of an algorithm is, to a significant degree, a product of the data fed in: The risk of bias can be reduced by removing information about sensitive characteristics from the data (“fairness through unawareness”)12 and by ensuring that the algorithm has not been trained on a data set that reflects historical bias.
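
At its simplest, “fairness through unawareness” means the model is never shown the sensitive fields. A minimal sketch, with assumed column names and a generic scikit-learn classifier standing in for a real credit model:

```python
# Minimal sketch of "fairness through unawareness": protected attributes are
# dropped before training. Column names and data are hypothetical, and note
# that this alone does not remove bias carried by correlated (proxy) features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED = ["sex", "ethnicity", "age"]  # illustrative list of sensitive fields

applications = pd.DataFrame({
    "income":    [30_000, 55_000, 42_000, 61_000],
    "debt":      [5_000, 12_000, 2_000, 9_000],
    "sex":       ["F", "M", "F", "M"],
    "ethnicity": ["A", "B", "A", "B"],
    "age":       [29, 41, 35, 52],
    "defaulted": [1, 0, 0, 0],
})

X = applications.drop(columns=PROTECTED + ["defaulted"])  # the model is "unaware"
y = applications["defaulted"]
model = LogisticRegression().fit(X, y)
```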

However, as the DRCF paper suggests, expectations around how firms should approach bias caused by data that acts as a proxy for other characteristics or qualities are presently undeveloped. The use of proxy data, and how it can create bias, sits on a spectrum. Certain information can correlate with protected or sensitive characteristics and therefore unwittingly produce a biased outcome. Alternatively, information can be deliberately relied on to assess an otherwise unknown and unobserved quality; the outcome may become unfair where that proxy data is shown to be an invalid or unreliable indicator of the quality being assessed. More problematic is reliance on information that is directly material to the model’s purpose but that may also be a proxy for a separate characteristic.
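
One hypothetical way to surface that risk is a proxy screen: test how strongly each model input predicts a protected characteristic that is held out of the model itself. The correlation-based check below, with its threshold and column names, is an assumption for illustration only.

```python
# Hypothetical proxy screen: flag numeric features that correlate strongly
# with a protected attribute the model does not see. The 0.5 threshold is
# arbitrary; a real review would use richer statistical and domain analysis.
import pandas as pd

def flag_proxies(df: pd.DataFrame, protected_col: str, threshold: float = 0.5) -> dict:
    # Encode the protected attribute as 0/1 and correlate each feature with it.
    protected = pd.get_dummies(df[protected_col], drop_first=True).iloc[:, 0].astype(float)
    flags = {}
    for col in df.drop(columns=[protected_col]).select_dtypes("number").columns:
        corr = df[col].corr(protected)
        if pd.notna(corr) and abs(corr) >= threshold:
            flags[col] = round(corr, 2)
    return flags

data = pd.DataFrame({
    "sex": ["F", "M", "F", "M", "F", "M"],
    "income": [28_000, 52_000, 30_000, 49_000, 27_000, 55_000],
    "tenure_years": [3, 3, 4, 2, 2, 4],
})
print(flag_proxies(data, "sex"))  # only income is flagged as a potential proxy
```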

The DRCF report concludes that organizations may benefit from regulatory guidance to understand what counts as a legitimate use of proxy data, particularly where, as with the financial services industry, the regulators have asked firms to demonstrate that the “fair treatment of customers is at the heart of their business model.”13

3) Transparency and explainability

A central finding of the AIPPF report is that transparency and communication are key elements of AI governance.14 The paper recognizes that the appropriate level and nature of transparency depend on the audience. In respect of regulators and internal compliance departments, firms should be able to provide accurate information about the decisions made (i.e., explain them) and give assurances that the resulting recommendations and decisions are reliable.

However, in respect of consumers, who may be affected by the model’s output, the report notes that communicating how decisions are made may be as important as explaining them. Consumers should be told when a model is being used to automate decisions. They could be told what data was used and the most important features that led to the decision.15 The report acknowledges that “[b]est practice on transparency for consumers when AI is being used for decision-making is still evolving[,] but standardised approaches and a responsive attitude may lead to [a] better consumer outcome.”16 In due course, the regulators may have to articulate what standards of customer transparency are required.
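
As a purely hypothetical illustration of per-decision transparency, a firm using a linear scoring model could rank each input’s contribution to a decision and report the most influential features to the customer; the model, features and data below are all assumed.

```python
# Hypothetical sketch of a per-decision explanation: for a linear model, each
# feature's contribution to the score is coefficient * value, which can be
# ranked and reported alongside the decision. Data and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "debt_k"]           # amounts in thousands
X = np.array([[30, 5], [55, 12], [42, 2], [61, 9]])
y = np.array([1, 0, 0, 0])                       # 1 = defaulted, toy training set

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> list[tuple[str, float]]:
    # Per-feature contribution to the model's logit, largest magnitude first.
    contributions = model.coef_[0] * applicant
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

print(explain(np.array([38, 7.5])))  # most influential feature reported first
```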

4) Model risk and governance

Noting that model risk management practices are well established, the AIPPF report acknowledges that AI models exhibit several features that may render existing frameworks inappropriate. These differences include the speed at which AI models operate, their increasing opacity and complexity, and the potential for multiple models to interact within a network. These features present a different set of challenges; validating a model quarterly or annually, for example, is too infrequent for AI models that update continually.17 They also present a wider range of risks: data protection and cybersecurity as well as financial. The report accepts that there is, at present, a lack of regulatory clarity and recommends that the PRA and FCA follow the Federal Reserve’s lead by creating model risk regulations specific to AI.18
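
For instance (a hypothetical sketch, not a regulatory expectation), a continually updating model could be wrapped in automated drift checks between validation cycles, triggering out-of-cycle review when live inputs depart from the validated baseline:

```python
# Hypothetical drift monitor for a dynamic model: compare recent live inputs
# against the distribution the model was last validated on, and alert on
# divergence. The KS test and the 0.01 threshold are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
baseline_income = rng.normal(45_000, 8_000, size=5_000)  # validation-time data
live_income = rng.normal(49_000, 8_000, size=1_000)      # recent live inputs

stat, p_value = ks_2samp(baseline_income, live_income)
if p_value < 0.01:
    print(f"Input drift detected (KS statistic {stat:.3f}): "
          "trigger out-of-cycle model revalidation")
```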

Separately, firms’ use of models developed by third parties raises an accountability question: Who is responsible for their monitoring and auditing, the vendor or the user?19 The AIPPF report asks how firms and their senior managers could comply (and demonstrate compliance) with their responsibilities when relying on third-party service providers. It notes that the EU’s draft regulation on AI proposes to hold both the vendor and the client accountable for credit-scoring AI models,20 before suggesting a shared responsibility model, which could be achieved through contract provisions that set out obligations for responsible practice on both sides. Such obligations include regular monitoring of performance and outcomes, both internally and with the third-party provider.

5) Auditing algorithms

The second DRCF report focuses on the auditing of algorithmic systems. “Auditing” in this context covers a range of approaches to reviewing algorithmic processing systems, encompassing technical and nontechnical measures that range from assessments of an organization’s algorithmic governance policies to examination of the specific data and models being used. The report acknowledges that while “algorithmic auditing is not currently conducted extensively and the market for audits is relatively immature, an audit ecosystem will likely grow in the coming years.”
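
One of the technical measures the report’s discussion of standards contemplates (see footnote 21) is running fresh data through a model and comparing how the results vary by demographics of the data subject. A hypothetical sketch of such an outcome audit follows; the data and the four-fifths disparity benchmark are assumptions, not an audit standard.

```python
# Hypothetical outcome audit: take a model's decisions on fresh data and
# compare approval rates across demographic groups. The data and the
# four-fifths benchmark are illustrative assumptions.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],  # model outputs on held-out applicants
})

rates = audit.groupby("group")["approved"].mean()
disparity = rates.min() / rates.max()
print(rates.to_dict(), f"min/max approval ratio = {disparity:.2f}")
if disparity < 0.8:
    print("Disparity exceeds the assumed benchmark; escalate for review")
```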

The report proposes some ways in which the regulators could develop this aspect of the field. It suggests that they could:

  • provide guidance on when audits are appropriate (e.g., before a system goes live, at regular intervals or following concerns of harm being raised);
  • establish principles of best practice;
  • assist with the development of audit standards;21 and
  • develop a system of accreditation through which they can approve third-party auditors to certify the systems.

However, these proposals are unlikely to be implemented in the near term. The paper states that the DRCF intends to undertake further activity to understand the regulators’ roles in the field of algorithmic processing during this financial year.

6) Senior management responsibility

A central objective of the UK’s Senior Managers and Certification Regime (SMCR) is to achieve individual accountability for regulatory failings. As has been noted in a speech delivered by the PRA, assessing and ensuring accountability in the context of AI and algorithmic systems will prove challenging. Given the complexity of these systems, it is difficult to determine whether any failure is a function of poor design or poor implementation. Moreover, how does one define what it means for humans to have acted with due skill, care and diligence, or to have taken “reasonable steps,” in the context of these systems?

These issues are not a focal point of the recent papers. However, the AIPPF report identifies the importance of having clear lines of accountability and responsibility for the use of AI at the senior management and board levels. It also recommends a centralized body responsible for a firm’s AI governance policy. One imagines that the regulators will, in due course, need to give more careful thought to how the SMCR is expected to map onto the management and oversight of algorithmic and AI systems.

Conclusion

The use of AI and algorithms in the financial services sector will only expand, as will the complexity of the models and data sets used. Accordingly, the regulators will be increasingly focused on developing and articulating their expectations of firms in this area before their supervisory and enforcement departments turn their attention to how those standards are being met.

A condensed version of this article was published in Thomson Reuters Regulatory Intelligence.

1 https://www.fca.org.uk/publication/corporate/business-plan-2022-23.pdf.

2 Final report, Artificial Intelligence Public-Private Forum, published February 2022.

3 “The benefits and harms of algorithms: a shared perspective from the four digital regulators,” published April 28, 2022.

4 “Auditing Algorithms: the existing landscape, role of regulators and future outlook,” published April 28, 2022.

5 Data quality is assessed against several attributes, e.g., accuracy, completeness, consistency, representativeness and timeliness. See page 16, AIPPF. See also paragraph 3.1.1 of the Turing Report for further explanation of the components of data quality.

6 From the DRCF paper, “Stakeholders told us that regulators should pay close attention to the way organisations handle data, given that the quality of data is a major determinant in shaping how an algorithmic system performs”; and from the AIPPF paper, key finding, “While modelling choices and the management of model risk are clear priorities, many of the benefits and risks from AI can be traced back to the data rather than the AI models, algorithms[] and systems.”

7 See, e.g., para. 53, “Adapting existing data quality metrics and standards to an AI context may also be hampered by a lack of industry-wide consensus on data standards in general, including agreement on good practice”; and para. 70, “As firms develop and evolve their data strategies to use of AI systems, there is an increasing call for the development and use of data standards specific to an AI context.”

8 Para. 54.

9 It is worth noting that the Alternative Data Council has started to produce standards for the purchase and use of third-party alternative data by investment firms.

10 Principles for Businesses (PRIN), Principle 6.

11 There are also separate statutory requirements that apply under the UK’s General Data Protection Regulation (GDPR) and Equality Act. The GDPR requires all data processing to be fair and transparent (Article 5(1)(a)) and confers rights in relation to solely automated decision-making (Article 22). The Equality Act prohibits organizations from discriminating against people on the basis of protected characteristics, including where services are provided through algorithmic processing.

12 See para. 3.2.2 of the DRCF paper.

13 See the DRCF paper, “The benefits and harms of algorithms: a shared perspective from the four digital regulators.”

14 See para. 135.

15 The AIPPF also acknowledges that the transparency around how models work may be less feasible for certain areas (e.g., anti-money laundering monitoring) because it may undermine the effectiveness of outcomes. Para. 138.

16 The AIPPF report, para. 136.

17 Especially dynamic models, i.e., those that learn continuously from live data and generate outputs that change accordingly.

18 Federal Reserve, Supervisory Letter SR 11-7: Guidance on Model Risk Management, April 4, 2011 (federalreserve.gov).

19 This issue is likely to extend beyond a bilateral relationship of vendor-buyer. Indeed, the supply chain of the underlying technology of any algorithmic system may involve multiple parties, making effective governance more challenging to implement. See the Turing Report.

20 See the European Commission’s proposal for a Regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final), Annex III.

21 Regulators are interested in developing standards for algorithmic processing, i.e., that algorithms are built using representative training data sets; standards for auditing algorithms, i.e., how an auditor should inspect an algorithm for biased data (e.g., by running fresh data through a model and comparing how the results vary by demographics of the data subject); and standards for performance, against which algorithmic system outputs can be benchmarked, i.e., by setting out the criteria that would determine whether outputs were transparent, explainable or biased.
