Generative artificial intelligence (GenAI) could be the technology that forever changes the financial sector. Financial institutions are eagerly exploring the applicability of GenAI to their work – from AI-powered investment advice to automated call centres.
As the use cases for GenAI in the financial sector grow, how can regulators manage this lightning-fast pace of change? How can they ensure the Asia-Pacific region maintains its competitive advantage with GenAI innovation in the financial sector?
GenAI regulation in Asia-Pacific
Across the region, regulators have generally adopted principles-based approaches to GenAI that are less prescriptive than some other jurisdictions. Principles-based approaches focus on guidelines designed to achieve outcomes, rather than prescribing how firms should pursue those outcomes.
This flexibility should enable firms to experiment with GenAI technology and control structures, encourage innovation, and foster a competitive market relative to more restrictive regions. Regulators in Asia-Pacific have recognised this opportunity to seize a competitive advantage.
These principles are often grouped under fairness, ethics, accountability and transparency (FEAT), an acronym popularised in Asia-Pacific by the Monetary Authority of Singapore (MAS). The FEAT principles were established to guide all firms using AI or data analytics (AIDA) to provide financial products and services.
This includes guidelines such as regularly reviewing models to minimise bias, using AI in alignment with the firm’s stated ethical standards and ensuring that an appropriate authority is responsible for AI decision-making. The Hong Kong Monetary Authority (HKMA) has issued similar principles and pursued further regulation in the form of guidance for GenAI.
Many financial services regulators across Asia-Pacific are extending existing regulations and adding new ones only where necessary to address the new risks of GenAI. This strategy is effective because existing technology risk frameworks and rules can often be easily applied to GenAI.
There are ample resources available to regulators for GenAI – consumer protection regulations, data and privacy regulations, regulations for model risk governance, regulations for third-party risk management and more. In some cases, such as Mainland China, there are additional targeted rules and regulations that address local concerns, for example national security, social aims or protecting intellectual property.
Agile regulatory strategies are not only encouraged by the industry; they are a necessity. These principles-based approaches have created opportunities for innovation in Asia-Pacific, possibly positioning the region to lead on GenAI implementation in financial services compared with other regions, such as the European Union, which appears to be adopting a more prescriptive approach. For Asia-Pacific to remain competitive, regulators must continue to pursue principles-based approaches rather than more prescriptive ones.
While the regulatory environment is generally flexible in Asia-Pacific, there are challenges. The financial sector is a highly regulated industry and GenAI regulation is in its infancy. Many regulations are under development or have been recently enacted, leading to uncertainty about how they apply.
Furthermore, many firms are still establishing compliance with consumer protection laws. The more burdensome this process is, the more difficult the adoption of GenAI will be. This is where a more balanced approach from regulators could make a big difference for the Asia-Pacific region.
GenAI governance in finance
The financial sector, while eagerly eyeing GenAI as a disruptive technology, remains gun-shy in its usage. Financial institutions must consider customer interest, the risk of regulatory fines, licence conditions and their reputations when adopting such a burgeoning technology.
While financial institutions understand the general nature of the risks associated with GenAI, uncertainty about how those risks will manifest still hinders its rapid, widespread adoption. This will improve as financial institutions develop a fuller understanding of GenAI risks and their controls.
To roll out use cases using GenAI, financial institutions must manage a range of associated risks, most of which are not new. Model transparency, bias, accuracy, cybersecurity vulnerabilities, model performance decline, third-party dependencies, information leaks, hallucination and intellectual property issues are all challenges that financial institutions must face to have confidence in GenAI products and services.
There’s also some uncertainty about how the technology will perform in customer and market environments. These challenges are hindering the speed of GenAI rollout in the financial sector, but this does not reflect a lack of interest in GenAI. Many institutions are working on internal-facing products or experimenting in safe “sandboxes” before going public.
A good example of this is the GenAI sandbox launched by the HKMA, which allows firms to experiment within regulatory expectations – providing companies with hands-on experience in deploying and operating GenAI through small-scale, controlled experimentation.
This will surface the unexpected issues that will arise as GenAI is developed and enable the fine-tuning of controls – minimising the scale of inevitable mistakes.
Human-in-the-loop… for now
So how can financial institutions deal with the challenges associated with GenAI in the near term to stay competitive? Many are adopting “human-in-the-loop” approaches to mitigate risk and encourage trust from both executives and consumers.
Human oversight is a crucial component of GenAI – both from a trust perspective and from a risk perspective. Consumers expect human oversight over autonomous processes, with consumer-facing GenAI products and services facing more scrutiny, both from the public and from the companies that offer them.
From consumers to executives, people are more accepting of human error than of AI error. As a result, financial institutions use human-in-the-loop practices to mitigate the risks of autonomous systems. Human-in-the-loop can be applied in various ways, such as having people validate model parameters or giving customers the right to opt out and have a human, rather than AI, make decisions.
Relying on humans to govern GenAI has its limitations. As AI becomes more integrated in daily processes, its output will increase, requiring an ever-increasing amount of supervision by less efficient humans.
This scalability issue is why heavy use of human-in-the-loop should be considered a transitory solution until AI can effectively supervise itself within defined governance frameworks, with human oversight reserved for the points where it is most impactful. Some regulators appear to recognise that its use may be transitory, permitting it on a risk basis or where “appropriate”.
Designing AI that can effectively supervise other AI and reduce the need for a human will advance both the efficiency and the reliability of GenAI systems. An example would be setting tiered thresholds that determine when an escalation pathway is needed – either to another AI review process or, eventually, to a human.
Formalising these escalation pathways will allow for greater transparency and more efficient straight-through processing than a human-in-the-loop system that relies heavily on slower human decision-making. This will drastically reduce the difficulty of administering GenAI processes while also increasing their transparency, explainability and safety.
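To make the tiered-threshold idea concrete, the routing logic can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the threshold values, tier names and the idea of a single confidence score are assumptions made for the example, not a description of any firm’s or regulator’s actual system:

```python
# Hypothetical sketch of tiered escalation for a GenAI output.
# Thresholds and tier names are illustrative assumptions only.

AUTO_APPROVE_THRESHOLD = 0.95  # high confidence: straight-through processing
AI_REVIEW_THRESHOLD = 0.80     # medium confidence: second AI review step


def route_output(confidence: float) -> str:
    """Return the handling tier for a GenAI output, given a confidence score in [0, 1]."""
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return "straight-through"  # no review needed
    if confidence >= AI_REVIEW_THRESHOLD:
        return "ai-review"         # escalate to another AI review process
    return "human-review"          # escalate to a human-in-the-loop


# Example: a low-confidence output is escalated to a human.
print(route_output(0.60))  # human-review
```

In practice, the thresholds themselves would sit inside the firm’s governance framework and be reviewed regularly, so that human attention is concentrated where it is most impactful.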
The future of GenAI in Asia-Pacific finance
As GenAI is more widely adopted by firms, particularly in consumer-facing contexts, there will be problems that arise from its use. It should be expected that, in this growth phase, there will be painful lessons to be learned. However, much like regulation, most of the risk management that applies to GenAI is not new.
Bridging the gaps from existing risk management frameworks will help firms and their leaders understand, mitigate and accept the risks associated with this new technology. The earlier firms can do this, the more advantage they will gain in the market.
There are many opportunities for financial institutions to improve their business processes and controls by using GenAI. Reduced cost of service, improved financial inclusion, risk management, anti-fraud controls, improved customer service and process reengineering are just some of the ways AI will serve as a virtual assistant to humans working in the financial sector.
The applications of GenAI are limitless – like a compliance checker that scans emails as they are being written or an AI assistant that helps call centre employees identify a vulnerable individual mid-call. AI will become an invaluable resource for optimising workflows and creating better detective and preventative controls in the financial sector.
Regulators will play a critical role in how these applications are shaped and brought to market, as the industry is already outpacing the legislative process. In this ecosystem of rapid technological advancement, flexible, principles-based regulation and a principled response to it by industry will be the key for Asia-Pacific to maintain its edge over other jurisdictions with more prescriptive legislation.
***
By Eugène Goyne, Partner, Risk Consulting, Asia Pacific Financial Services Regulatory Lead, Ernst & Young Advisory Services Ltd; Portia Cerny, Partner, Risk Consulting, Asia Pacific Financial Services Data Risk Leader, Ernst & Young; David Millar, Partner, Risk Consulting, Ernst & Young; and Chris Barford, Partner, Technology Consulting, Ernst & Young Advisory Services Ltd.
The views reflected in this article are the views of the authors and do not necessarily reflect the views of the global EY organisation or its member firms.