Having recently highlighted technology trends in financial services, and AI in particular, the natural next step is governance and compliance when using this new technology.
As a rule, AI systems should be viewed in the same way as any other information system within your financial services organisation. It is therefore important to ask some key questions. Do they fit your acceptable usage policies? Do they meet your security policies? And do they adhere to both legal and sector-specific regulatory rules?
As a minimum requirement, you should consider the following:
- Is your use of AI ethical, considering accountability and fairness?
- Are you operating within data protection rules when using these systems, and complying with regulations such as GDPR?
- Do you have controls and checks in place to avoid bias and discriminatory outcomes? Depending on the data they were trained on, AI models can be prone to both (see the sketch after this list).
- Does your use of AI meet regulatory requirements? It is important to adhere to regulations set out by the FCA, including Know Your Customer (KYC) and record-keeping obligations.
- Have you factored AI into your risk management? Assess and monitor the risks associated with its use.
- Are you retaining human oversight? It is paramount that humans remain responsible for critical decisions and overall accountability.
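To make the point about bias checks concrete, below is a minimal sketch of one common test, the demographic parity gap, applied to a model's approval decisions. The dataset, column names and 5% tolerance are all hypothetical and for illustration only; real fairness monitoring in a regulated firm would involve far more than a single metric.

```python
# Illustrative only: a simple demographic parity check on model outputs.
# The dataset, column names and 5% tolerance are hypothetical examples.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions from a credit-approval model.
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})

gap = demographic_parity_gap(decisions, "age_band", "approved")
print(f"Approval-rate gap across age bands: {gap:.2%}")

# Escalate if the gap exceeds an agreed tolerance (example: 5%),
# keeping a human in the loop as the checklist above recommends.
if gap > 0.05:
    print("Gap exceeds tolerance - escalate for human review.")
```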
So, what does this all mean in practice? If we use ChatGPT as an example, when asked directly, the system claims not to have access to personal information unless the user provides it. It also recommends that users "avoid sharing sensitive personal information online and be cautious while interacting with AI systems or any other online platforms."
When asked who owns the data it provides, ChatGPT answers: "ChatGPT is an AI developed by OpenAI, the ownership and handling of data are determined by OpenAI's policies. It's important to review OpenAI's privacy policy or terms of service for specific details on data ownership and usage," which in turn leads to information stating that inputs will be used as part of system training.
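One practical way to act on that advice is to strip obvious personal identifiers from text before it ever reaches an external AI system. The sketch below uses simple regular expressions; the patterns are illustrative assumptions, and no pattern list will catch every form of personal data, so this supplements policy rather than replacing it.

```python
# Illustrative sketch: redact obvious personal identifiers before text
# is sent to an external AI service. The patterns below are examples
# only and will not catch every kind of personal data (e.g. names).
import re

REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "UK_NINO": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # National Insurance number
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client John's email is john.smith@example.com, phone 01632 960983."
print(redact(prompt))
# -> "Client John's email is [EMAIL], phone [UK_PHONE]."
```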
What steps can you take to ensure compliance whilst using AI?
- It is important to understand that AI technology is not confined to large systems running on the internet with millions of global users. If your business sees a use case for these systems, it is worth considering building your own in isolation. For example, if you like a system such as ChatGPT or Google Bard, it is possible to create something similar for yourself, trained only on data you choose and accessible only to your own users. By doing this you control both the data going into and coming out of the system (see the sketch after this list).
- It is worth appointing owners within your business to keep abreast of the technology as well as its regulatory implications.
- Take into consideration how new the marketplace is. Although AI has been around for a long time, the rapid growth of large-scale online systems means the industry will move at pace, and providers will likely start offering compliant systems.
- As with any use of technology, companies should ensure the correct policies and procedures are in place for internal users. Acceptable use policies need to be updated to cover AI systems and, where appropriate, relevant sites and systems should be restricted.
- Where AI products are being integrated into day-to-day tools (such as Copilot within Microsoft 365), you should work with your IT Managed Service Provider to understand the implications. If needed, these products can be configured to ensure they meet your obligations as a financial services business.
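As a sketch of the in-house approach described in the first point above, the snippet below runs an openly licensed model entirely on your own hardware using the Hugging Face transformers library, so prompts and outputs never leave your infrastructure. The model name is just an example, and a real deployment would sit behind your own access controls.

```python
# Illustrative sketch: running an open-source language model locally,
# so prompts and outputs never leave your own infrastructure.
# The model name is an example; choose one whose licence suits your business.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",  # example open model; swap in your chosen one
)

prompt = "Summarise the key record-keeping obligations for a UK financial adviser:"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```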
If you would like to know more about the impact and opportunity of AI within your financial services business, please speak to one of our experts.