Banks and insurers in the EU need to move quickly to ensure their AI systems comply with fast-approaching regulatory requirements. But their commitment to responsible AI should go beyond regulatory compliance and instead support an ethical approach to business.
Financial services firms are eager to adopt AI to improve their operational efficiency and enhance customer service. However, few organizations are prepared for regulations that will soon require them to record all AI systems associated with their businesses and determine their level of risk.
The European Union AI Act, the world's first comprehensive AI law, came into force in August 2024. Companies will need to comply with its initial requirements by February 2025, with further deadlines taking effect over the following six years (see diagram below).
“The EU AI Act doesn’t just affect companies in Europe. It also applies to organizations whose AI output is used in the EU, regardless of where they are headquartered,” says Bernadette Wesdorp, Financial Services AI Leader and Director at EY.
The organizations most affected by the AI Act, according to Wesdorp, are “providers” that develop AI systems and “deployers” that use them. The EU regulations are intended to ensure that AI systems are safe, transparent and non-discriminatory. Breaches of the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.