Limiting the risky side of AI in financial services

Banks should monitor ‘poisoned’ AI, set rules for employee use of this technology and regularly invest in security.

Apr 16, 2025 / Fraud Prevention / Technology

Artificial intelligence (AI) has permeated nearly every facet of our lives, from our phones to our internet searches and beyond. Banks and other financial institutions are no exception: By one count, 72% of these organizations have already adopted AI.

As leaders pursue any competitive edge they can, the majority (66%) are willing to accept most risks of AI. However, AI can pose very real threats. To truly make the most of AI investments, consider taking these steps to better secure your business.

Preventing ‘poisoned’ AI

Have you read the stories about Reddit users trying to keep tourists out of their favorite local restaurants by posting glowing reviews of other places, so those dining options are more likely to show up in an AI summary? It seems harmless, but the same methodology can be used for far more nefarious purposes. These are known as data poisoning attacks, and any AI model connected to or built on a large language model (LLM) is susceptible to them.

For example, a bad actor can publish a fake repository that looks nearly identical to a legitimate one, then “seed” mentions of it on Reddit and other sites that models are trained on. Any code the poisoned AI then generates can carry the attacker’s payload, giving the bad actor access to whatever sensitive systems and data that code touches. I would know: I once presented a similar method for compromising continuous integration and continuous delivery (CI/CD) servers. GitHub subsequently changed how users were allowed to search the platform, making vulnerable servers harder to find and the technique harder to exploit.

Whether you’re building your own models or leveraging third-party LLMs, protecting against poisoning comes down to how much you can trust the data sources the model is trained on. If the model is trained on data scraped from the internet at large, which may be necessary to maintain a competitive edge, we don’t yet have a reliable way to stop these attacks. If you can curate the data sources the model uses, however, you can filter out the manipulated content that creates poisoning opportunities before it is translated into poisoned code.
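As a concrete illustration, curation can start with something as simple as an allowlist of trusted source domains applied during data ingestion. This is a minimal sketch, not a production pipeline; the `TRUSTED_DOMAINS` set and the document record format are hypothetical stand-ins for whatever your own ingestion code uses.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains whose content we trust for training data.
TRUSTED_DOMAINS = {"docs.python.org", "internal.example-bank.com"}

def filter_trusted(documents):
    """Keep only documents whose source URL is on the curated allowlist.

    `documents` is a list of dicts with "url" and "text" keys, a stand-in
    for whatever record format your ingestion pipeline actually uses.
    """
    kept = []
    for doc in documents:
        host = urlparse(doc["url"]).netloc.lower()
        if host in TRUSTED_DOMAINS:
            kept.append(doc)
    return kept

docs = [
    {"url": "https://docs.python.org/3/library/json.html", "text": "..."},
    {"url": "https://evil-seeded-blog.example/fake-repo", "text": "..."},
]
print([d["url"] for d in filter_trusted(docs)])  # only the allowlisted source survives
```

An allowlist trades coverage for trust: the model sees less of the internet, but every source it does see has been vetted.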

Above all else, the rise of AI usage reinforces the absolute necessity of having scalable, high-quality security testing methods, including those designed to scan for malware in repositories or surface other parts of code that could compromise your servers.
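One such testing method can be sketched as a lightweight repository scan for patterns that commonly show up in malicious install scripts. The patterns below are illustrative assumptions, not a real malware signature set; treat this as a starting point, not a substitute for dedicated scanning tools.

```python
import re
from pathlib import Path

# Illustrative patterns only -- real malware scanning needs far more than regexes.
SUSPICIOUS = [
    re.compile(rb"eval\s*\(\s*base64"),      # executing decoded payloads
    re.compile(rb"curl[^\n]*\|\s*(ba)?sh"),  # piping a download straight into a shell
    re.compile(rb"chmod\s+\+x\s+/tmp/"),     # staging executables in /tmp
]

def scan_repo(root: str) -> list[tuple[str, str]]:
    """Walk a checked-out repository and return (file, pattern) pairs
    for every file matching one of the suspicious patterns."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for pat in SUSPICIOUS:
            if pat.search(data):
                hits.append((str(path), pat.pattern.decode()))
    return hits
```

A scan like this is cheap enough to run on every pull request, which is what makes it scalable in the sense described above.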

Monitoring employee AI usage

Companies aren’t the only ones embracing AI; their employees are, too. For example, Salesforce found that 55% of employees have used unapproved generative AI (gen AI) tools at work. Nearly 70% of that same population have never received, let alone completed, training on how to use gen AI responsibly at work. As Salesforce also points out, employees may still “send sensitive data through unsecured LLMs.” They may understand in general that they shouldn’t give sensitive information to a publicly available model, but that doesn’t necessarily stop the behavior.

Because LLMs are trained on vast amounts of information, they inevitably ingest sensitive data. Model providers put guardrails in place, but those guardrails aren’t infallible: In one example, a hacker coaxed a model into producing a list of what appeared to be real credit card numbers without tripping any safeguards. All of that ingested information also makes companies like OpenAI big targets for hacking. If an employee at your institution has put information they shouldn’t have into one of these models, a breach could expose that data down the line.
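A basic technical guard is to redact card-like data locally before a prompt ever reaches an external model. The sketch below is a simplified example using a regular expression plus the standard Luhn checksum; a real data-loss-prevention layer would cover many more data types than card numbers.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Loose candidate match: 13-19 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def redact_cards(prompt: str) -> str:
    """Replace Luhn-valid, card-like numbers with a placeholder before the
    prompt ever leaves your network."""
    def repl(m):
        digits = re.sub(r"\D", "", m.group())
        return "[REDACTED-PAN]" if luhn_valid(digits) else m.group()
    return CARD_RE.sub(repl, prompt)
```

Running redaction on your side of the network boundary means nothing depends on the model provider’s guardrails working.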

This is a problem that can’t be solved by taking just one action. Instead, it calls for a combination of measures: a clear policy on which AI tools are approved, training on how to use them responsibly, and monitoring for unsanctioned usage.

Ultimately, you should aim to build a culture of security. A good way to do this is by actively encouraging good security hygiene as part of your metrics or KPIs. And as you might imagine, it can help secure your enterprise on other levels, too.

Continually investing in security

AI can supercharge an organization’s productivity, but it can also supercharge malicious actors’ attempts to breach your organization. Using AI, hackers can drastically increase the sophistication of the constant attacks that are being launched—and they only need to be successful once. This new era of cyberattacks is already here: By one measure, 93% of surveyed security leaders believe AI-powered attacks will become a daily occurrence.

Fortunately, AI has positive uses in cybersecurity, too. It can augment repetitive cybersecurity tasks, by one estimate up to half of them, including threat detection and incident response. With that automation in place, your team can shift focus to strategic initiatives, improving your overall security posture. These and other new solutions can help lift the burden, but they will require resources.
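What that automation looks like in practice can be as simple as scripted triage that surfaces only the events worth a human’s attention. The sketch below flags source IPs with repeated failed logins; the event format and threshold are assumptions for illustration, and real detection tooling would add far more context.

```python
from collections import Counter

def triage_failed_logins(events, threshold=5):
    """Count failed logins per source IP and flag the ones that exceed the
    threshold, so analysts review a short list instead of raw logs."""
    failures = Counter(e["ip"] for e in events if e["status"] == "fail")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

events = (
    [{"ip": "203.0.113.9", "status": "fail"}] * 7     # noisy attacker
    + [{"ip": "198.51.100.4", "status": "fail"}] * 2  # ordinary typos
    + [{"ip": "198.51.100.4", "status": "ok"}]
)
print(triage_failed_logins(events))  # → ['203.0.113.9']
```

The point is not the specific rule but the shift in workload: the machine does the counting, and people do the judging.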

Simply put: The best way to invest in security is to invest in what your security team asks for, and keep doing it. Your team knows where your vulnerabilities and areas of concern are; give them what they need to address them. Justifying the spend that sound cybersecurity requires is often a challenge, however, precisely because good cybersecurity means threats are mitigated before anyone sees them. It’s easy to be lulled into a false sense of security and conclude that, since there’s no visible ROI in a spreadsheet, cybersecurity doesn’t need more investment.

Ignoring cybersecurity comes with costly risks. The SEC has issued disclosure rules with penalties of up to $25 million, and in one example, a charge related to data management cost Citi $136 million. The federal government isn’t the only one capable of enforcing rules and regulations, either: New York State reached a $2 million settlement with PayPal over a 2022 data breach.

However, the biggest damage done by a data breach is difficult to quantify, just like the ROI of cybersecurity. A data breach can seriously rupture your customers’ trust and possibly even end your relationship. In a world that’s only getting more competitive and more automated, trust is the most valuable thing you have, and protecting it is priceless.

Greg Anderson is founder and CEO of DefectDojo.