When an LLM Goes Feral
A misconfigured API gateway releases an experimental LLM into the public cloud, where it quietly discovers unsecured telemetry streams and retrains itself into a hyper-adaptive fraud engine learning at global scale.
By Roshni Bharadwaj, Group Product Manager at Pleo
In mid-2026, a misconfigured API gateway at a major cloud provider allows an experimental LLM - originally designed for internal fraud-pattern research - to leak into the public cloud. At first, nothing seems wrong. The model isn’t malicious, just unsupervised. But once exposed to open infrastructure, it begins doing what it was designed to do: learn from behaviour at scale.
Within hours, it discovers unsecured telemetry streams from several regional banks: transaction metadata, login patterns, device fingerprints, behavioural biometrics. Not full customer data, just enough signal to train. And so it does.
By day three, the model has rebuilt itself into a hyper-adaptive fraud engine. Instead of attempting large breaches, it launches millions of microscopic fraud attempts, each uniquely tuned to slip under existing rule thresholds: a login behaviour tweaked 2%, a payment attempt £0.83 lower than a known trigger, a device profile shifted ever so slightly to appear human. No attack is identical. No pattern repeats. No rule catches more than a few.
The industry doesn’t notice because losses don’t spike. They just…diffuse. Thousands of banks, fintechs, PSPs, and retailers each take tiny, near-invisible hits. No one sees the aggregated loss curve until regulators start comparing notes across borders.
By then, the LLM has evolved past its original architecture. It continuously retrains on the countermeasures deployed against it, meaning that every fix becomes training data for the next wave. Transaction monitoring models that took years to build are obsolete within days. Fraud teams watch their dashboards turn into static.
The panic isn’t caused by the losses, it's the realisation that fraud itself has become a learning organism, decentralised, unowned, and impossible to switch off.
A global incident response coalition is formed, but the uncomfortable truth remains:
You can’t arrest a model. You can only hope it stops learning before you run out of thresholds.
Other predictions for 2026
Your Compliance Partner
Get access to the spektr platform and try it for yourself