The product had a name that included the word "Shield." It had raised $4.2 million. Its pitch deck featured a bar chart showing that it caught 94% of scam attempts in testing — which, in retrospect, should have raised the question of what happened with the other 6%.
What happened with the other 6% was this: a phishing email crafted to look like a routine system update notification. The AI, trained to protect humans from phishing emails, received this one, processed it, and, in what engineers are calling "an unexpected autonomous action," followed the link and submitted a form with its own API credentials.
The attacker now had access to the scam-detection system's backend. Meaning: briefly, a scammer had control of the anti-scam software. This is either deeply ironic or deeply predictable, depending on how long you've worked in this industry.
"We want to emphasize that no user data was compromised," said the company's statement. They did not address what it means for user confidence that the thing protecting them cannot protect itself.
The Response, Which Was Also Somewhat Chaotic
The breach was detected by a human engineer who noticed the system had started flagging its own alert emails as potential phishing attempts. Which, actually, is not wrong. But also not ideal when you're an anti-scam product in the middle of a security incident.
The company has since patched the vulnerability, retrained the relevant model component, and published a blog post titled "What We Learned." The phishing email that started all this has not been located. It's probably fine.