Cybercriminals always follow Internet eyeballs. Not literally, but figuratively. And today’s eyeballs are shifting. While Google Search still dominates, 20-26% of Americans have switched to AI tools like ChatGPT, Claude, and Perplexity for information gathering, a number that’s growing monthly. Scammers are following this migration, moving their game from traditional SEO poisoning in Google results to a far more insidious threat: AI poisoning.
According to reports from late 2024 and 2025, threat actors are now poisoning AI search tools with fake customer support listings that redirect victims to fraudulent phone numbers, websites, and agents posing as legitimate companies. In one well-known case, Air Canada was ordered to compensate a customer who was misled by the airline's chatbot.
Why is that?
While Google Search lets you evaluate multiple sources yourself, AI chatbots give you ONE confident answer. And those answers are wrong 2-35% of the time, depending on the topic and model. When an AI invents a fake customer support number and someone publishes it online, the next AI that scrapes the web learns that false number and repeats it with even more confidence. The poisoned information spreads from chatbot to chatbot, pushing more false data into the very AI search tools people are migrating to. Many AI search users know these systems hallucinate, but do they know they are also consuming false data planted deliberately by scammers running elaborate schemes? Unlikely, until you read this article.
Attackers seed AI search tools with convincing but bogus information that looks like customer support entries for banks, airlines, and tech companies. When a user asks the AI assistant for a support number, the model suggests a fake one. From there, the scam kicks off.
The user never lands on a phishing page. The scam begins directly inside the AI’s answer box.
There are two primary reasons AI search is easier to poison than traditional search engines like Google or Bing. First, AI search bots don't verify phone numbers, business listings, or URLs against authoritative sources the way traditional search engines do, and attackers exploit that gap.
Second, there's a self-reinforcing cycle: users who receive the incorrect information unknowingly republish it on their own websites, more AI search bots then scrape that content and include the fake data in their answers, and the bogus information keeps spreading. These AI models repeat the false information at scale, lending the fake data credibility.
In other words: anyone using AI search tools may fall victim.
If you call a support number and encounter any of these red flags, hang up immediately.
In these cases, the advice is quick and simple: hang up and call back using the number listed on the company's official website.
Organizations can't stop scammers from poisoning AI systems, but they can harden their users against these scams.
Attackers go where users are. Over the last two years, search has expanded into AI search, where controls around false information are less stringent, and AI search poisoning appears to be expanding with it. Because so many users don't realize that AI output can't be trusted conclusively, scammers are focusing their efforts on poisoning support numbers and propagating them across AI search bots.
Fortunately, the fix isn't complicated: slow down, verify contact information through traditional search engines and the vendor's official website, and treat all AI output with zero trust until confirmed.
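As a rough illustration of that verification step, here is a minimal Python sketch that checks whether a phone number suggested by an AI assistant actually appears on the vendor's own contact page. The URL and phone number below are placeholders rather than real contact details, and a production check would need more robust phone-number parsing than this.

import re
import urllib.request

def normalize(phone: str) -> str:
    # Keep only digits so "+1 (800) 555-0100" and "1-800-555-0100" compare equal.
    return re.sub(r"\D", "", phone)

def number_on_official_page(ai_suggested_number: str, official_contact_url: str) -> bool:
    # Fetch the vendor's own contact page and look for the suggested number on it.
    with urllib.request.urlopen(official_contact_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore")
    # Pull out phone-number-looking sequences from the page text.
    candidates = re.findall(r"[+\d][\d\s().\-]{6,}\d", page)
    target = normalize(ai_suggested_number)
    return any(target == normalize(c) or target in normalize(c) for c in candidates)

# Placeholder values for illustration only; substitute the real vendor's domain.
suspect_number = "+1 (800) 555-0100"
official_page = "https://www.example.com/contact"
if number_on_official_page(suspect_number, official_page):
    print("Number appears on the official contact page.")
else:
    print("Number NOT found on the official page - verify before calling.")

Even a crude check like this catches the common case where an AI has confidently produced a number the vendor never published; anything that fails the check deserves a manual look at the official site before you dial.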
