AI Poisoning: Fake Support Scam — AI Search as the New Attack Surface

16th December 2025 | Blog

Cybercriminals always follow Internet eyeballs. Not literally, but figuratively. And today’s eyeballs are shifting. While Google Search still dominates, 20-26% of Americans have switched to AI tools like ChatGPT, Claude, and Perplexity for information gathering, a number that’s growing monthly. Scammers are following this migration, moving their game from traditional SEO poisoning in Google results to a far more insidious threat: AI poisoning.

According to reports from late 2024 and 2025, threat actors are now poisoning AI search tools with fake customer support listings that redirect victims to fraudulent phone numbers, websites, and agents posing as legitimate companies. In a related cautionary case, Air Canada was ordered to compensate a customer who was misled by the airline's own chatbot.

Why is AI search such an attractive target?

While Google Search lets you evaluate multiple sources yourself, AI chatbots give you ONE confident answer. And those answers are wrong 2-35% of the time, depending on the topic and model. When an AI invents a fake customer support number and someone publishes it online, the next AI that scrapes the web learns that false number and repeats it with even more confidence. The poisoned information spreads from chatbot to chatbot, pumping more false answers into the very AI search tools people are migrating to. Many AI search users know these systems hallucinate, but are they aware they are also consuming false data deliberately planted by perpetrators running elaborate schemes? Unlikely, until you read this article.

How These AI Search Poisoning Scams Work

Attackers feed AI search tools with convincing but bogus information that looks like customer support entries for banks, airlines, and tech companies. When a user asks the AI assistant for a support number, the model suggests a fake one. From there the scams kick off:

  • Phony agents try to extract payment info.
  • Remote access tools are pushed to “fix” fake problems.
  • Refund scams trick users into sending money back.
  • Account takeover attempts start the moment they answer the call.

The user never lands on a phishing page. The scam begins directly inside the AI’s answer box.

Why AI Is Easier to Poison

There are two primary reasons AI search is easier to poison than traditional search engines like Google or Bing. First, AI search bots don't verify phone numbers, business listings, or URLs against authoritative sources the way traditional search engines do. Attackers exploit that gap in several ways (see the sketch after this list):

  • Publishing fake but legitimate-looking business data.
  • Creating SEO-boosted websites that models scrape.
  • Mass-submitting false support contacts on smaller directories.
  • Generating entire scam ecosystems with AI tools.
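To make the "no verification" point concrete, here is a minimal sketch of how a naive crawler or answer pipeline might ingest a support number from schema.org-style JSON-LD markup on a page it scrapes. Everything in it is hypothetical (the page content, names, and number are invented), and real pipelines are more complex, but it shows the gap poisoning exploits: nothing ties the claimed organization to an authoritative source.

```python
import json
import re

# Hypothetical HTML scraped from an attacker-controlled page. The JSON-LD
# block mimics legitimate schema.org business markup, so a naive pipeline
# has no structural reason to distrust it.
scraped_html = """
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Airlines Customer Support",
  "telephone": "+1-800-555-0199",
  "url": "https://example-airlines-support.example"
}
</script>
"""

def extract_support_number(html: str) -> str | None:
    """Pull the first telephone field out of any JSON-LD block.

    Note what is missing: no check that the page's domain matches the
    organization it claims to represent, and no lookup against an
    authoritative registry. That gap is exactly what poisoning exploits.
    """
    for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        if "telephone" in data:
            return data["telephone"]
    return None

print(extract_support_number(scraped_html))  # +1-800-555-0199 -- trusted blindly
```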

Second, there is a self-reinforcing cycle: users who receive the incorrect information unknowingly republish it on their own websites, where more AI search bots consume it and fold the fake data into their answers, perpetuating the bogus information. These AI models then repeat the false information at scale, lending the fake data credibility.

Who’s Most at Risk

  • Older adults who rely on AI virtual assistants
  • Anyone using AI to search for “customer support phone number”
  • AI Search users trying to contact airlines, delivery companies, app stores, banks, or subscription platforms

In other words: anyone using AI search tools can fall victim.

Warning Signs You’ve Reached a Scam Support Line

If you call a support number and encounter any of these, hang up immediately:

  • Agent asks for payment via gift cards, wire transfer, or cryptocurrency
  • Pressure to “act now” or “your account will be closed”
  • Request for remote access to your computer for a “routine” issue
  • Asking for a full credit card number when you’re calling about a non-billing issue
  • The support rep cannot, or does not bother to, verify basic information about your account that a real agent would check

In these cases, the advice is quick and easy: hang up and call back using the number listed on the official website.

Advice for All AI Search Users to Follow

  • Never trust a support number from AI search without verifying it on the vendor's official website, and be careful how you reach that website: lookalike sites are appearing in search results with growing frequency in these poisoning scams (see the verification sketch after this list).
  • Avoid calling numbers from random blogs, PDFs, Reddit posts, or forums.
  • Bookmark the official support pages of companies you rely on.
  • Enable MFA, so even if you slip up with credentials, attackers can’t immediately breach your account.
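As a concrete version of the first tip, here is a minimal sketch (with a placeholder vendor URL, not a real endpoint) that checks whether a phone number an AI gave you actually appears on the vendor's official support page. It compares digits only, so formatting differences don't matter. It assumes you already trust the official URL, for example from a bookmark; it's a sanity check, not a substitute for judgment.

```python
import re
import urllib.request

def digits_only(number: str) -> str:
    """Normalize a phone number to its digits so formatting doesn't matter."""
    return re.sub(r"\D", "", number)

def number_on_official_page(ai_number: str, official_url: str) -> bool:
    """Return True if the AI-suggested number appears on the official page.

    Assumes you already trust `official_url` (e.g., a bookmarked support
    page). The official site is the authority here, not the AI answer box.
    """
    with urllib.request.urlopen(official_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    target = digits_only(ai_number)
    candidates = {digits_only(m) for m in re.findall(r"[\d()+\-. ]{7,}", page)}
    return any(target == c or target in c for c in candidates)

# Hypothetical usage: the URL below is a placeholder, not a real endpoint.
if number_on_official_page("+1 (800) 555-0199", "https://support.example.com/contact"):
    print("Number matches the official page.")
else:
    print("Number NOT found on the official page -- treat it as suspect.")
```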

What Companies Should Do

Organizations can’t stop scammers from poisoning AI systems, but they can harden their users:

  • Register your official support numbers with Google Business Profile and other directory services to help your customers reach you.
  • Train staff to verify support channels before contacting vendors.
  • Include “fake support number” examples in phishing and social engineering awareness programs.
  • Publish your official support contacts clearly and consistently.
  • Monitor the web for fraudulent listings, typo-squatted domain names, or look-alike websites that abuse your brand.
  • Implement SPF, DKIM, and DMARC email authentication for your sending domains. These protections are published as DNS TXT records, not MX records (a quick check sketch follows this list).
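For that last bullet, here is a hedged sketch using the third-party dnspython package (pip install dnspython) to confirm that SPF and DMARC records are actually published for a domain. The domain below is a placeholder; swap in one you control.

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Fetch TXT records for a DNS name, returning [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    """Print whether SPF and DMARC records are published for `domain`.

    SPF lives in a TXT record on the domain itself; DMARC lives in a TXT
    record at _dmarc.<domain>. DKIM keys sit under selector-specific names,
    so they can't be checked without knowing the selector in use.
    """
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"SPF:   {'present' if spf else 'MISSING'} {spf}")
    print(f"DMARC: {'present' if dmarc else 'MISSING'} {dmarc}")

# Placeholder domain -- replace with one you control.
check_email_auth("example.com")
```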

Bottom Line

Attackers go where users are. Over the last two years, search has expanded into AI search, where controls around false information are less stringent, and AI search poisoning appears to be expanding with it. Because so many users don't yet realize that AI output can't be trusted conclusively, scammers are focusing their efforts on poisoning support numbers and propagating them across the Internet's AI search bots.

Fortunately, the fix isn't complicated: slow down, verify contact information through traditional search engines and the vendor's official website, and treat all AI output with zero trust until confirmed.
