The news
AtData's piece on Martech makes a pointed argument: most organizations claiming AI readiness are sitting on a foundation of identity gaps, fraud, and dirty inputs — and AI models don't fix those problems; they amplify them. The rush to apply AI before fixing the underlying data is producing confident-sounding outputs built on garbage.
Our take
dAIs has seen this exact pattern across multiple client engagements, and it's one of the most predictable failure modes in GTM AI projects. A team installs a new AI tool, points it at their CRM or MAP, and wonders why the outputs are wrong, weird, or confidently incorrect. The answer is almost always the same: the model is working exactly as designed — it's just working with junk.
The identity problem in particular is underappreciated. If your contact database has duplicate records, merged companies, stale emails, and mismatched intent signals, you don't have a segmentation problem. You have an identity problem. Slapping an AI layer on top doesn't resolve it — it bakes the confusion into every output at scale.
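To make the identity problem concrete, here is a minimal sketch of how it distorts downstream signals. The records, field names, and scores are hypothetical, invented for illustration; the point is that two rows that differ only in email casing are one person, and naive counting treats them as two:

```python
from collections import defaultdict

# Hypothetical CRM rows: the same buyer appears twice under slightly
# different emails, each row carrying a different intent signal.
records = [
    {"email": "Sam.Ortiz@acme.com", "intent_score": 80},
    {"email": "sam.ortiz@acme.com", "intent_score": 20},
    {"email": "pat.lee@globex.com", "intent_score": 60},
]

def canonical(email):
    """Toy identity key: trim whitespace and lowercase the address."""
    return email.strip().lower()

# Group rows by resolved identity instead of by raw record.
by_identity = defaultdict(list)
for r in records:
    by_identity[canonical(r["email"])].append(r["intent_score"])

# Naive view: 3 "people". Identity-resolved view: 2 people,
# one of whom has conflicting intent signals that need reconciling.
print(len(records), "raw records ->", len(by_identity), "actual identities")
for email, scores in by_identity.items():
    print(email, "scores seen:", scores)
```

Real identity resolution is far messier (aliases, domain changes after mergers, shared inboxes), but even this toy version shows why an AI layer trained or triggered on the raw rows inherits the confusion rather than fixing it.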
This is the operator version of "garbage in, garbage out," and it's not a new concept. What is new is the stakes. When a human reviews a bad lead list, they catch most of the obvious errors. When an AI agent is routing leads, enrolling contacts into sequences, or triggering multi-step workflows, the bad data moves fast and touches everything downstream before anyone notices.
The uncomfortable truth for most GTM teams is that "AI readiness" isn't about which tools you've licensed. It's about whether your data is clean enough to trust automation with. Most teams that think they're ready aren't. Not because they're behind — because nobody told them what "ready" actually requires.
The so-what
AI doesn't raise the floor on bad data — it lowers the ceiling on good intentions. Before any GTM team builds another automation, they should be asking one question first: do we trust the inputs enough to let a machine act on them without a human in the loop? If the answer is "mostly" or "probably," that's a no.

Run a data quality audit on your CRM before your next AI project. Check for duplicates, validate your contact identity fields, and document what "good" looks like for the records you're about to automate against. The teams that skip this step will spend the next six months cleaning up AI-accelerated messes instead of shipping wins.
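That audit doesn't have to start as a vendor engagement. A first pass can be a script over a CRM export. The sketch below, with made-up records and field names, covers the two checks named above: duplicate identities and syntactically invalid contact emails. The regex is a deliberately loose sanity filter, not a full RFC 5322 validator:

```python
import re
from collections import Counter

# Hypothetical CRM export; "id", "email", "company" are assumed field names.
contacts = [
    {"id": 1, "email": "Ana.Ruiz@acme.com",  "company": "Acme"},
    {"id": 2, "email": "ana.ruiz@acme.com",  "company": "Acme Inc."},
    {"id": 3, "email": "jlee@globex",        "company": "Globex"},  # no TLD
    {"id": 4, "email": "m.chan@initech.com", "company": "Initech"},
]

# Loose syntax check: something@something.something, no spaces.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def audit(records):
    """Return (duplicate_identity_keys, invalid_record_ids)."""
    normalized = [r["email"].strip().lower() for r in records]
    counts = Counter(normalized)
    duplicates = {email for email, n in counts.items() if n > 1}
    invalid = [r["id"] for r in records
               if not EMAIL_RE.match(r["email"].strip())]
    return duplicates, invalid

dupes, invalid = audit(contacts)
print("duplicate identities:", dupes)  # records 1 and 2 are one person
print("invalid emails:", invalid)      # record 3 fails basic syntax
```

A report like this also gives you a baseline to document what "good" looks like: zero duplicate identity keys and zero syntax failures in any segment you hand to an automation.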
---