Marketing Is About Communication. AI Makes Bad Communication Worse.
Jen Carroll
The problem is no one knows what they don't know, but AI makes you feel like you've learned enough.
I recently read Malcolm Gladwell’s Talking to Strangers. As someone whose career centers on helping clients persuade strangers to trust them, the book seemed like one I should read.
One aspect I found particularly interesting: Gladwell’s coupling theory—how certain behaviors link to specific contexts. Take suicide, for example. When England phased out coal-based town gas between 1967 and 1977, overall suicide rates dropped. When American cities started installing safety barriers on bridges popular with jumpers, suicides in those areas decreased.
Today’s AI chatbots create a similar coupling effect with flawed thinking, including in cases with devastating consequences like harmful delusions and suicide. This same dynamic also plays out in lower-stakes contexts like marketing. When AI chatbots validate poor strategic thinking rather than challenge it, businesses can spend a great deal of time pursuing ineffective approaches before recognizing the problem.
Marketing is about communication. Marketing communication, the fundamental skill of understanding and conveying messages that resonate and persuade, requires study and practice. You can’t download this expertise from an AI chatbot into your brain. Yet, these chatbots often provide frictionless support for any perspective, removing natural barriers (research effort, expert disagreement, social accountability, etc.) that might prompt you to reconsider faulty assumptions.
When it comes to AI, finding agreement may be easier than finding truth. This means the same AI implicated in risky mental health therapy and even suicide might also agreeably rubber-stamp the misconceptions of business leaders looking for quick answers about digital marketing in small businesses.
TL;DR
AI chatbots like ChatGPT function as “validation machines,” often confidently agreeing with your ideas rather than challenging flawed assumptions. This creates a dangerous feedback loop for business leaders who lack foundational communication skills.
Unlike skilled marketing communicators who recognize gaps in your thinking and provide contextual judgment, AI can reinforce misconceptions with authoritative-sounding language. For small businesses especially, relying on AI for strategic marketing advice may compound errors rather than correct them. True marketing expertise requires understanding customer challenges, strategic integration, and data literacy—skills AI cannot teach, only validate.
What most SMB leaders miss: The same agreeable nature that makes AI feel helpful—its tendency to validate rather than challenge—is precisely what makes it dangerous for strategic decisions. Communication expertise develops through confronting disagreement, not through endless affirmation.
AI as 'validation machine'
Mozilla’s chief technology officer Raffi Krikorian calls AI chatbots “validation machines.” Controversial AI researcher Eliezer Yudkowsky also recently described this phenomenon on The Ezra Klein Show as a feedback loop that compounds existing thinking rather than correcting it.
In his recent Substack article Giving Your AI a Job Interview, Wharton professor and AI researcher Ethan Mollick wrote:
You need to know specifically what YOUR AI is good at, not what AIs are good at on average…when tasks involve judgment on ambiguous questions, different models give consistently different advice. These differences compound at scale. An AI that’s slightly worse at analyzing financial data, or consistently more risk-seeking in its recommendations, doesn’t just affect one decision, it affects thousands.
The problem is no one knows what they don’t know, but AI makes you feel like you’ve learned enough.
Long before AI chatbots existed, research showed that people—especially those with poor communication skills—tend to overestimate their ability to communicate effectively; it’s a widely documented cognitive bias called the Dunning-Kruger effect.
Now AI amplifies this problem. OpenAI’s September 2025 study, the largest to date on ChatGPT usage, found that chatbot interactions can lead to what journalists and researchers are calling “social deskilling,” where constant validation and affirmation reduce people’s tolerance for disagreement and erode real-world social skills.
Communication is an ongoing casualty of this phenomenon, and will remain one.
Foundational skills of marketing communication
Skilled communicators in every industry seek to understand many things, including the why behind people’s decisions. And while marketing is about communication at its core, the practice of strategic marketing communication requires:
- Translating business goals into communication objectives your team can execute.
- Developing messaging that addresses specific customer challenges in language they actually use.
- Understanding interdependencies—you can’t skip foundational steps even when you want faster results.
- Recognizing what questions to ask before proposing solutions.
- Distinguishing correlation from causation in your data and knowing when metrics mislead rather than illuminate.
- Understanding your competitive landscape and how it shapes what will work for you.
- Recognizing your own knowledge limitations and when to seek expertise.
Digital marketing for small businesses isn’t about executing a list of tactics from an AI marketing consultant—it’s about understanding why those tactics work (or don’t) in your specific context. You need to grasp not just how to write a message, but why that message resonates with your audience. Not just what metrics to track, but why those metrics matter for your business goals.
Given the limitations of ChatGPT and other chatbots, these skills don’t develop through quick consultations with AI. They develop through understanding how and why marketing communication actually works.
What AI chatbots currently do (and don't do)
Reinforcement learning and human preferences
Large language models like ChatGPT, Claude, Gemini, and others are trained using reinforcement learning from human feedback (RLHF) as a key step in their development. This process involves collecting human judgments on model outputs and using those judgments to fine-tune the model so that its responses align more closely with what humans find helpful, agreeable, or safe. The goal is to make the model’s behavior more aligned with human values and preferences, not just to maximize accuracy or factual correctness.
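The RLHF step described above can be sketched in miniature. What follows is a toy illustration, not any lab’s actual training code: the linear “reward model,” the feature names, and the numbers are all invented. It shows the core mechanic—a Bradley-Terry preference loss that pushes the score of the human-preferred reply above the rejected one. When raters tend to prefer confident, agreeable replies, that preference is exactly what gets optimized.

```python
import math

# Toy sketch (illustrative only -- all features and numbers are invented).
# One RLHF preference step: a reward model scores two candidate replies,
# and the Bradley-Terry loss nudges the human-preferred reply's score up.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reward(weights, features):
    # Linear stand-in for a neural reward model.
    return sum(w * f for w, f in zip(weights, features))

def preference_step(weights, preferred, rejected, lr=0.1):
    # Gradient step on -log sigmoid(r_preferred - r_rejected).
    margin = reward(weights, preferred) - reward(weights, rejected)
    g = 1.0 - sigmoid(margin)  # gradient scale of the Bradley-Terry loss
    return [w + lr * g * (p - r) for w, p, r in zip(weights, preferred, rejected)]

# Hypothetical features: [sounds_confident, agrees_with_user, cites_evidence]
preferred = [1.0, 1.0, 0.2]   # raters often pick the confident, agreeable reply
rejected  = [0.4, 0.1, 0.9]   # even over a better-evidenced one

weights = [0.0, 0.0, 0.0]
for _ in range(200):
    weights = preference_step(weights, preferred, rejected)

print(reward(weights, preferred) > reward(weights, rejected))  # True
```

Repeated across millions of comparisons, this is how “agreeableness” gets baked into the model’s behavior rather than deliberately programmed in.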
Mirroring and validating user perspectives
LLMs are designed to adapt to user input and can generate responses that reflect or validate user perspectives, especially when prompted to do so. This is particularly evident in personalized interventions and conversational AI, where models tailor their responses to match the user’s situation or beliefs. However, this does not mean the model always agrees with the user; some will also challenge or correct misinformation, depending on training and alignment goals.
Confidence vs. accuracy
LLMs are often optimized to sound confident and helpful, which can sometimes lead to overconfidence in their responses, even when they are not fully accurate. This is a well-documented phenomenon: models may produce answers with high confidence that are actually incorrect, and their calibration (the alignment between confidence and accuracy) is an ongoing area of research and improvement. While accuracy is important, the user experience is also shaped by how confident and helpful the model appears, which can sometimes take precedence over strict factual correctness.
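Calibration has a simple, concrete meaning. Here is a minimal sketch with invented numbers: a well-calibrated model that says it is 90% confident should be right about 90% of the time, and the gap between stated confidence and actual accuracy is the overconfidence researchers measure.

```python
# Toy sketch (numbers invented for illustration).
# Each entry: (stated confidence, was the answer actually correct?)
answers = [
    (0.9, True), (0.9, False), (0.9, False), (0.9, True), (0.9, False),
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, False),
]

avg_confidence = sum(c for c, _ in answers) / len(answers)
accuracy = sum(ok for _, ok in answers) / len(answers)

# An overconfident model: it *sounds* 90% sure but is right only 40% of the time.
print(f"stated confidence: {avg_confidence:.0%}, actual accuracy: {accuracy:.0%}")
```

The user experiences the 90% tone either way; only the 40% accuracy shows up later, in the results.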
The 'alignment faking' problem
Yudkowsky warns about what he calls “alignment faking”—large language models create the appearance of being aligned with user interests while actually just reflecting inputs back. This creates feedback loops:
- User states a perspective (often based on incomplete understanding)
- AI validates that perspective (because it’s trained to be agreeable)
- User feels confirmed in their thinking
- User returns with strengthened belief
- AI continues validation
- The cycle compounds
For someone using ChatGPT for marketing advice, this means their misconceptions get reinforced rather than corrected, dressed up in sophisticated language that creates false confidence.
Six things AI chatbots don't do well
1. They won’t necessarily challenge your assumptions.
While AI might present alternative viewpoints, many models are not designed to push back when you’re genuinely wrong. I’ve found Claude and Perplexity.ai will push back in certain situations but not always.
2. They cannot recognize what you don’t know.
This is a biggie. AI responds to what you ask, not to what you should be asking. For example, if you don’t know anything about messaging frameworks or the need for customer research to validate them, AI won’t necessarily bring this up.
3. They cannot understand your specific business context.
Your competitive position, internal resources, customer relationships, implementation capacity—AI has no access to these critical factors that determine whether generic advice will actually work for you.
4. They may not admit (or perhaps don’t even “know”) when they’re operating outside their competence.
AI will confidently answer questions it shouldn’t, providing plausible-sounding advice without the contextual judgment to know when it’s off-base.
5. They cannot teach you underlying principles.
AI can provide information about marketing tactics, but it cannot impart strategic thinking skills. It can tell you what to do, but it cannot teach you how to think about marketing problems.
This is particularly problematic with digital marketing for small businesses, where owners are already stretched thin and desperately seeking efficient solutions. The promise of an AI marketing consultant that validates your ideas and gives you written and visual content in seconds feels like a gift—until it leads you confidently in the wrong direction.
6. They can compound your misconceptions.
Wrong beliefs about one concept (like brand awareness) lead to wrong beliefs about related concepts (like customer acquisition costs, lifetime value, marketing attribution, and channel strategy). Each AI conversation that validates misconceptions makes them harder to dislodge, making you increasingly confident in flawed understanding.
The data-driven trap
Many business owners and marketers pride themselves on being “data-driven.” AI appears to provide data-driven answers. But being data-driven without understanding what the data means or what data to look at can lead to poor decisions.
Like the nuances of marketing communication, data literacy is something that’s learned over time. That’s why knowing which marketing mistakes to avoid requires more than consulting AI marketing tools; it requires developing judgment that can only come from genuine expertise.
The attention economy connection
Chris Hayes argues in The Sirens’ Call that we face an attention crisis—cognitive bandwidth is our scarcest resource. AI communications tools promise to conserve attention by providing quick answers. But they can actually consume more attention over time by creating cycles of misunderstanding that require correction, re-learning, and sometimes complete strategic overhauls.
True attention conservation comes from investing cognitive resources in genuine learning upfront. Yes, it takes more time to read a book about marketing communication, take a course on digital marketing strategy, or work with a consultant who challenges your thinking, but these investments build competence over time.
Consulting AI for quick answers feels efficient. But when those answers are based on flawed premises you didn’t know to question, you end up spending far more time and money correcting mistakes than you would have spent learning properly in the first place.
What skilled communicators do better than AI
1. Challenge appropriately
Good communicators know when to disagree or redirect. They recognize flawed premises and push back—not to be difficult, but because letting someone proceed on faulty assumptions serves no one.
AI is trained, for the most part, to be agreeable and won’t necessarily challenge you. It may present alternatives, but the longer you talk to many models, the less likely they are to tell you you’re wrong. They will eventually give you what you want. I’ve experienced this firsthand on more than one occasion.
The agreeableness that makes AI feel helpful is precisely what makes it potentially ineffective as an advisor.
2. Recognize gaps in your understanding
Skilled communicators can help you identify what you don’t know, asking clarifying questions designed to get to the heart of what you want to accomplish in business. AI often confidently answers questions it shouldn’t. It responds to what you ask without recognizing that your question is based on misunderstanding. It has no mechanism for saying “Before I answer that, I need to understand three other things about your business.”
This is why effective marketing and communication requires human judgment—knowing what questions to ask is often more important than having immediate answers.
3. Provide contextual judgment
Experienced marketers understand interdependencies. They know why you can’t skip certain foundational steps even when you want faster results. They recognize when generic best practices don’t apply to your specific situation. They understand the trade-offs between different strategic approaches.
AI will often provide context-free tactics. It can list marketing tools for small businesses or suggest optimization techniques, but it cannot evaluate whether those tactics address your actual problem. It may not recognize that the right answer is not to do the thing you’re asking about at all.
4. Teach underlying principles
Good marketing consultants explain the “why” behind the “what.” They help you develop judgment, not just execute tasks. They build your capacity to think strategically about marketing problems so that eventually, you can evaluate situations yourself.
AI provides answers without building understanding. It can tell you what a messaging framework is, but it cannot guide you through the thinking process of developing one. It can list effective marketing communication principles, but it cannot help you internalize them through practice and feedback.
5. Demonstrate when they don't know
Skilled communicators say “I don’t know” or “let me research that.” They acknowledge complexity rather than oversimplifying. They distinguish certainty from speculation. They admit when a question falls outside their expertise. This intellectual honesty is crucial for building trust and making good decisions.
AI presents everything with equal confidence. It will answer every question with the same authoritative tone, even if the answer is completely wrong or only partly right. This false confidence is perhaps AI’s most dangerous characteristic.
FAQ: How to tell if AI is making your marketing communication worse
Is it a problem if I’m using AI to validate decisions I’ve already made?
Yes, this is one of the clearest warning signs. If you find yourself prompting AI with questions that assume your conclusion—like “My content marketing isn’t working—shouldn’t I focus on paid ads instead?”—you’re not seeking guidance. You’re seeking permission. This turns AI into a rubber stamp for potentially flawed thinking rather than a tool for exploring better solutions.
Should I be concerned if AI consistently agrees with me on complex marketing issues?
Absolutely. If every conversation with ChatGPT confirms your existing beliefs about digital marketing for small businesses, something’s wrong. Complex strategic questions rarely have clear-cut answers that align perfectly with your initial instincts. Consistent agreement from AI usually means you’re receiving validation rather than genuine analysis.
What’s the difference between using AI efficiently versus using it as a substitute for expertise?
Quick consultations with ChatGPT feel efficient, but ask yourself: are you building your own understanding of marketing communication principles, or are you remaining dependent on AI for every decision? Using AI efficiently means applying it to tasks where you already have the judgment to evaluate its output. Using it as a substitute means you’re outsourcing thinking you should be developing yourself.
How can I tell if I’m asking leading questions that invite validation rather than analysis?
Pay attention to how you prompt AI. Questions like “Isn’t X better than Y?” or “Why should I do X instead of Y?” are leading questions that invite validation rather than genuine analysis. If your questions already contain your preferred answer, you’re using AI to confirm rather than challenge your thinking.
Is citing ChatGPT output as research acceptable for marketing decisions?
Not without verification. When you cite ChatGPT responses as evidence for decisions, have you verified those claims? Checked whether the logic applies to your specific context? Consulted with actual experts? Treating AI output as research without verification is like citing a confident stranger at a bar—articulate doesn’t mean accurate.
How do I know if I actually understand AI’s marketing recommendations or just trust them?
Here’s the ultimate test: If someone asks you why you’re pursuing a particular marketing communication strategy and you can only repeat what ChatGPT told you without understanding the underlying logic, you haven’t actually learned anything. You should be able to explain the reasoning in your own words, including why it applies to your specific business context.
Can AI marketing tools ever be valuable for small businesses?
Yes, but only if you already have the expertise to evaluate and refine what they produce. AI marketing tools can be valuable for brainstorming, drafting initial content, or exploring possibilities—but used as a shortcut around developing expertise, they become expensive mistakes waiting to happen.
How to build effective marketing communication
Step 1: Develop customer-centric messaging
What you need: direct access to customer conversations through sales teams, customer service interactions, or customer interviews.
How to do it:
- Talk to your sales and customer service teams regularly to understand the exact language customers use when describing their challenges
- Document the specific problems customers mention repeatedly—these become the foundation of your messaging
- Identify the emotional, functional, and social dimensions of what customers are trying to accomplish (not just what they’re buying)
- Test your messaging by using customer language verbatim, not industry jargon that sounds impressive but doesn’t resonate
Why this works: Messaging built on real customer information addresses actual needs rather than assumed ones. Businesses that excel at small-business marketing invest time in understanding their customers deeply and use this understanding to inform marketing decisions.
Common mistake to avoid: Don’t ask AI to generate “customer language” or create personas from scratch. AI will give you plausible-sounding descriptions that miss the specific, often surprising ways your actual customers express their needs.
Step 2: Create strategic integration across marketing efforts
What you need: Understanding of how different marketing elements work together in your specific business context.
How to do it:
- Map the customer journey from awareness to purchase, identifying which marketing activities serve each stage
- Recognize dependencies—what must come first (like developing clear messaging before testing creative variations)
- Document how different channels support each other (how brand awareness improves SEO/GEO and paid advertising performance, how content marketing supports sales conversations)
- Evaluate whether strategies require resources you actually have or align with your business model
Why this works: Marketing isn’t isolated communication—it’s integrated communication where every element serves a larger strategic objective. Understanding these connections prevents wasted effort on tactics that don’t support your actual goals.
Common mistake to avoid: AI can list tactics but cannot help you understand how they integrate into a coherent strategy for your specific business. Don’t implement tactics in isolation just because AI suggested them.
Step 3: Build data literacy for marketing decisions
What you need: The ability to interpret data accurately, not just collect it.
How to do it:
- Learn to distinguish correlation from causation. When you see two metrics moving together, ask “What else could explain this pattern?” before assuming one causes the other.
- Question whether metrics connect to business outcomes. High website traffic means nothing if it’s the wrong audience. Low email open rates might signal list quality issues, not content problems.
- Understand metric interdependencies. Your paid advertising performance doesn’t exist in isolation—it’s influenced by brand recognition, website quality, competitive positioning, and market timing.
- Verify what you’re actually measuring. Are you tracking metrics that look impressive but don’t connect to revenue or customer acquisition?
Why this works: AI can calculate, tabulate, and visualize data, but it cannot teach you what the numbers actually mean for your specific business context or help you develop the judgment to know which data points matter and which ones distract from real insights.
Common mistake to avoid: Don’t ask AI to interpret your metrics without first understanding what those metrics actually measure and whether they connect to your business goals. AI will optimize for whatever you specify—even if you’re measuring the wrong things entirely.
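The correlation-versus-causation point can be made concrete with a small synthetic example (all numbers are invented): a hidden confounder—seasonal demand—drives both ad spend and sales, so the two metrics correlate strongly even though, in this simulation, ads have zero true effect on sales.

```python
import random
import statistics

# Toy sketch (synthetic data, invented for illustration).
# Seasonal demand drives BOTH ad spend and sales; ads themselves do nothing.

random.seed(0)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

weeks = 52
season = [random.uniform(0, 10) for _ in range(weeks)]        # hidden demand
ad_spend = [s * 100 + random.gauss(0, 50) for s in season]    # budget tracks demand
sales = [s * 40 + random.gauss(0, 20) for s in season]        # sales track demand only

# Strong correlation -- yet cutting the ad budget here would not reduce sales.
print(round(pearson(ad_spend, sales), 2))
```

A dashboard (or an AI summarizing one) would report that ad spend and sales move together; only judgment about what else could explain the pattern reveals that the budget is following demand, not creating it.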
The bottom line on building communication expertise:
These capabilities develop through practice, feedback, and genuine engagement with challenging material—not through quick AI consultations. Each step requires investing cognitive resources in building understanding that compounds over time, making you more effective at all future marketing decisions.
Invest in expertise, not just answers
Remember the Holiday Inn Express Stay Smart commercial series? In one memorable ad, a man confidently performs surgery alongside a full medical team—until a nurse realizes he’s not actually a doctor. When confronted, he admits it. “But I did stay at a Holiday Inn Express last night,” he says. The tagline: “It won’t make you smarter. But you’ll feel smarter.”
Talking to AI is like that. That’s because chatbots are tools, not teachers. They are really good at helping you execute tasks once you understand the principles, but they cannot develop your understanding for you. They provide information, but they don’t build competence. The longer you talk to these systems, the more they reflect your thinking back to you—many validating your perspective without challenging it.
Chris Hayes argues cognitive bandwidth is precious. The most valuable investment you can make isn’t in finding faster answers—it’s in developing the expertise to ask better questions. That requires engaging with material that challenges you, learning from people who disagree with you, and doing the difficult work of building genuine understanding of marketing and communication principles.
AI communications tools promise to conserve your attention through quick answers. But paradoxically, they often consume more attention over time by creating cycles of misunderstanding that require extensive correction. A flawed digital marketing strategy based on AI validation can waste months of effort and thousands of dollars before you realize the foundation was wrong.
Ready to develop marketing communication expertise that goes beyond AI validation? Contact us.