06 Apr Just Because You Can, Doesn’t Mean You Should. Using AI for Comms, Marketing and Content.
By BravoEcho
The launch of ChatGPT necessitated an AI crash course for marketing and communications teams. We had curiosity; fear of robot Armageddon; trial and experimentation that led to frustration and annoyance with the outputs; and lightbulb moments when we figured out the right prompts to make it work. Then there was the “use AI for everything” craze, now hopefully being replaced by the more judicious “use AI where it’s most useful” phase, along with very real concerns about its impact on the workforce.
The general public has gone through a similar journey from AI curiosity to concern – with implications for its use in media and, we believe, in comms, marketing, and content too. Research conducted in the UK by Ipsos and the BBC in 2025, released this past February, sheds light on where the public currently sees the limits of AI’s use.
As the report states, people “have moved from a position of cautious curiosity to one of lived experience. This has shifted the central question from what GenAI can do, to what it should do – particularly within the emotionally and culturally vital space of media.” Comms, marketing, and content exist in media, and trade in emotional and cultural currency, so to us it seems only right that we should take note of these learnings.
People appreciate AI’s utility – saving time, cutting down admin and process, helping with learning and accessibility, and unleashing creativity. But when it begins to infringe on what makes us human – our emotions, thoughts, tastes, and judgment – they fear it could “undermine trust, replace human creativity, and distort what’s real.”
If people experience something they think should feel distinctly, uniquely or even messily human, and it instead feels somehow off or machine-made, then they will reject it and trust its source less. Using AI is fine if it assists or enhances, but not if it is seen to replace human creativity, emotional expression, and editorial judgment.
Sounds great, but what does it mean? Well, the Ipsos/BBC research drew distinctions between the tasks people were least and most concerned about AI undertaking. They then grouped them into low, medium, and high stakes. And while their examples are from the world of media, their corollaries in comms, marketing, and content should be clear.
- Low Stakes – background tasks that save time and ease process. Examples include incidental music, animation and special effects, headlines for a human-written article, and an audio version of a human-written article.
- Medium Stakes – where AI is a springboard or a tool to enhance, but not a substitute. Examples include generating ideas, recreating voices, and summarizing human-written articles.
- High Stakes – where human emotion, credibility, or originality matter. Examples include completely AI-created shows, movies, podcasts, news content, and images. Also live sports commentary (it’s emotional, spontaneous, and therefore human).
In short, if it touches on human truths, identity, or experience, AI should be used with great caution and only in limited ways. The research made clear that this isn’t a blanket rejection of AI, but a desire for boundaries around its use. “Media is not just about generating output,” they wrote, “it’s about conveying meaning – and importantly for audiences, GenAI is not allowed to participate in meaning-making.” We believe exactly the same is true for communications, marketing, and content.
Stepping away from the Ipsos/BBC research, we’re seeing a general trend towards real-life person-to-person interaction and analog hobbies. This might be due to the lingering trauma of COVID isolation; a rejection of the algorithms that have made social media more media and less social; or just younger cohorts finding their own digital-IRL balance. But whatever the reasons, people want the stuff that makes us human to feel real, honest, and unfiltered, and that means feeling it from other humans, not AI.