The meteoric rise of generative AI is upending the world of content marketing. Marketers must learn to separate the risks from the opportunities.
Over the past few months, my LinkedIn feed has been inundated with posts about ChatGPT, OpenAI’s cutting-edge large language model (LLM). No doubt, everyone else in my field has experienced the same.
There’s no denying that generative AI has become an unstoppable force that will change the world. It’s not a stretch to compare it to the invention of the printing press in 1436, the first photographs in the 1830s, or the birth of the modern internet in the early 1980s.
However, there’s one fundamental difference. All these inventions took decades to reach the point where they radically transformed our day-to-day lives.
ChatGPT launched on November 30th, 2022. Few people outside the tech world had even heard of OpenAI before then. The company has since become one of the fastest-growing startups in the world, reaching an estimated valuation of $29 billion.
Of course, like other AI models, ChatGPT is the product of decades of research in machine learning, but in just a few months, it has started a technological revolution.
The launch of ChatGPT is the latest in a series of disruptive innovations in generative AI that shaped the technology landscape in the second half of 2022. The tech has also dramatically shaken up the world of visual media with the launch of image synthesisers like Midjourney, DALL-E (also developed by OpenAI), and the open-source Stable Diffusion. In addition, AI can generate audio outputs, such as background music, voiceovers, and sound effects. Even AI-created video is becoming a thing, as compute capacity catches up with soaring demand.
As a freelance content marketing writer, I’ve seen the whole gamut of emotions surrounding the rise of ChatGPT. Many creatives have even questioned the sustainability of their careers. Security experts have raised concerns about the implications of AI in social engineering and malware development. Journalists are worried about the inevitable proliferation of fake news. The education sector fears widespread cheating in coursework. The list goes on.
What does this mean for content marketing managers?
As any marketing manager knows, web content is the fuel that drives traffic, brand recognition, and industry authority.
The way marketers approach generative AI falls somewhere on a spectrum. At one extreme, we have marketers in the quantity-over-quality camp. They view tools like ChatGPT as a way to dramatically scale their output with minimal effort and at virtually no cost.
This February, I saw a case study referencing how one SEO expert, using a proprietary GPT-3 model, managed to grow their blog traffic from 0 to over half a million visitors in just 12 months by publishing thousands of AI-generated posts. The ‘expert’ in question seems not to grasp that visitor counts are irrelevant if they don’t lead to conversions, nor that Google will only get better at identifying and penalising mass-produced, low-effort content, whether it’s written by AI or not.
At the other end of the spectrum, we have creatively minded marketers, including strategists, writers, designers, and others. Many such marketers feel that their jobs, and indeed the very concepts of creativity, originality, and integrity, are under threat.
I believe that, for the most part, these concerns are legitimate, if only because it’s not hard to imagine the majority of business leaders being seduced by the supposedly limitless potential of AI. After all, every business ultimately exists to make money, and if AI can greatly reduce the cost of marketing, then it’s unsurprising that managers put so much hope in it. Moreover, OpenAI expects to reach $1 billion in revenue in 2024, and over 72% of businesses already use or plan to use AI to generate content.
For content marketing managers, it is vital to understand both the risks and opportunities that come with integrating generative AI into their workflows. After all, there’s a massive difference between using AI to replace and using it to augment.
The modern marketer, regardless of their niche, must be both a technologist and a creative in order to stay relevant. The extent to which they keep humans in the loop will have profound effects on the futures of the brands they represent.
What is creativity?
If you’ve watched the 2004 blockbuster sci-fi movie I, Robot, you might remember the scene where Detective Spooner, played by Will Smith, asks a robot ‘Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?’. The robot responds with ‘Can you?’, leaving Spooner embarrassed and dejected. Shortly thereafter, the robot draws a compelling picture of a decaying Mackinac Bridge over a dried-up Lake Michigan, based on a ‘dream’ it had.
The scene now looks quaint. It has since become a meme in conversations about generative AI. We’re now living in a time when so-called computational creativity is no longer sci-fi, but an assumed reality that challenges what it means to be a writer, artist, or musician.
The reality is that there’s no such thing as computational creativity. There’s arguably no such thing as artificial intelligence either, for that matter. What we’re witnessing is a mere simulation of creativity and, by extension, humanity and feeling. In the case of LLMs like ChatGPT, that means using statistics and probability to recognise patterns and relationships between words and phrases, and generating outputs based on them. Image, video, and audio synthesisers follow the same computational principles.
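To make that concrete, below is a deliberately crude sketch of the underlying idea. It’s my own toy Python example (a simple bigram model, nothing resembling ChatGPT’s actual architecture) that generates text purely from observed word-to-word statistics:

```python
import random
from collections import defaultdict

# Toy 'language model': record which word tends to follow which in a
# corpus, then generate text by sampling from those observed patterns.
# Real LLMs are vastly more sophisticated, but the principle is the
# same: statistics over word sequences, not understanding.
corpus = (
    "content marketing drives traffic and content marketing builds "
    "trust and trust drives conversions"
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

word = "content"
output = [word]
for _ in range(8):
    # Pick a statistically plausible next word; fall back to any
    # corpus word if we hit a dead end.
    word = random.choice(follows.get(word) or corpus)
    output.append(word)

print(" ".join(output))
```

Scale the corpus up to a few hundred billion words and the samples become uncannily fluent, but the mechanism never changes.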
The outputs can be incredibly convincing, and they’re only likely to get better. GPT-4 can now ace college and law school exams, and Midjourney has finally figured out how to draw hands. Does this mean generative AI is becoming creative? Not at all. It just means it’s getting better at doing what it has been trained to do. After all, an AI is only ever as effective as the people who train and work with it. And that’s why, whether you’re an AI model developer, a business decision-maker, or a creative type – you absolutely need to keep humans in the loop.
Robots learning bad habits from other robots
I’m going to contradict myself now. While it’s widely assumed that generative AI will only become more convincing and more human-like, there’s every chance of the exact opposite happening. Algorithms might improve, but the same can’t necessarily be said of the training material.
Generative AI models are trained on massive amounts of data, the vast majority of which has been scraped from the Web. Naturally, the quality of that content varies dramatically.
At one end, we have millions of poor-quality affiliate blogs of the sort we constantly see in Taboola ads, hundreds of thousands of phishing and fake news websites, and an endless barrage of social media fakery.
At the other, there’s a wealth of high-quality, value-adding content like authority pieces, academic research, and open-source investigative journalism. None of it is perfect, simply because there’s no such thing as perfect. But good content has something that bad content doesn’t: it has value by being useful or entertaining to its intended audience.
Clearly, ChatGPT’s creators are conscious of quality control, otherwise its outputs would be utterly dismal. However, most model developers, including OpenAI, aren’t exactly transparent about where they get their training material. Accountability is lacking, which may partly explain why GPT models often output blatant mistruths and obvious biases. After all, an AI can’t tell right from wrong, nor is it aware of the biases hard-coded into it.
In fact, AI isn’t aware of anything at all, because it’s not sentient.
The problem can get a whole lot worse. Given the sheer scale of these models and the huge amount of text data used to train them, it’s impossible for humans alone to determine what constitutes quality content and what is spam across the entire training data set. Of course, humans are hardly infallible either, since we all have biases of our own, along with those imposed upon us by societal and cultural norms and pressures.
For example, some claim that ChatGPT is ‘woke’ when it comes to controversial topics, while Elon Musk recently mooted the possibility of creating a competing AI – BasedGPT, perhaps? No matter which side of the culture war one stands on, both extremes have deeply set biases. There’s never been a more powerful tool to amplify those biases – or those dreadful corporate culture clichés that have become so ubiquitous.
This raises the question: what sort of training material will future generations of AI models learn from? Businesses and their content marketing teams are behind a great deal of web content – much of it bad, some of it genuinely valuable. Will AI be able to tell the difference?
So what are the implications for GPT-5 and subsequent models? Will they be trained partially on AI-generated content? It seems likely, given that we can expect an ever-growing percentage of web content to be written by AI, either wholly or in part. When that happens, AI effectively ends up taking on a mind of its own. It’s a concept known as drift, where the relationship between a model’s inputs and outputs gradually changes. Or, as I, Robot’s Detective Spooner puts it, ‘So, robots building robots? That’s just stupid’.
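One way to picture drift is a model that’s repeatedly refitted to samples of its own output rather than to real data. The toy Python simulation below is my own illustration (a plain Gaussian standing in for a language model); notice how the fitted parameters wander away from the original data with each generation:

```python
import random
import statistics

random.seed(1)

# Generation 0 is fitted to 'real' data: mean 0, standard deviation 1.
mean, stdev = 0.0, 1.0

for generation in range(1, 11):
    # Each new model only ever sees a finite sample of its
    # predecessor's output, never the original data.
    sample = [random.gauss(mean, stdev) for _ in range(50)]
    mean = statistics.fmean(sample)
    stdev = statistics.stdev(sample)
    print(f"generation {generation}: mean={mean:+.3f}, stdev={stdev:.3f}")

# The sampling error compounds: after a few generations, the model's
# idea of 'normal' has drifted away from the data it was meant to learn.
```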
What does this mean for businesses and their content marketing teams? It means that those who decide to outsource the entirety of their content creation to AI will become part of the problem. That problem is the relentless dumbing down of creativity and originality. For brands, that means losing their voice in a mire of poor-quality, mass-produced content where creativity and originality are superficial at best. Brands that choose to keep humans in the loop, however, will differentiate themselves and deliver greater value to their target audiences.
A quick fix or a long-term marketing strategy?
It’s often said that attention spans are decreasing. The reality is that it’s not attention spans that are decreasing, but patience. We’re now so overloaded with content that we have no patience for short, low-effort blog posts that offer little or no value. We don’t want to see ads disguised as entertainment or informative content. Readers don’t care one iota about your SEO, and they’ll usually turn back if they see the same keywords and phrases forced in, repeated over and over. Most of us are attuned to such things and, even if we don’t explicitly know it, we can usually tell within a few seconds whether to hit the back button or not.
This dilemma takes us back to the point of whether marketers care about quantity or quality or, in other words, quick results or long-term success. As I’ve said, traffic is irrelevant if almost all your visitors are slamming the back button within three seconds of landing on your website.
Generative AI blurs the lines somewhat. Content marketers can now use AI to create what, at least at first glance, looks rather good. There’s not a single spelling or grammar error, and it flows smoothly, even if it’s perhaps a bit more verbose and repetitive than it should be. Maybe it has even been fact-checked and improved by a human reader. But does it offer anything of value? Does it help or entertain?
Probably not. Since generative AI can only rehash what has been written before – to the point it becomes a massive echo chamber – it doesn’t offer anything new. The visitor might as well just turn to ChatGPT themselves to get the information they were looking for. With GPT models poised to become deeply integrated with search engines, that’s no doubt exactly what they’ll do in the near future.
If you want to stay relevant as a content marketer, you need to do something different.
AI and the evolution of spam
Until around 10 years ago, the quantity-over-quality camp relied on tools like article spinners, rather than AI. The process typically involved paying a dollar or two to someone on Upwork or a similar platform to write a low-effort article chock-full of keywords and phrases for SEO. They’d then go through the article, nesting synonyms and alternative keywords between brackets before feeding it into an article spinner. The spinning software would then churn out hundreds of ‘unique’ articles by randomly choosing one of the words or phrases in each nested group. Needless to say, the results were what Google aptly described in 2013 as ‘pure spam’.
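For the curious, the mechanics were trivially simple. Here’s a minimal Python sketch of such a spinner, assuming the common ‘spintax’ brace syntax (exact formats varied from tool to tool):

```python
import random
import re

GROUP = re.compile(r"\{([^{}]*)\}")  # innermost {option|option|...} group

def spin(text: str) -> str:
    """Produce one 'unique' variant by resolving each nested group."""
    while (match := GROUP.search(text)):
        choice = random.choice(match.group(1).split("|"))
        text = text[:match.start()] + choice + text[match.end():]
    return text

template = "{Buy|Get|Grab} the {best|cheapest} hosting {today|right now}!"
for _ in range(3):
    print(spin(template))  # e.g. 'Grab the cheapest hosting today!'
```

Run a few hundred times over a single template, a spinner could flood the web with near-identical articles, which is exactly the pattern Panda and Penguin were built to catch.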
Article spinning was simply a way to game the search engines into believing that the content was original and therefore worthy of ranking. And, up to a point, it worked, at least insofar as it increased search visibility. That was until Google released its Panda update in 2011 and Penguin update in 2012. The content mills, which, it must be said, didn’t necessarily use article spinners but paid next to nothing for low-effort SEO writing, ended up losing nearly all their traffic overnight as well.
AI-generated content, at least that produced with little or no human input, is nothing more than a high-tech evolution of the same concept. Just as it has before, Google will catch up, and although it isn’t explicitly against AI-created content, it will learn to tell what’s spam and what isn’t faster and more effectively than most humans can.
Why generative AI is not a silver bullet for marketers
Marketers and other business decision-makers who actually care about their brands also care about what Google calls E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.
AI has none of those things, even if it can simulate them to a degree. To exhibit those traits in a transparent and honest way that actually builds and preserves brand reputation, you need to keep humans in the loop, you need strategy, and you need to think about the long term.
I’ll explain the how and why below:
Expertise
Generative AI can simulate expertise based on the quality of the training data fed into the model. With the exception of a handful of open-source projects like Stable Diffusion, no one even knows exactly where that training data comes from. It’s something that most AI model developers keep pretty quiet about, at least until they – or their users – start facing lawsuits for plagiarism.
It’s also important to remember that, contrary to popular belief, the entirety of content that’s publicly available on the web isn’t reflective of the entire human experience – and it never will be. A lot of great content, such as whitepapers and academic research, is gated behind login pages or lead capture pages. Moreover, a whole lot of expertise is proprietary, locked away in the minds of subject matter experts, some of which they may be willing to share with content creators who can seamlessly put their thoughts into compelling written copy.
Experience
It goes without saying that AI has no experience of its own. All it can do is improve its outputs based on feedback from end users. Perhaps the most glaring limitation of ChatGPT is that its ‘experience’ only extends to world events and knowledge collected up until September 2021. It’s not connected to the internet, so it can’t read and learn from content published since then.
That’s likely to change with future iterations, especially as AI chatbots become integrated with search engines. However, even when it does, the AI will still only simulate experience, just as it simulates expertise, through the training data fed into it. The risks of allowing generative AI to learn in near real time are also very significant.
AI is like a hive mind, finding recurring patterns and common relationships between concepts and basing its outputs on them. There’s no ‘thinking outside the box’, at least not on anything but a very superficial level. An AI can’t replace my 15 years of experience in freelance writing, learning, and simply existing, because I haven’t conveyed my every thought from every possible angle in writing. And I didn’t stop learning in September 2021 either.
Authoritativeness
AI isn’t really very intelligent. In fact, it doesn’t know anything at all. What it can do is detect patterns, trends, and relationships and distil them into actionable insights. That’s perhaps the best thing about generative AI from a creator’s perspective. I’ve no doubt that it’s a powerful tool for ideation and getting over the dreaded creative block that every writer and artist who ever lived has experienced. But that doesn’t mean it has authority.
AI isn’t authoritative, because it doesn’t have any true expertise or experience of its own and, of course, it doesn’t experience emotion. Real human readers generally don’t want to learn from robots which, in turn, have learned from the imperfect people who trained them. Rather, they want to learn from people who are genuine experts in their fields and can connect with them on both an intellectual and an emotional level.
Authoritativeness is ultimately what separates a robust brand image from that of a low-effort affiliate marketer. Consider, for example, who you would rather trust if you’re looking for advice on investing in the stock market. Who would you rather listen to – a genuine authority in the field or a robot that’s just rehashed a bunch of posts from the Wall Street Bets subreddit?
Trustworthiness
Last, but not least, is trust. Unfortunately, the trust deficit is growing in the age of misinformation and mass manipulation by unscrupulous politicians, businesses, and online grifters alike. AI will probably accelerate the proliferation of misinformation exponentially, just as it will social engineering scams and other forms of cybercrime. Since cybersecurity and its ancillary areas are topics I write about regularly for my clients, this is a risk I’m very well aware of.
Every business, especially in the B2B spaces I write for, is built on trust. Losing that trust can lead to enormous brand damage and, potentially, lawsuits and even closure. For example, Google’s market value recently dropped by a staggering $100 billion after its new Bard AI gave an incorrect answer in a promotional demo.
It’s well known that generative AI can produce factual inconsistencies, biased interpretations, and plain mistruths. Sometimes, AI ‘hallucinates’, giving confident responses that, while seemingly justified by its training data, are plain nonsense. That’s why marketers – or anyone else for that matter – should never publish content that hasn’t been properly vetted and fact-checked by subject matter experts.
3 predictions about the rise of generative AI in content marketing
From the standpoint of a creative like myself, it’s hard to take a particularly positive view of the rise of generative AI. Perhaps I’m too cynical. But if what people are saying on social media is anything to go by, many artists, writers, photographers, designers, and other creatives take a similar view.
However, it’s important to remember that almost any technological innovation has the potential to make the world a better place. Unfortunately, the opposite is often the case. I don’t envisage generative AI being any different. Companies like OpenAI claim to be setting themselves and their innovations up as a force for good, for the betterment of all humanity. I’m neither refuting nor supporting their argument, but I will say that such lofty claims conjure up an impression of immense naivety – not to mention blatant self-promotion.
The reality is that generative AI is potentially harmful. However, with Pandora’s box now open and the technology evolving rapidly, eschewing it is not an option. We need to learn how to benefit from AI, but to do that, we need to understand the risks.
What does this mean for content marketers? Well, for one thing, marketers have a moral and ethical responsibility to foster fairness and honesty in their advertising. Of course, there have always been those – spammers and scammers – who don’t care a bit about such things. The rise of generative AI will not only empower those unscrupulous types – it may also create a grey area in which otherwise well-intentioned marketers end up drawn into the sorts of practices described in the predictions below.
#1. It will lead to a massive lack of trust
Marketers have the power to boost brand trust – or utterly destroy it. There are no shortcuts to building trust, at least not in a sustainable way, simply because trust is an emotional concept. Sure, there are dishonest ways of building trust. Spammers and social engineering scammers use them every day, but no legitimate business wants to replicate their methods.
In the world of marketing, we constantly hear about the importance of humanising brands and personalising our strategies to meet the needs of specific target audiences. I’m sure I’m not the only one who sees the irony in relying on AI and automation to do those things. Yes, AI can help scale personalised marketing, but without keeping humans in the loop, it’s all just a big scam.
Customers no longer blindly trust organisations. This is especially true in B2B markets where purchase decisions usually go through multiple stakeholders. The issue with generative AI is that it can be incredibly convincing which, in turn, will make smarter buyers even more careful about who they do business with.
Consider, for example, a business leader looking for a dependable cybersecurity vendor. They visit the vendor’s website to learn more about its product, industry authority, and expertise, only to find that the content is shallow, generic, and lacking in value. Perhaps it was written by an inexperienced freelancer hired for peanuts on Upwork. Taken at face value, that might not be a complete deal-breaker, although it hardly impresses either. Chances are, they’ll still look elsewhere.
Now, imagine our hypothetical buyer instead finds a whole lot of content, such as whitepapers and blog posts, that seems, at first glance, professionally written and free of errors. So, they delve deeper and, in doing so, find that something doesn’t seem quite right. Perhaps it’s lacking in originality and depth or, worse, it’s plagiarised, riddled with factual errors, or both. The red flag of AI rises, and any semblance of trust vanishes.
This will have a compounding effect on content marketing overall. As we witness the inevitable deluge of AI-generated content, the trust deficit will increase exponentially. The only marketers and businesses that survive will be those that keep humans in the loop and centre their entire strategies around trust and honesty.
#2. It will lead to an increase in lawsuits
Plagiarism, misinformation, and fakery are three things that no legitimate business wants to be accused of. Talented and ethical business leaders and marketers go to great lengths to avoid these things. But that’s going to become significantly harder for those who give in to the promises of practically limitless marketing scalability and dramatically lower costs.
While their intentions might not necessarily be malicious, overreliance on generative AI makes accidental plagiarism and misinformation a whole lot easier. Plus, would-be customers, not to mention the law, don’t really care whether it’s accidental or intentional. The results, for the most part, are the same: becoming another statistic in the upcoming barrage of lawsuits.
In fact, it has already begun. In November 2022, designer and programmer Matthew Butterick teamed up with a group of class-action litigators to file a lawsuit against Microsoft and OpenAI alleging that their AI-powered coding assistant GitHub Copilot relies on ‘software piracy on an unprecedented scale’.
In January 2023, a trio of artists launched a lawsuit against Stability AI and Midjourney, alleging that their AI models were trained on billions of images without the artists’ consent. ChatGPT is now squarely in the spotlight too, with major news outlets including CNN and the Wall Street Journal objecting to their articles being used to train OpenAI’s large language model. AI ethics and legal professionals are even talking about an upcoming legal doomsday.
CNET, one of the world’s biggest tech publications, was recently found to have published AI-written articles that were substantially plagiarised and full of factual errors. I think it’s safe to assume that CNET’s massive corpus of content was included in ChatGPT’s training data set – and this is just one of countless examples. Punch the same prompt into an AI over and over, and you’ll get a similar answer every time.
While the current and near-future lawsuits will no doubt focus on the developers of generative AI models themselves, their users should be worried too. After all, no marketer, nor any other creative, wants to start receiving DMCA takedown notices or to have their brand plastered all over social media for plagiarism.
Currently, all original content is copyrighted by default. Naturally, that means the vast majority of training material fed into AI models, including image synthesisers and text generators, is likely copyrighted too. To demonstrate this point, some artists have successfully replicated their own works with just a simple prompt. For content marketers and other creatives, this should be a warning of just how easy it is to unwittingly plagiarise existing content.
The inevitable rise of misinformation will likely also lead to increased legal scrutiny. Since AI has no way of definitively knowing what’s true and what isn’t, it can produce and perpetuate misinformation and bias, unless it has rigid human oversight.
#3. It will lead to a transformation of creativity
I don’t think it’s an exaggeration to say that the rapid and unprecedented rise of generative AI threatens to upend the world of creativity. Left unchecked, so-called computational creativity has every chance of usurping human creativity.
That’s not to say that computers will become better at creating than humans. What it does mean, however, is that the computer-generated outputs will, for a time at least, be a lot more convincing. As I mentioned earlier, the opposite might also happen, as AI learns from AI-generated content, thereby constantly dumbing down creative content to the extent the web becomes an even bigger echo chamber than it already is.
Many creatives are already looking to pivot into other professions in the hope that they might be more future-proof. While their fears are understandable, neglecting the creative path only serves to accentuate the problem.
That problem is a dystopian future in which everything we view online has a high chance of being utterly, woefully fake. Of course, we’ve already been heading in that direction for quite some time, as the countless clickbait articles, low-effort affiliate marketing websites, and fake social media accounts exemplify. AI just takes it to a scale that was previously impossible.
On the other hand, AI can also expose us to new insights and, in doing so, fuel innovation. It can free up time by automating repetitive, routine activities to allow us to focus on genuinely creative activities that require human consciousness. That’s the potential upside of generative AI, in which we work with machines to augment – rather than replace – our abilities. Moreover, just like the invention of the printing press and photography centuries later, the rise of AI can open up many new job opportunities – creative ones included. We can, of course, only realise these benefits if we keep humans in the loop.
There are lots of highly talented content marketers who aren’t writers or designers, but who may be tempted to turn to AI to outsource creative work. The problem with this isn’t just that it’s risky – it’s also time-consuming. An experienced professional will be able to do the job in less time and with less risk of plagiarism, misinformation, and a dumbed-down creative process. It takes just as long to fact-check and improve an AI-generated article as it does to write a decent article from scratch.
Managing risk in AI-driven content creation
After everything I – along with many others – have said about generative AI, you might be wondering if it’s worth bothering with at all. Others might simply not care, which is, unfortunately, a common reaction as well.
I’m going to wrap this article up by emphasising the fact that generative AI presents enormous potential in areas like ideation, summarising, and outlining. But even in these cases, it depends on what you put into it. Enter the surging popularity of brand-new job titles like ‘Prompt Engineer’ and ‘AI Fact Checker’.
If you’re going to rely on AI to create your marketing collateral, then you absolutely need to keep humans in the loop from the beginning to the end of the process. You need people who are experts and have experience in your industry to work closely with AI from initial prompting to redrafting and fact-checking thereafter.
From my experience with ChatGPT, using it to actually write inbound marketing content simply isn’t worth the effort. It’s great for non-writing tasks, like generating ideas for titles and article structures and for summarising longer pieces. But that’s about the extent of it. Yes, it can create compelling content in certain use cases, but doing so requires a lot of careful prompting, no small amount of trial and error, and exhaustive fact-checking. As such, when it comes to writing, it doesn’t really save time.
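To illustrate the kind of non-writing task I mean, here’s a minimal sketch of title and outline ideation using OpenAI’s Python library. I’m showing the pre-1.0 ChatCompletion interface; the model choice, prompt, and settings are purely illustrative:

```python
import os
import openai  # pip install openai (pre-1.0 interface shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model for ideation help only: titles and an outline,
# not finished copy. A human still writes and fact-checks the piece.
prompt = (
    "Suggest five blog post titles about zero-trust security for a B2B "
    "audience, then outline the most promising one in five bullet points."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,  # higher temperature yields more varied ideas
)

print(response["choices"][0]["message"]["content"])
```

The output of a workflow like this is raw material for a human writer to interrogate and build on, never something to copy and publish.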
Despite this, there is a strong case for marketing teams to use AI to assist them in their daily tasks. It can be immeasurably useful for overcoming creative blocks, or even for getting answers to everyday questions, provided you fact-check said answers against a reputable source. There are no shortcuts when it comes to actually creating, and generative AI is no exception.
Content marketers and other creatives need to recognise the fact that AI, in any form, can only serve to automate entirely routine and repetitive processes and augment their workflows. However, using it to replace the creative processes that go into building a viable, long-term marketing machine is a slippery slope towards tearing apart a brand’s reputation from within.