“No one should trust a robot to conjure an award-winning campaign idea or draft a life-changing op-ed. At least, not yet.”
The biggest creative arts industry story to come out of 2023 so far has been the ongoing dispute between the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers (AMPTP). Negotiations have yet to yield promising results, and audiences are starting to see the consequences as production on several new and ongoing projects is paused. The casualties include late-night talk shows hosted by the likes of Stephen Colbert and Jimmy Kimmel, the next season of Netflix’s Stranger Things, and Marvel’s Blade, which was scheduled to start filming this year.
But what sets the WGA strike apart from other labour disputes is that artificial intelligence (AI) is one of the causes of the writers’ grievances. The WGA has proposed regulating the use of generative AI in writers’ rooms, and preventing it from being used as source material for adaptations. The AMPTP countered by proposing annual meetings to discuss advancements in the technology, but the WGA has refused.
Hollywood is trying to establish what its relationship with AI looks and feels like. As other industries do the same, it’s worth considering how we – creatives and content producers in the marketing space – leverage human resources and use technology to produce high-quality content.
An exercise in convenience
Though ChatGPT captured the attention of the world and subsequently set off a tech gold rush, AI has been an active agent in the advancement of technology for the last 70 years. Its roots stem from the 1950s into the 70s, when machine learning algorithms became more prolific and computers became faster and capable of storing more information. The 1980s saw the expansion of algorithmic toolkits, and in 1997, the world watched IBM’s Deep Blue AI defeat world champion Garry Kasparov at chess.
Today, AI takes the common form of automation (and not giant, thinking chess-playing machines) and is powered by big data. We have chatbots that can walk us through step-by-step processes online, algorithms that personalise our online experience and social media feeds, smart home devices, and driverless cars (though those are still a work in progress). For many, using AI is an exercise in convenience, either automating an everyday task or compiling and using data to solve problems and make decisions.
We want AI to do more stuff for us. And, as ChatGPT demonstrated, that can include writing.
Creativity is king
In a previous Clockwork blog post, I shared my experience with ChatGPT and concluded that the quality of the bot’s output was dependent on the quality of the input: the prompts you give it determine how well written its answers are. I also concluded that the technology, in its then form, was excellent for templates and archetypal forms of marketing copy. It can write you a bare-bones press release and give you the structure of a thought leadership piece. But the stumbling block came in the form of original thought: users still need the idea for a piece of content before they can type a prompt into the computer.
Creativity blooms when marketers incorporate new ideas and concepts into workable strategies. This is valuable – evidenced by the agency teams employed to come up with the right words and the right images to produce the right feelings for their clients and their clients’ customers. When it comes to marketing, creativity is not simply working to sell a product or service. It’s about creating experiences, shaping new narratives, and portraying brands and personalities in such an exciting, innovative and unique way that they stand out.
Here, AI can only compete to a point, and even then, the technology faces intense stigma – and no small amount of paranoia – from the people who would consume its content. US tech outlet CNET received intense criticism when it emerged it was publishing AI-generated articles (and the outlet’s journalists are none too happy about it). Just recently, a university professor in Texas threatened to fail his senior students after he used ChatGPT to check their work and it erroneously concluded they had used the bot to write their assignments.
Public sentiment, combined with a lack of policy regarding AI’s regulation and governance worldwide, means we’re still deciding how we’d prefer to see AI used. Here’s how I would.
AI is a tool, not a replacement
Many organisations are considering investing in AI-based solutions to fulfil creative responsibilities, but we don’t yet know the cultural and socioeconomic implications of that trend, nor how long it will last. A better question to ask ourselves is: how should we use this technology to improve ourselves?
An AI application can be used to generate content templates such as press releases or feature article formats. It can proofread copy and detect plagiarism, as well as check whether content meets SEO guidelines to achieve maximum exposure. Tech giants are also experimenting with AI in search engines – Microsoft has integrated AI technology into Bing, while Google recently released its own AI chatbot, Bard – which means writers can more easily research topics and access information without having to slog through a million query returns.
AI is not here to replace. It is here to augment, upgrade, and improve. It is a symbiotic relationship. No one should trust a robot to conjure an award-winning campaign idea or draft a life-changing op-ed. At least, not yet. Until we cross that bridge, we can rely on writers with years of experience, a through-and-through knowledge of the industry, and exciting ideas just waiting to be put to paper.