Each month we pose a question to the brilliant Peter Houston, co-host of the Media Voices podcast, who will answer in his fabulously inimitable way. This month’s question comes from editorial development director, Andy Cowles.
Housty, we have a problem
How do you feel about publishers using Dall-E AI technology to create original imagery for social content? Will this post-truth facility require some kind of taxonomy, in the same way we label ‘branded content’?
Recently, the press has been full of examples of content created by AI (artificial intelligence) – clever computer programmes capable of mimicking human content creators. From the text created by ChatGPT to the images created by DALL·E 2, the narrative is that robots are replacing creators.
For the most part, I’m pretty relaxed about publishers using AI to generate images.
Firstly, I see AI image generators as a useful alternative image source for art that can be used to illustrate everyday text. For me, there isn’t a great deal of difference between using AI and using stock images and my hope is that, once all the hype has died down, AI imagery will just be another style of incidental art that publishers can turn to.
As with free-to-use image libraries like Unsplash, if it increases publishers’ access to interesting art at low or no cost, then I see that as a positive. You can argue that if I go looking for a red kettle on Unsplash, I’ll be able to choose from a range of atmospheric real-world imagery. But if all I actually want is a red kettle, what Craiyon delivers is pretty passable for all but the most serious red-kettle connoisseurs.
Kettle image from Unsplash versus…
…Kettle image generated using AI.
Secondly, I agree with Nick Cave. While not all AI output is “a grotesque mockery of what it is to be human”, I simply don’t believe computer content will ever completely replace human content.
For truly original words and pictures we will continue to need the quirks and subtleties of the human mind. And when editors need something progressive, nuanced or original to run alongside their text, they will turn immediately to human illustrators and photographers just as they have always done.
AI can only manipulate what already exists and, because it can only draw on the pre-existing dataset of human knowledge, it regularly reinforces stereotypes and even sexist and racist conventions. Ask an AI to draw you a CEO and it’s likely you’ll get a white male.
And this is where I see cause for concern – where AI is used lazily, without human supervision, or maliciously, to further negative stereotypes or cause harm.
At the moment, it is reasonably easy to tell when software has been used to create an image – just look at the edges of that kettle. But where there is any risk that the ‘artificiality’ of an image could be in doubt, an AI label makes absolute sense. Transparency is crucial.
One thing that is certain – AI is not going to go away.
The investment that Microsoft alone is making in the technology guarantees that. And if the rumours are true, it could integrate image generation software into its Bing search engine.

For me, where any technology is used as a tool to enhance creativity, it is a force for good. Where it is used to cut corners, mislead and spread misinformation, then we should be enforcing strict labelling and even regulating to remove the worst offences.