AI ethics and the fake Mackenyu interview

What can Esquire’s AI-generated interview teach us about AI Ethics?

Once again with AI we find ourselves in the land of, “You can, but should you?” 

As has been reported elsewhere, Esquire Singapore was looking down the barrel of a deadline. They had the photo spread but lacked an interview with Mackenyu, and so, “Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses.” They further tried to justify it with this turn of phrase: “Nature abhors a vacuum, and in its place, a story fills the hollow.”

They also made a declaration of their AI use before the “interview”: “The following interview was produced with Claude, Copilot, and edited by humans.”

There are two levels on which I want to approach this, and I’m hoping that this allows us to think more about the kinds of things that are getting published on the internet. First, I’ll look at this as a longtime editor and writer. Then I’ll pull back and ask questions at the AI ethics level.

Can you even call it an interview?

An interview implies, quite simply, sitting down and talking to someone. AI is challenging that definition; we are, for example, seeing job interviews conducted by AI bots on humans. But definitionally, an interview should be one person talking to another. When you write for a publication, you as a journalist are asserting that you did in fact talk to the interview subject and that, to the best of your abilities, you are representing that conversation accurately in your writing or other medium.

If you did not actually do the interview but you wrote something as if you did, I think it’s fair to call that a fabrication. Which is a nice way of saying you made it up. But what the Esquire team did was feed actual previous interviews into chatbots, then edit the chatbots’ responses.

Would there have been a way to deal with this deadline on an editorial level that didn’t use AI? Yes. You can write a profile. You do research, try to interview other relevant sources, and even pull from other interviews and documents to build out your piece. An interview is always ideal, but you can still write profiles about subjects who decline to be interviewed. It will “fill the hollow” in a more honest and thoughtful way than an interview with an AI chatbot would.

It would appear instead that the Esquire editorial team is all in on AI, cheeky enough to let the piece open with an intro about trying to get an interview and being ghosted before going into the AI interview. On the level of “we have to publish something,” they checked the box.

Now the question for readers is: is this okay? There’s actually a fair amount of backlash (this video, for example, does a good breakdown and raises good ethics questions), because if you’re a fan of Mackenyu, what does this chatbot regurgitation give you? And if you aren’t a fan, then this bot version of Mackenyu is not an accurate representation of him. Like a lot of AI, it flattens personality down to the middle. Is this just a piece of slop? Because even wrapped up in all this set dressing and actual photos of Mackenyu, it’s a pretty horrid read.

When a reader sees the Esquire brand and expects one of their journalists to talk to a subject and they get this, what effect does that have? Does that cheapen the brand? Or maybe people don’t care? Or does Esquire think this is a cool new way to use AI in journalism and maybe if other subjects don’t show up for their interviews then they’ll just keep doing this? 

The Imitation Game 

If we pull back and look beyond this singular “interview,” we are introduced to an array of ethical issues. If you think about the “self” and your “identity” enough, you’ll be very aware of the different identities we occupy or perform. We perform one identity when hanging out with our friends, another at a party with our extended relatives, and another when we are alone with our significant others. Those are all us, but different versions of us. I actually wrote about this in reference to the Black Mirror episode “Be Right Back,” and some of the ideas there become very relevant here.

In “Be Right Back,” Domhnall Gleeson’s Ash dies in a tragic accident. In her grief, his partner Martha, played by Hayley Atwell, resurrects him based on his social media posts, messages, and other online activity. While this provides some comfort at the onset, as the episode progresses she becomes increasingly aware that he is a copy of Ash, not the real Ash.

When we think of it this way, we come back to Mackenyu: 1) had he participated in an interview, that would indeed have been a performance, but you would’ve gotten the Mackenyu that the journalist got to experience; and 2) the AI reconstruction of Mackenyu is a bunch of old interviews plus an LLM making assumptions about how he would answer new questions. Academically or creatively, there might be some merit in that kind of art experiment. But as a piece of published journalism, building a bot version of someone without their consent and then publishing the bot’s answers is ethically problematic.

This is close to another recent issue. Superhuman, which runs Grammarly, launched a feature inside Grammarly called Expert Review. It allowed users to have their writing reviewed by AI personas of famous writers and editors, including Stephen King, Carl Sagan, Kara Swisher, Casey Newton, Nilay Patel, and more. None of them gave consent. So this is a case of a company building and deploying bots of people without consultation, consent, or compensation, and offering them to the world for use. The feature has since been pulled due to both outrage and litigation.

Again we come back to the idea that even if those bots had been built on the various authors’ and editors’ writing and work, they would not necessarily be accurate versions of them. They would be approximations. And I’ll be honest here: I built a bot of myself from my writing, which I sometimes use to analyze and push back on my own writing. But that’s me making the decision and using it for myself.

In all these cases we have a lack of consent followed by very public deployments. When we talk about ethical behavior, we should at the bare minimum have consultation and consent. In addition, and especially for products that generate revenue, creators should expect compensation when their work is used. I will keep saying it: if you’re smart enough to build an AI system, then you should be smart enough to build compensation structures for those whose work you built on top of.

Using AI to make up for deficiencies or holes is definitely tempting. But as with everything in AI, we need to stop and ask if this is how we want the technology to be used. Would you want someone making a bot of you without your consent? And what happens if they publish an interview with that bot, even with “proper declarations of AI use”?
