Robo-Copy and the Replicant Effect


Something I’ve been raising concern about for the last year is the concept of “language homogeneity.” It’s an academic way of saying “everyone sounds the same.” Mostly this comes up in conversations about using generative AI (GenAI) to write your messaging, copy and content — but evidence of this happening across creative work began appearing before GenAI hit the scene.

According to my graduate research, its roots may actually be in predictive text. Yep — the predictive suggestions you get on your smartphone keyboard, and now in the body of your email editor, began chipping away at the creativity of our writing and the nuance we communicate through our word choice years before the GenAI boom.

Predictive text became a shortcut to communicate faster — not necessarily more clearly or persuasively. As this “shortcut” mindset began leaking into how we communicate in the workplace, it inevitably dripped into all facets of business communication: interpersonal, organizational, sales, marketing, you name it.

Basically, once people stopped valuing words as communication tools and started seeing them as boxes to check or levers to pull, they started relying on shortcuts to crank out the words faster, easier and cheaper.

Words were treated as widgets.

Generative AI just made those widgets faster, easier and cheaper to produce.

Enter: the robo-copy reaction. This is what I’ve dubbed the two-fold response to the deluge of generative AI tools and output.

  1. First is the response of companies: They immediately began outsourcing creative tasks to generative AI tools (whether or not they displaced human workers is another issue entirely — some companies did, and some did not).
  2. Second is the response of audiences: They began questioning the trustworthiness of the content they encountered online (the trustworthiness of the content itself, and the trustworthiness of the person or organization that produced it).

It’s this latter part of the two-fold response — the audience reaction — that I want to really dig into in this article.

Understanding the replicant effect and its impact in business

In the push for numbers, it’s easy to forget that marketing is a human-to-human communication activity. When it becomes overly mediated through technology — whether that’s overuse of automation or robo-writing — the result is the replicant effect.

Hancock et al. (2020) defined the “replicant effect” as follows: In a system that mixes human- and AI-generated writing, the trustworthiness of writing perceived to be written by AI will decline.

Note the word “perceived” there.

Whether the writing is actually written by bots or not — if the audience believes there’s a chance that what they’re reading wasn’t written by a human, they’ll trust it less.

When they don’t trust your writing, they don’t trust YOU.

Look, AI has its legitimately good uses. But there’s a real cost to using it for writing. I’m seeing more and more online audiences question whether something was written by a bot, and subsequently question whether they should trust it.

This happened on one of my own LinkedIn posts recently, in fact! I asked my followers to share their thoughts on a trend I was seeing in the AI research, and someone suggested in the comments that I might be a bot. In the end, the guy just wanted to see some of the research sources (which I happily shared in my response to his comment) — but I admit, the comment really got to me. He questioned the trustworthiness of my words, and therefore he questioned my trustworthiness. I make no money from posting on LinkedIn, but many businesses do. This will have a real, measurable financial impact on businesses as time goes on.

The cultural shift in content consumption

People act like generative AI has upended everything — but we had been living with AI and its effects on human communication for many years before GenAI became our everyday reality.

Note the date of the study I referenced above: 2020. That was two years before ChatGPT hit the scene.

We are not in a sudden decline of communication. We are at the end of a long, slow, downhill slide.

I’m not one of those futurist thinkers — though I wish I were. I don’t pretend to know what’s coming, and I don’t make predictions.

But I can tell you what I see.

And what I see is that people are actively looking for clear, creative, interesting, trustworthy communication.

They’re tuning out the replicants.

The shift in content consumption behavior only accelerated with GenAI. The humans in our audiences have a deep desire for authenticity in communication — and they are assertively and directly asking if something was written by AI.

Fewer and fewer companies can truthfully answer “no” to that question. It doesn’t take a futurist to see what the effect of this will be over time.

However, having been written by a human isn’t enough to call content human communication …

The human element in communication vs. the rising role of AI

Guzman and Lewis (2020) point out that communication has historically been seen as a human process, sometimes mediated by technology — but now AI is being designed to function as a communicator. Communication is no longer just mediated by machines; it is also produced by them. Add to this the accelerating role of automation, and the technology has outpaced our existing ethical frameworks.

In other words, we humans are currently deciding what matters to us in this new AI paradigm.

And overwhelmingly, the issue rising to the top is the human element.

Knowing that content (or communication in general) was written by a human isn’t enough for people to feel a human connection with the writer, however.

Think about all that mass-produced, cookie-cutter SEO content that flooded the web in the 10 years before GenAI. Did that feel human to you? Likely not. We all had this collective understanding that the content was written for an algorithm. Those of us who understood marketing also understood that those articles were written to specifications and highly templatized. Those words were on the page to do a job — and that job wasn’t “help the audience,” it was “make this page show up in Google Search.”

Now we have content written by AI for an algorithm. It’s robo-writing from both ends — written by robots, for robots.

But fixing one end or the other doesn’t necessarily inject humanity into the writing.

So what does?

In my experience, two things:

  1. Deeply understanding the human beings in the target audience, and writing to them, for them.
  2. Taking a narrative approach to writing. Our stories come from our human experiences, and they bring our very souls into the content.

There is a place for AI in this — but it’s not to do the writing or communicating for us. The place for AI is to help us become better communicators ourselves.

To recap

The increasing role of AI in communication is impacting trust. Our audiences are questioning what they’re reading, and they’re pushing back. The humans we’re communicating with are demanding authenticity and transparency on a greater scale.

We writers, communicators, and marketers need to show up as the humans we are. Maybe we have AI in our back pocket to help us organize our thoughts or find more vivid word choices — but our role is to be human.