Is Generative AI the Digital Tower of Babel? Pride Comes Before a Fall

Imagine a mysterious disease ravaging a remote tribe in Papua New Guinea. The brains of those affected degenerate rapidly, and scientists eventually trace the cause to a funerary ritual in which the dead are consumed. The disease, later known as kuru, spreads because the tribe is, quite literally, eating itself.

Now, decades later, we see a similar phenomenon in the world of artificial intelligence (AI). No, our computers haven’t started eating each other (thank goodness!), but something disturbing is happening: Generative AI systems (hereinafter simply AI for readability) are increasingly “consuming” their own output, with potentially disastrous consequences.

Welcome to the world of 'model collapse' and 'shitification': two-dollar words for a multi-billion-dollar problem.


The Digital Tower of Babel

Let’s go back to the Biblical story of the Tower of Babel. Mankind, overconfident and ambitious, decided to build a tower that would reach to the heavens. The result? A divine intervention that led to a cacophony of languages and the dispersion of humanity.

Fast forward to today, and we see a similar hubris in the AI world. We build ever larger, more powerful models, fueled by a seemingly endless stream of data. But what happens when that data stream dries up, and what remains becomes increasingly polluted with AI-generated nonsense?

Model Collapse: When AI Eats Itself

Let’s delve into the fascinating, if worrying, phenomenon of 'model collapse', a term coined by Shumailov et al. [1]. Imagine an AI that increasingly resembles itself, like a digital Narcissus falling in love with its own reflection. Sound like science fiction? Unfortunately not.

Early Model Collapse: The First Cracks

In the early stages of model collapse, the AI system starts to subtly lose information, especially about rare events. It is like a painter who keeps copying his own paintings - with each copy, a little bit of detail is lost.

In concrete terms, this means that a generative AI model trained on its own output will produce less and less variation. Suppose we have an AI that writes news articles. Over time, its articles about rare events, such as a lunar eclipse or a political earthquake, would become increasingly generic. The nuances and unique details would slowly but surely disappear.

For generative AI systems, this means a gradual decline in creativity and diversity. The 'long tail' of possibilities becomes shorter, making the output more predictable and less interesting. And precisely because underrepresented groups live in that long tail of the data, its disappearance can lead to discrimination against minorities.
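To make this concrete, here is a minimal Python sketch: a 'model' that knows nothing but a mean and a standard deviation, retrained each generation on samples of its own output. This is our own toy illustration, not the setup from the paper, and it assumes the pipeline mildly prefers 'typical' samples (within two standard deviations), as many generation pipelines implicitly do.

    import random
    import statistics

    random.seed(0)

    mu, sigma = 0.0, 1.0  # generation 0: the real, human-made data
    for gen in range(10):
        # The current model generates the next training set.
        samples = [random.gauss(mu, sigma) for _ in range(5000)]
        # Mild quality filter: keep only 'typical' output within 2 sigma.
        kept = [x for x in samples if abs(x - mu) < 2 * sigma]
        # Refit the model on its own (filtered) output.
        mu, sigma = statistics.fmean(kept), statistics.stdev(kept)
        print(f"generation {gen + 1}: sigma = {sigma:.3f}")

Run it and sigma shrinks by roughly 12% per generation: the 'lunar eclipses' of this toy world simply stop being generated.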

Late Model Collapse: The Digital Implosion

If we continue this process, we will enter the phase of late model collapse. This is where things really go south. The AI system loses the ability to distinguish between different types of information. It is as if our painter has now mixed all the colors together into an indefinable grey mass.

At this stage, a generative AI system produces output that is virtually identical, regardless of the input. Our news writing AI would now produce articles that are all similar, whether it’s about sports, politics, or the weather.

The implications of this are far-reaching. For example, a chatbot would give the same answers to completely different questions. An AI that codes would generate the same structures over and over again, regardless of the project specifications. The output becomes a kind of digital uniformity - predictable, boring and often just plain wrong. 

The vicious circle of self-reinforcement

The tricky thing about model collapse is that it has a self-reinforcing effect. As the output of the AI system becomes more homogeneous, the data on which the system trains also becomes more homogeneous. This accelerates the process of model collapse, leading to a downward spiral of quality loss.

For generative AI, this means that, without intervention, we may end up with systems that produce nothing but “noise” – output that seems statistically plausible but is substantively meaningless.
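This feedback loop is easy to reproduce in miniature. The sketch below is again our own toy example: it trains a tiny bigram text model on an invented seed corpus, lets it generate a 'corpus' of its own, and then trains the next generation only on that output. The number of distinct word pairs serves as a crude proxy for diversity.

    import random
    from collections import Counter, defaultdict

    random.seed(1)

    # Invented seed corpus standing in for human-written text.
    text = ("the sun rises in the east . the rain falls in the west . "
            "a rare eclipse darkens the sky . markets move after the vote .")
    tokens = text.split()

    def train(tokens):
        # Fit a bigram model: successor counts for every word.
        model = defaultdict(Counter)
        for a, b in zip(tokens, tokens[1:]):
            model[a][b] += 1
        return model

    def generate(model, start, length):
        # Sample a sequence proportionally to the learned counts.
        out = [start]
        for _ in range(length):
            successors = model.get(out[-1])
            if not successors:
                break
            words, weights = zip(*successors.items())
            out.append(random.choices(words, weights=weights)[0])
        return out

    for gen in range(1, 7):
        model = train(tokens)
        tokens = generate(model, "the", 80)  # next generation trains ONLY on this
        distinct = len(set(zip(tokens, tokens[1:])))
        print(f"generation {gen}: {distinct} distinct word pairs left")

Every word pair that happens not to be sampled in one generation is lost to all later ones, so diversity can only ratchet downwards. That one-way door is exactly the vicious circle.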

Shitification: The Downward Spiral of Digital Content

In the corridors of Silicon Valley and beyond, this process is also called 'shitification'. An inelegant term, but one that perfectly captures the essence: the steady deterioration of the quality of digital content.

Imagine this: an AI writes an article, another AI reads and ‘learns’ from it, and then produces another article. Repeat this process a few times, and before you know it, you have a digital equivalent of the game of Telephone, minus the humor.

Light at the end of the tunnel

But don’t worry! There is hope. By being aware and critical of AI technology, we can reap its benefits without falling into its pitfalls. Here are some strategies:

  1. Keep the human in the loop: AI is a great assistant, but a terrible boss. Provide human oversight and ultimate accountability.
  2. Be critical: Use AI as a targeted solution for specific problems, not as a magic wand for everything.
  3. Evaluate regularly: Recalibrate AI systems often with fresh, human-verified data. Think of it as a digital check-up (see the sketch after this list).
  4. Diversity is key: Provide diverse and representative datasets. An AI trained only on data about middle-aged men will see the world through their eyes.
  5. Transparency above all: Make it clear when and how AI is used. Publish AI impact analyses, develop visualization tools, and write clear documentation.
  6. Bring the experts together: Encourage collaboration between AI nerds, policymakers, ethicists and domain experts. Diversity of thinking leads to better solutions.
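To illustrate point 3, the sketch below extends the toy Gaussian model from earlier. It compares a closed loop (0% fresh data) with a loop that mixes 30% fresh, human-made data into every training generation; the sample sizes and shares are assumptions chosen purely for illustration.

    import random
    import statistics

    random.seed(0)

    def refit(mu, sigma, human_share):
        # One training generation: the model's own output, mildly filtered,
        # optionally mixed with fresh human-made data.
        synthetic = [random.gauss(mu, sigma) for _ in range(4000)]
        synthetic = [x for x in synthetic if abs(x - mu) < 2 * sigma]
        fresh = [random.gauss(0.0, 1.0) for _ in range(int(4000 * human_share))]
        data = synthetic + fresh
        return statistics.fmean(data), statistics.stdev(data)

    for share in (0.0, 0.3):
        mu, sigma = 0.0, 1.0
        for _ in range(15):
            mu, sigma = refit(mu, sigma, share)
        print(f"human share {share:.0%}: sigma after 15 generations = {sigma:.3f}")

The closed loop keeps shrinking towards zero, while the anchored loop settles at a stable spread: fresh, human-verified data acts as ballast.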

A call to action

The challenges are big, but not insurmountable. At Highberg, we are passionate about the responsible use of AI. 

We offer:

  • Expert advice on AI implementation (without the hype)
  • Guidance in the development and application of robust, 'shitification-resistant' AI systems
  • AI ethics training (no, not boring!)
  • Advice on the use of transparent AI systems

Are you a policymaker, CIO or CDO? Let’s build a future together where AI is a trusted partner, not a digital Frankenstein.
We are writing the future of AI in our society. Not with an AI-generated script, but with human creativity, critical thinking and a good dose of common sense.

Contact us today. Together, let’s prevent the digital Tower of Babel from collapsing before it’s finished.
