All Watched Over By Machines of Insanity

In his TV series “All Watched Over By Machines Of Loving Grace”, Adam Curtis outlines a future in which machines create a static and powerless world. A recent paper published in Nature demonstrates that AI models trained on data that AI itself generated suffer from model collapse. Maybe Adam Curtis was wrong! Maybe we are headed not for a static world, but for a nonsensical one.

The TV series explores the history of computers, cybernetics, and the internet, and how their creators imagined a world in which people are liberated by machines. People would be free to say what they want to whomever they want; they could even be free of themselves and their identities. After all, nobody on the internet knows you are a dog. The film concludes that instead of liberating people, the internet revolution distorted and simplified our worldviews. Machines give you what you already like and rarely deviate from it, further calcifying our beliefs, tastes, and activities and creating a static, powerless world. It’s an amazing TV series that is worth your time. While it does not go into AI, it will give you the context to understand what is happening in the world and in technology today.

A recent paper published in Nature, “AI models collapse when trained on recursively generated data”, looks into training successive generations of models on synthetic data. As the researchers trained each model on the previous model’s output, within a few generations the models developed irreversible defects: they started producing strange output and forgetting rare events. The AI gets dumber. The authors call this model collapse. They theorize that, over time, more and more online content will be AI-generated; this seems obvious, and I am not seeing any disagreement that we are trending this way. As the proportion of AI-generated data rapidly increases, surpassing human-created data, model collapse will happen within a small number of model generations. The researchers used pre-trained models and found this held true even when fine-tuning them. They observed that the tails of the distributions, the rare events, went away first. Then, over time, the variance of the distributions shrank until they were almost point-like, yielding models that produced lots of repeated phrases. The concept of variance here is important: diversity of content creates richness and intelligence, while lower variance, meaning lower diversity, degenerates into nonsense.
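
To see the mechanism concretely, here is a minimal toy sketch in Python. It is my illustration, not the paper’s actual language-model experiments, and the 2-sigma tail truncation is an assumption standing in for a model under-weighting rare events. Each generation fits a Gaussian to the previous generation’s output, samples from that fit, and drops the improbable tails. The spread of the data collapses toward a point, tails first, then variance, just as the authors describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-written" data, a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(20):
    # "Train" a model on the current data by fitting a Gaussian to it.
    mu, sigma = data.mean(), data.std()
    # The next generation learns only from samples drawn from that fit...
    samples = rng.normal(loc=mu, scale=sigma, size=10_000)
    # ...minus the tails (an assumption here: the model under-weights
    # rare events, so anything beyond 2 sigma never makes it back out).
    data = samples[np.abs(samples - mu) < 2 * sigma]
    print(f"generation {generation:2d}: std = {data.std():.3f}")
```

Each pass multiplies the standard deviation by roughly 0.88, so after twenty generations almost all of the original diversity is gone. Even without the truncation, a slower version of the same shrinkage happens from finite-sample error alone.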

The implications of this are serious. We have now seen several generations of new models being trained, and the content they produce is growing exponentially. We have no regulations requiring content created by AI, or even partially created by AI, to be marked as such. Given the exponential take-off in the use of this technology, it would not take long for the problems to appear.

Since people will be consuming this content, it will start poisoning not just the AI well but the human one. Even before the advent of this technology, there was a lot of discussion of people falling into content bubbles, where they see only the content they want to see and ignore anything that conflicts with their worldviews. These bubbles have lower variance, a lower diversity of ideas, and tend to keep repeating the same tropes. They are dangerous because they produce people with strange, nonsensical ideas; the QAnon movement is a great example of a content bubble and the kind of culture it leads to. There seems to be a deep connection between model collapse and content bubbles. After all, a content bubble is just people creating content and learning from that content, and only that content, reducing the variance of what they see and hear. Imagine, then, the human race consuming rapidly degenerating content created by AI. Imagine the strange, nonsensical ideas the next generation could have.
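
The analogy can be made concrete with a classic toy model of opinion dynamics, the Deffuant–Weisbuch bounded-confidence model. This sketch is my illustration from the sociophysics literature, not anything in the series or the Nature paper, and the parameters are arbitrary. People only engage with views close to their own, and every interaction pulls the two views closer; the population fragments into bubbles, and the diversity inside each bubble collapses.

```python
import numpy as np

rng = np.random.default_rng(1)

def bubble_stats(opinions, gap=0.05):
    """Split sorted opinions into groups separated by more than `gap`;
    return the group count and the average spread within a group."""
    s = np.sort(opinions)
    groups = np.split(s, np.where(np.diff(s) > gap)[0] + 1)
    return len(groups), float(np.mean([g.std() for g in groups]))

# 200 people, views spread uniformly along an opinion axis [0, 1].
opinions = rng.uniform(0.0, 1.0, size=200)
epsilon = 0.2  # people only engage with views this close to their own
mu = 0.3       # each interaction pulls both views this fraction closer

for step in range(1, 50_001):
    i, j = rng.integers(0, len(opinions), size=2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        delta = mu * (opinions[j] - opinions[i])
        opinions[i] += delta
        opinions[j] -= delta
    if step % 10_000 == 0:
        n_bubbles, within = bubble_stats(opinions)
        print(f"step {step:6d}: {n_bubbles} bubbles, "
              f"within-bubble std = {within:.4f}")
```

Run it and the population typically settles into two or three well-separated bubbles whose internal spread falls to nearly zero: the same variance collapse as above, with people instead of models.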

Adam Curtis was wrong. Computers are not producing a static world but a degenerating one. That was true before AI, with content bubbles leading to strange groupthink like QAnon and other conspiracy theories. As generations of AI models train on their own content, they accelerate a global content bubble, and we may end up with a generation of people learning strange ideas and forgetting rare events. The voices of marginalized groups will literally be forgotten, because their experiences are improbable compared to the rest. Their experiences are the tails of the distribution of human experience, and in forgetting them we will forget ourselves and what the whole of human experience is.
