When AI Is Trained on AI-Generated Data, AI Goes MAD

The article discusses how artificial intelligence (AI) is used both to train models and to generate data for them, including synthetic data that mimics real-world scenarios. It highlights the promise of this technique across industries and research fields, while stressing the ethical considerations and potential biases that can creep into generated data.

This article explores what happens when synthetic, AI-generated content is fed back into the same AI that produced it. The result is a self-consuming loop that yields mangled, bland outputs. Researchers have dubbed this condition “Model Autophagy Disorder,” or MAD, likening it to the model developing a “self-allergy” that surfaces after about five cycles of training on its own output.
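To make the self-consuming loop concrete, here is a minimal sketch of an autophagous training cycle. It is an illustration under our own assumptions, not the researchers' actual setup: the “model” is just a Gaussian fitted to its data, and the step that keeps only “typical” samples is a crude stand-in for the sampling bias the MAD research examines. The 1.5-sigma cutoff, sample sizes, and seed are all arbitrary choices.

```python
import numpy as np

# Toy stand-in for a generative model: a Gaussian fitted to its training
# data. Each cycle the "model" is retrained purely on its own outputs,
# and only the most "typical" samples are kept -- a crude proxy for the
# sampling bias studied in the MAD research. All names and numbers here
# are illustrative assumptions, not the paper's actual setup.

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50_000)  # the "real-world" data

for cycle in range(1, 6):  # roughly the five cycles the article cites
    mu, sigma = data.mean(), data.std()            # "train" on current data
    samples = rng.normal(mu, sigma, size=50_000)   # "generate" synthetic data
    # Curate: keep only samples near the mode, discarding the tails.
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
    print(f"cycle {cycle}: mean={mu:+.3f}, std={sigma:.3f}, kept={data.size}")
```

Because each cycle discards the tails of its own distribution, the fitted standard deviation contracts by a fixed factor (roughly 0.74 per cycle with this cutoff), so after five cycles the outputs have visibly collapsed toward a bland average, a toy analogue of the degradation the article describes.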

Editor’s Note: This situation can be likened to echo chambers on social media, where people who hold the same worldview are trapped in a digital prison in which everyone agrees on the same reality. In these spaces there is no room for debate or critical thinking. If AI breaks down after five cycles, we wonder: what happens to humans trapped in homogeneous digital circles?

