2 Comments

We're going to be hearing a *lot* about Model Autophagy Disorder (MAD) in the near future. This is where the output from an LLM is fed back into it as input (or when LLMs consume the output from other LLMs). The result is a complete breakdown of the model into self-referencing feedback loops, similar to what happens when people, say, get all their information from Fox News. In other words, the worst thing that happens is not when LLMs generate incorrect output. It's when they believe that their incorrect output is *right*, and act accordingly.
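The feedback loop described above can be sketched with a toy stand-in for an LLM. This is purely illustrative (not from the original comment): a Gaussian "model" is repeatedly refit on samples drawn from its own previous fit, and the estimated spread tends to collapse toward zero over generations, a simple analogue of a model consuming its own output. All names and parameters here are hypothetical.

```python
import random
import statistics

def autophagy_sim(generations=200, n_samples=50, mu=0.0, sigma=1.0, seed=0):
    """Toy model-autophagy loop: each generation, refit a Gaussian
    on data sampled from the previous generation's fit."""
    rng = random.Random(seed)
    history = [sigma]
    for _ in range(generations):
        # "Generate" data from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ...then "retrain" the model on its own output.
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

hist = autophagy_sim()
print(f"spread: {hist[0]:.3f} initially, {hist[-1]:.3f} after {len(hist) - 1} generations")
```

With the loop closed on itself, the estimated spread drifts downward each generation, so later "models" see an ever-narrower slice of the original distribution, which is the degenerate feedback the comment warns about.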


Joe, this was a wild scenario. I saw the post you referenced. The threaded responses were wild too.
