I’ve tried to wrap my brain around the relationship that AI assistants like ChatGPT have with the facts, with their sources, with reality.
Here’s the best analogy I can come up with. (Note that I’m from Indiana, a very agriculture-based state, so if this analogy seems crude, my apologies … I’m just speaking the language of my people. 😂)
AI assistants produce information kind of like a meat grinder produces sausage.
At the top of a meat grinder, you can add all sorts of meat … all different parts of the animal … and it all goes into the meat grinder.
What comes out? Hot dogs. Sausages. Ground beef and … other things.
What would happen if someone questioned the butcher later and asked, “What is this little part I see here?”
Likely answer: the butcher would shrug and say, “I’m not exactly sure. It’s just sausage now.”
The AI models that power AI assistants like ChatGPT learn from a wide variety of sources: some more reputable (published books and academic journals) … some less reputable (personal blogs and popular Reddit posts).
AI models don’t learn in a linear fashion like we’d prefer, with direct links back to everything they produce.
Instead, they’re more like our own brains (their architecture was, after all, inspired by the human brain). We file away lots of things in our memory and often say, “I can’t remember exactly where I heard this, but …”
We wouldn’t accept “I can’t remember exactly where I heard this” as a citation from a friend, family member or colleague. Unless we can see the information in context, there will be doubt.