Where is the “AI Whisperer”?
The ability to clearly communicate our desires to AI is a prerequisite to realizing any big dreams we have for the technology.
The more abstract our request to an algorithm, the harder clear communication becomes. “Improve education.” “Help people find true love.” “Promote justice and peace.” “Further economic prosperity.” These requests are basically meaningless because of the ambiguity of the language. The rub is that an ambiguous term like “happiness” cannot be broken down into simple metrics and data points; to do so would compress the complexities of human desire into an elegant but ineffectual algorithm. We need tools to decode our deep, complex emotions, desires and values. Then we need tools to encode what we find into AI, with nothing lost in translation. If we can’t do this, AI will have to work from the incomplete information available to it, generating unsatisfactory results for users.
Let’s say I have a highly intelligent algorithm at my service. I ask it to “make me enjoy going to the gym.” Well, “enjoy” is abstract and ambiguous.
I have a couple of options. I could just tell the algorithm what I want (what I really, really want), or I could allow it to observe everything I do and determine what enjoyment means for me, with my guidance along the way.
The problem is that most people cannot identify what actually makes them enjoy going to the gym. We lack the necessary self-honesty and reflection. It’s not that I don’t know what’s best for me (although that may also be the case) but that I can’t precisely articulate what’s best for me.
Ultimately, my very emotions and desires are complex algorithms in and of themselves, and decoding them is complicated. I don’t know what would really make me enjoy going to the gym. I may have an inclination, but it’s probably wrong. We tend to be dishonest with ourselves about our true motivations, desires and emotions.
What if I instruct the AI to “make me enjoy going to the gym by reducing the physical discomfort I experience while exercising”? It would probably find me a drug that could either keep lactic acid from building up in my muscles or block my brain from registering its effects. In this scenario, I’d quickly realize that even though cardio is no longer painful, I still don’t enjoy working out. So then I’d say, “I actually want to be entertained while I exercise, because I get very bored.” Then the AI might provide me with a queue of entertainment options to keep me engaged while I use the StairMaster.
But I’d still be dissatisfied. It’s like a kid who cries to his mommy for food. She brings him food, and he’s still crying. Then she brings him a teddy bear, and he’s still crying.
For the AI to truly figure out how to help me enjoy going to the gym, it would need to understand me at a much deeper level. Otherwise, the AI is forced to measure my “enjoyment” by accessible, surface-level metrics and data points like endorphin levels or, even worse, a “self-reported happiness score.” Or how about looking at the data for how much I was “smiling” as I worked out? I’m cringing. Not all data is created equal. There may be smiles and endorphins and self-reported “happiness,” but I still don’t enjoy working out. Taken to its extreme, this becomes a chicken-and-egg problem: which came first, the happiness or the data reporting it? Data is most effective when it silently works to improve our lives, not when it’s constantly in our face.
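To make the failure mode concrete, here is a minimal sketch (in Python, with invented signal names and weights) of what an AI confined to surface-level proxies would have to do. The pattern it illustrates is sometimes called Goodhart’s law: when a proxy becomes the target, it stops tracking the thing you actually care about.

```python
from dataclasses import dataclass

@dataclass
class WorkoutSignals:
    """Surface-level proxies an AI could plausibly measure (all hypothetical)."""
    endorphin_level: float      # 0.0-1.0, from a wearable
    smile_seconds: float        # seconds of detected smiling, from a camera
    session_minutes: float      # length of the workout
    self_reported_score: float  # 1-10 post-workout "happiness" survey

def proxy_enjoyment(s: WorkoutSignals) -> float:
    """Naive enjoyment estimate: a weighted blend of observable proxies.

    The weights are invented. The deeper problem isn't the weights but the
    inputs: none of these signals *is* enjoyment, so optimizing this score
    optimizes the proxies, not the experience.
    """
    smile_ratio = min(s.smile_seconds / (s.session_minutes * 60), 1.0)
    survey = s.self_reported_score / 10
    return 0.4 * s.endorphin_level + 0.3 * smile_ratio + 0.3 * survey

# A session can score well on every proxy while I still hate the gym:
session = WorkoutSignals(endorphin_level=0.9, smile_seconds=1800,
                         session_minutes=45, self_reported_score=8)
print(f"proxy enjoyment: {proxy_enjoyment(session):.2f}")  # ~0.80, and yet no joy
```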
Plus, there’s a difference between pleasure and fulfillment. To understand how to fulfill me, the algorithm needs to know me at a deep level. An analysis of my lifetime browser history and biofeedback is not enough.
At a certain point, the algorithm needs a more precise picture of my internal experience: how does the gym really make me feel? What matters is my internal experience and how I filter it. Maybe I enjoy the gym because there’s a cute girl I get to chat with while I’m there? Or maybe I look forward to the habit of sipping an energy drink before I arrive at the gym? Or perhaps I go to the gym because of social mirroring: the people I secretly admire have big muscles, so I want big muscles. Or do I go to the gym to blow off steam? Or to escape something going on at home? I would never be able to just tell the algorithm these things, because I’m not fully aware of them. The subtleties that influence our behavior often elude us. We think we know ourselves, but we only possess a limited picture, filtered through the thick fog of ego. With lots of data, the AI might uncover some patterns, but it still wouldn’t be able to explain itself and answer the question, “why does Aaryn enjoy going to the gym?”
Even if I could vividly articulate my internal experiences to the AI throughout the day (“right now I’m feeling X”), a lot would be lost in translation. For one thing, most people are just terrible at understanding and then verbalizing their authentic emotions and experiences. For another thing, language inherently limits the parts of our internal experience that we can express.
So, we have two hard problems. First, there’s the problem of decoding the algorithms of human emotion, desire and experience: what do you really mean? What do you really want? How do you really feel? We can get better at this with tools that have existed for thousands of years: meditation, psychedelics, self-reflection, crisp prose. But we still need better tools; maybe biotech can help with that.
Second, there’s the problem of encoding into the AI what we find. An AI Whisperer needs to explain the insights to the algorithm, in a language and with metrics that it can understand.
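As a toy illustration of this second step, here is what encoding decoded insights into a machine-readable form might look like. The schema, drivers and weights below are entirely hypothetical; the point is that the serialization is the easy part, while filling the structure in truthfully is the hard part.

```python
import json

# Decoded motivations: what actually drives me, uncovered through honest
# reflection. Everything here is hypothetical and invented for illustration.
decoded_motivations = [
    {"driver": "social_connection", "evidence": "chats with a gym friend", "weight": 0.5},
    {"driver": "stress_relief", "evidence": "goes after hard workdays", "weight": 0.3},
    {"driver": "status_mirroring", "evidence": "admires muscular peers", "weight": 0.2},
]

def encode_for_agent(motivations: list[dict]) -> str:
    """Serialize decoded motivations into a machine-readable objective.

    The serialization is trivial; the hard part is that a human had to
    discover these drivers and weights first. That discovery is the
    AI Whisperer's job.
    """
    total = sum(m["weight"] for m in motivations)
    drivers = {m["driver"]: round(m["weight"] / total, 2) for m in motivations}
    return json.dumps({"objective": "enjoy_gym", "drivers": drivers}, indent=2)

print(encode_for_agent(decoded_motivations))
```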
Decoding the complexity of human emotion, desire and behavior has been the job of religious prophets, poets and artists for centuries. Now we need to encode those complex insights not into art, music, poetry or movements, but directly into a language that an algorithm can understand, so it can generate the art, music and poetry for us.
This is how the human need to be “creative” will shift. We’ll become translators, decoding the hieroglyphics of humanity’s values, emotions and desires, and then encoding them into a language that AI can understand. To do this, we’ll all need to become more honest and authentic. We’ll be forced to look in the mirror. We’ll have to hold up mirrors to our friends. They might not like what they see. But you have to know what you’re working with before you can transform. Michael Jackson was ahead of his time.
We’ve spent the past several thousand years getting to know ourselves. It’s time to look inward and begin to clearly communicate what we want to the gods that we’re building. Misunderstanding is the root of all evil.