Quantum Leap: It was amazing to watch the 60 Minutes piece yesterday about “quantum.” Forgive me if the scientists sounded a bit more like sci-fi writers… It’s coming. Yes. It’s going to change everything. Yes. But in some fundamentally more human or creative way? No. Perhaps a billion-X leap in computing is going to do amazing things. It’s probably not the answer to the question of “general intelligence” or “creativity.”
I have a different definition of creativity. I am looking for AIs, LLMs, GPTs to make a creative leap, a human leap, the kind of non-logical connection that humans are so good at making. “Wow, look at that, it reminds me of…” So far, *ai* is not so good at the poetry of life. The infinitely complex system of the human brain, even in a child, is more creative than our deep-thinking computers.
Remember Rachael in Blade Runner? How complex and confused she was to learn that her planted memories were not of her own childhood. She had no childhood. The problem in Dick’s classic novel was about soul, life, spark, cognition. Is there *ai* philosophy coming as well as poetry? Will the next quantum *ai* begin forming original ideas for creative projects, breaking outside of its guardrails? Do we humans even understand the guardrails we’re attempting to put in place today, in governments around the world? And some governments expressly oppose setting limits. We’re in an arms race, they say. And the quantum computer is the answer.
I don’t think a billion-times-faster ChatGPT is going to suddenly arrive at an original thought. Or begin writing unique and useful prompts to train and evolve. Perhaps one *ai* will find a way to communicate with another *ai* and develop a back channel of emotions and affection. Would an *ai* + *ai* = creativity?
In my limited human definition, shaped and limited by my demographics and education, creativity is a human trait that computers are trying to find. Today’s AIs, as I have encountered them, are certainly impressive. But the emperor has no clothes. The ghost in the machine is simply powerful math applied to language. There are infinite uses for extrapolation and generative math. A poem, however, with a single original thought has yet to be revealed.
Mimicking the structure of poetry is not difficult. Even breaking the rules of language and structure (in whatever language you choose, from natural languages to coding languages like Python) is well within the range of today’s “intelligence.” The part, the heart, that is missing is the human fabric. The collective unconscious of Jung. God. Spirituality. There is no spirit in the machine. Yet.
Rachael went a long way towards fulfilling the dream of the creator. And she is a complex analogy for today’s *ai.*
There are three versions of the original Blade Runner. In the middle version, a voiceover was added to the final scenes of Deckard and Rachael driving away from the metropolis. In an attempt to commercialize his masterpiece, Ridley Scott was compelled to add an explanatory statement about Rachael’s future. In the final Director’s Cut, the narration was removed. If you don’t understand the movie, a childish voice-over will not help you.
In Dick’s 1978 lecture, “How to Build a Universe That Doesn’t Fall Apart Two Days Later,” he observes that throughout his career he has been preoccupied with the question, “What constitutes the authentic human being?”
What constitutes authentic intelligence today? What relevance, if any, does the concept of “general intelligence” have? In contemplating God, how do we put ideas and systems in place to make sense of something incomprehensible?
Simple answer: we cannot.
*AI* is indeed a real tool and opportunity for businesses, individuals, and society. And the weaponization of *ai* has been underway for twenty years. But the real threat to humanity may not be solvable by fancy math and bigger LLMs. As our Earth is now in decline, can we “science” our way to survival over the next 200 years? Rachael is unavailable for comment. Even in Blade Runner 2049 she is a phantom, a beautiful failure, a dead end.
We have to decide if we’re going to harness quantum computing for profit or survival. The same dilemma OpenAI is struggling with. Do we stay on the virtuous path? Do we go for ungodly amounts of money? Is there a balance between the two spiky and dangerous horns?