(Ironically, I used GPT-4 to translate my gibberish, unstructured original text into something readable.)
Understanding GPT-4's core function as a robust token-prediction mechanism is straightforward for many. However, a particular aspect of its functionality has piqued my curiosity: its uniform response time across queries of varying complexity.
Consider two vastly different questions:
"How many days are there in one week?"
"Can you describe the process of oxidative phosphorylation in mitochondria and its role in energy production at a cellular level?"
Intriguingly, GPT-4 takes approximately the same time to answer both. While the complexity varies significantly, the model's token prediction process seems unaffected.
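To make the observation concrete, here is a minimal timing sketch, assuming the pre-1.0 `openai` Python package (the chat interface current at the time) and the `gpt-4` model name; it divides total response time by the number of completion tokens to get a rough per-token latency. The questions and the framing are mine, not a rigorous benchmark.

```python
# Rough timing sketch (illustrative): compare seconds per generated token for a
# trivial question vs. a complex one, using the pre-1.0 openai ChatCompletion API.
import time
import openai

openai.api_key = "sk-..."  # placeholder

questions = [
    "How many days are there in one week?",
    "Can you describe the process of oxidative phosphorylation in mitochondria "
    "and its role in energy production at a cellular level?",
]

for q in questions:
    start = time.time()
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": q}],
    )
    elapsed = time.time() - start
    completion_tokens = response["usage"]["completion_tokens"]
    # Each generated token is one fixed-size forward pass through the same
    # network, so the per-token figure should be roughly constant regardless
    # of how "hard" the question is; a longer answer just means more passes.
    print(f"{q[:40]!r}: {elapsed:.1f}s total, "
          f"{elapsed / completion_tokens:.3f}s per token "
          f"({completion_tokens} tokens)")
```

If that reading is right, the uniform pace is a property of the architecture: compute per token is fixed, and total time tracks answer length rather than question difficulty.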
This observation leads me to question the concept of intelligence within AI. Intuitively, wouldn't a genuinely intelligent system spend more time tackling complex problems than simple ones? If the pace of response remains the same regardless of the question's complexity, does it suggest a lack of nuanced understanding on the AI's part?
Recently, I encountered a thought-provoking definition of intelligence: "Intelligence is what you use when you don't know what to do next." Yet, in sharp contrast, GPT-4 behaves more like a calculator, seemingly always knowing what to do next without pause or deliberation.
I'm interested to hear your perspectives on this observation. How do we reconcile GPT-4's uniform response time with our traditional understanding of intelligence? Does the speed of response indeed correlate with the depth of understanding, or does AI challenge this notion?
I think this will change as we realize how much explicit deliberation (usually called chain-of-thought prompting) affects the quality of the output. The more complex the query, the more likely we are to want the AI to talk itself through the process of answering it, bringing it closer to how humans tackle problems.
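As a hedged illustration of what that could look like in practice (the prompt wording and the example question are my own, not anything prescribed by OpenAI), here is the same question asked directly and with an explicit instruction to reason step by step:

```python
# Minimal chain-of-thought sketch: the same question asked directly and with an
# explicit "work through it step by step" instruction appended to the prompt.
import openai

openai.api_key = "sk-..."  # placeholder

question = "A train leaves at 14:35 and arrives at 17:10. How long is the trip?"

direct = [{"role": "user", "content": question}]

chain_of_thought = [
    {
        "role": "user",
        "content": (
            question
            + "\n\nWork through the problem step by step, showing each "
              "intermediate calculation, before giving the final answer."
        ),
    },
]

for name, messages in [("direct", direct), ("chain-of-thought", chain_of_thought)]:
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(f"--- {name} ---")
    print(response["choices"][0]["message"]["content"])
```

The chain-of-thought version makes the model spend more tokens, and therefore more wall-clock time, deliberating before it commits to an answer, which is at least a step toward spending effort in proportion to difficulty.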