Deepest Search? There Is No Deepest End of the Intelligence Pool

To quote Kevin Malone of “The Office”: “Where does it end with you people?”


We now have quite a few reasoning artificial intelligence models: OpenAI’s o1 and o3, DeepSeek’s R1, Google’s Gemini, Alibaba’s Qwen, and now xAI’s Grok 3. To simplify, these reasoning AI models are a turbocharged outgrowth of the old “let’s think step by step” chain-of-thought prompts that showed both future promise and immediate value years ago. Almost all of these models also have their own version of a “deep research” tool that reasons, searches, and produces thorough reports on a topic of your choice. These are again descendants of recursive agents that plan, use tools, and evaluate themselves before producing their output.
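To make that concrete, here is a minimal sketch of the difference between a direct prompt and a chain-of-thought prompt. The ask_model function is hypothetical, a stand-in for whatever chat-completion API you happen to use:

```python
# Minimal sketch of chain-of-thought prompting.
# ask_model() is hypothetical: wire it to a real chat-completion API.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    return "<model response here>"

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Direct prompt: the model answers immediately and often guesses "$0.10".
direct = ask_model(question)

# Chain-of-thought prompt: the model is nudged to spend tokens reasoning
# first, which is the seed idea behind today's "reasoning" models.
chain_of_thought = ask_model(question + "\n\nLet's think step by step.")
```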



Many of these latest innovations have come in the post-training rather than the pre-training phase. That means we start with a fully trained model and tweak its behavior to generate lots of “reasoning tokens,” giving it time to think before it acts.


To be sure, using words like think and reason and plan with AIs may be premature, as they could just be mimicking rather than truly experiencing. But if e-thinking leads to e-thoughts that we in fact truly appreciate, we might want to drop the “e-.”

Thinking is unlike other tasks. Thinking is not doing. Consider optical character recognition. It used to be a pain in the neck to convert PDF images of text into actual words. Some programs could do it mechanistically, but they would lack context and make obvious mistakes. Today, that task seems essentially solved. It is finished. Future AI models will not really do any better.

Similarly, tasks such as summarization do not seem to have much room for growth. Given a piece of text, there are only so many different ways to mindlessly summarize it.

Thinking is different. Thinking has no end. There is no deepest end of the intelligence pool. It’s an infinity pool that’s truly infinite.

Even among humans, who all share the same basic capacity for thought, we carefully select among types of thinkers: those who can think fastest, under pressure, and in novel ways. Or at least we try to, just as we select for athletes with superior speed, stamina, and, again, fast thinking: the ability to anticipate the opponent, train efficiently, and lead their teammates.

Malone, the lovable fictional accountant, was lamenting cupcakes in his quote. “Mini-cupcakes?” he said. “As in, the mini version of regular cupcakes? Which is already a mini version of cake? Honestly, where does it end with you people?”

With sweets, it seems to have ended with mini cupcakes. There are no subatomic nano cupcakes. One bite seems to be a natural bound.

But there is no bound for thinking. This seems straightforward, but it also seems to have been forgotten when DeepSeek released R1.

Through a combination of many brilliant innovations, and among other accomplishments, DeepSeek was able to reduce the cost of inference, of thinking, by a substantial amount. Let’s say by roughly 10-20x.

This could be viewed as bad news if it means we are now closer to reaching the “end” than we were before. If light bulbs could suddenly be manufactured 10-20x cheaper, then light bulb manufacturers would likely be worth less, simply because their expected revenues would be slashed.

Ah, but what if the amazing decline in light bulb prices made people buy 20-30x as many light bulbs! Welcome to the world of the Jevons paradox: when the price of a product falls a lot, it is possible that demand for that product rises even more, so the total revenue actually increases.

That’s a possible outcome. But it again only applies to products and services with upper bounds. In other words, if OCR or summarization technology becomes 10 to 20 times cheaper, sure, maybe people will use it 20 to 30 times more. That’s possible, and helps explain why many AI models are “distilled” or reduced in size so they can fit on your laptop or phone or watch. A lower price for the same service is a great gift.
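As a back-of-the-envelope check on that arithmetic, here is a sketch using illustrative midpoint numbers, a 15x price drop and a 25x demand increase. These figures are assumptions for the example, not from any source:

```python
# Back-of-the-envelope Jevons arithmetic with illustrative numbers:
# the unit price falls 15x while the quantity demanded rises 25x.

old_price, old_quantity = 1.00, 1_000_000       # arbitrary baseline units
new_price = old_price / 15                      # cost falls ~15x
new_quantity = old_quantity * 25                # demand rises ~25x

old_revenue = old_price * old_quantity
new_revenue = new_price * new_quantity

print(f"old revenue:   {old_revenue:,.0f}")     # 1,000,000
print(f"new revenue:   {new_revenue:,.0f}")     # ~1,666,667
print(f"revenue ratio: {new_revenue / old_revenue:.2f}x")  # ~1.67x
```

As long as demand grows faster than price falls, total revenue rises even as each unit gets dramatically cheaper.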

But thinking is not the same service. With thinking or deep research, you really need to be the best, not just the “same but cheaper.”

William Stanley Jevons documented the paradox that bears his name in 1865, writing about the surprising rise in the use of coal after the new efficiencies introduced by the steam engine. But perhaps the real driving economic insight for AI and reflection is not even named after the economist who documented it.

Economist Sherwin Rosen wrote about the economics of superstars in 1981. For context, that was three years before Michael Jordan was even drafted. Rosen’s insights countered the conventional notion that superstars were overpaid. On the contrary, he argued that scale and talent together explained the large salary gaps between the very best and the rest. When a person can reach vast audiences simultaneously, and quality or talent matters, then superstars are born.

It helps explain why the world’s best teachers, nurses, and therapists do not typically make many orders of magnitude more than those who are not as good: those are all professions that, at least until recently, have been limited in scope. It’s hard to be a teacher or therapist to millions of people at the same time, but it’s easy to entertain millions of people at the same time as a singer or an athlete, or as an AI model.

So what does it mean for the AI and tech industries if DeepSeek or others are able to discover new efficiencies? If the efficiencies are secret and hard to reverse engineer, then the benefits accrue to the discoverer. But if the secrets are open, as most if not all of DeepSeek’s are, then it is a rising tide that lifts all boats in a race to an infinite horizon: there is no end point. People will likely prefer, by a lot, the model that is smarter by only a little.

Consider trading. If my model can outthink your model even slightly, it is possible that I would earn the lion’s share of available profits. Thinking speed and thinking depth may become as highly sought after as execution speed and market depth. Think how much effort goes into marginal improvements in portfolio construction or risk management or alpha generation. It is neither ironic nor a surprise that DeepSeek grew out of a side project of a highly quantitative and technical hedge fund: its GPUs were already being used to gain a market edge with AI. Small advantages can lead to massive gains.
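A toy simulation of that dynamic, using made-up numbers (a 2% skill gap and comparable per-opportunity noise), shows how a slight edge can capture a disproportionate share of winner-take-all profits:

```python
# Toy winner-take-all simulation: per opportunity, the model with the
# higher noisy score captures the entire profit. Numbers are made up.
import random

random.seed(0)
SKILL_A, SKILL_B = 1.00, 0.98   # model A is only 2% "smarter"
NOISE = 0.02                    # per-opportunity luck
N = 100_000                     # trading opportunities

wins_a = 0
for _ in range(N):
    score_a = SKILL_A + random.gauss(0, NOISE)
    score_b = SKILL_B + random.gauss(0, NOISE)
    if score_a > score_b:       # winner takes the whole opportunity
        wins_a += 1

print(f"model A's share of profits: {wins_a / N:.1%}")
# With these numbers, A wins roughly three quarters of all opportunities,
# despite being only slightly better on average.
```

Shrink the noise relative to the skill gap and model A’s share approaches 100%: the closer the contest is to a pure race of ability, the more the slightly better model takes everything.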

We may now be in the AI equivalent of the basketball era of Rosen’s paper: In 1981, Larry Bird and Magic Johnson were taking the basketball world by storm. They are still some of the best players to have ever played the game. Our current AI models are perhaps in a similar stage. Basketball did not stop with the amazing innovations in efficiency that Larry and Magic brought: on the contrary, they set the stage for Michael Jordan, Kobe Bryant, Shaquille O’Neal, LeBron James, Stephen Curry, Nikola Jokić, and dozens of other superstars throughout the years.

We are likely nowhere near the end of what AI can do. It is quite possible there is no end.

What does that imply for managers, business leaders, investment professionals, technologists, and entrepreneurs? In a swimming competition with no fixed horizon, the winner won’t be the first off the blocks or even the one with the fastest starting speed. It will likely be the one with the most convexity, the most acceleration. What wins is improvement in the rate of improvement itself. Even constant learning isn’t enough: like strengthening a muscle or compounding interest, we need to learn more today than yesterday.
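A final toy comparison, with invented rates, contrasts a learner that improves by a constant step with one whose gains compound, growing in proportion to its current level:

```python
# Toy comparison of constant learning vs. compounding learning.
# All rates are invented for illustration.

PERIODS = 50
linear = compounding = 100.0    # same starting capability
step = 2.0                      # linear learner: +2 per period
rate = 0.02                     # compounding learner: +2% per period

for _ in range(PERIODS):
    linear += step              # constant improvement
    compounding *= 1 + rate     # improvement proportional to current level

print(f"linear learner:      {linear:.0f}")       # 200
print(f"compounding learner: {compounding:.0f}")  # ~269
```

After 50 periods the compounding learner has pulled well ahead, and the gap itself keeps accelerating, which is exactly the convexity that wins a race with no finish line.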

Reference: https://www.forbes.com/sites/philipmaymin/2025/02/21/deepest-search-there-is-no-deepest-end-of-the-intelligence-pool/
