Nada R. Sanders, Ph.D., is an internationally recognized AI thought leader and expert in forecasting and global supply chain intelligence. Ranked in the world’s top 2% of scientists, she’s the author of 100-plus scholarly publications and seven books, including The Humachine: AI, Human Virtues, and the Superintelligent Enterprise.
The assumption is that achieving the benefits of AI requires companies to make huge financial investments. I have been an expert in business forecasting for almost 30 years and know that this is not the case. Throwing more data and money at a model brings incremental gains at best, and often degrades accuracy. Enterprises should take note: significant AI capabilities can be achieved with far less investment than previously thought.
The recent success of DeepSeek, a Chinese AI startup, proves this point. In January, DeepSeek released its latest model, R1, with performance that reportedly matched that of technology developed by ChatGPT-maker OpenAI, while costing far less to use. The news made headlines worldwide and sent US tech stocks sinking. DeepSeek’s founders reportedly created the AI with cheaper, less sophisticated chips and ended up with a more efficient process.
How is this possible?
Bigger is Not Better
The assumption that building cutting-edge AI models requires huge amounts of money overlooks ingenuity and creativity. Brute force is also crude: it lacks finesse and genuine problem understanding.
A basic tenet of any predictive modeling – including AI – is that good results require clean data. Clean data is very difficult to achieve with the massive datasets used by large AI models. As a result, we see hallucinations and errors. The proverbial “garbage in, garbage out” is very much the case here.
What set R1 apart was not just its performance—which reportedly matched that of leading AI models—but how it achieved this. The company used a modest number of less advanced AI chips at a fraction of the cost typically associated with such feats. DeepSeek attributed this accomplishment to innovative engineering techniques that compensated for the lack of high-end computational resources.
One of the most significant assumptions now under scrutiny is the belief that developing state-of-the-art AI models requires enormous financial investments in cutting-edge chips and expansive data centers. This has been a foundational principle for tech giants like Microsoft, Meta, and Google, which have collectively invested tens of billions of dollars in the infrastructure needed to create and operate next-generation AI models. OpenAI, for example, recently announced a $500 billion joint venture with Oracle and SoftBank to support its AI ambitions.
In stark contrast, DeepSeek reportedly spent a fraction of these sums to develop R1. While the company claims to have spent just $5.5 million training a previous model, even if R1’s actual development costs were ten times higher, it would still represent a monumental cost efficiency compared to other efforts. This revelation highlights that massive spending may not be the only path to cutting-edge AI.
DeepSeek’s achievement disrupts the “bigger is better” narrative that has dominated the AI race. It demonstrates that smaller, more efficiently trained models can rival or surpass the performance of larger, more resource-intensive ones.
This paradigm shift should be eye-opening for business leaders: invest in smaller AI initiatives that are focused and well curated, rather than in the brute force of large AI models.
Achieving More with Less: Kasparov’s Law
Substituting clever engineering for raw computing power is nothing new. Think of Kasparov’s Law, born out of the iconic 1997 chess match between Garry Kasparov, then the world’s leading chess grandmaster, and IBM’s Deep Blue supercomputer. Kasparov’s unexpected defeat by the machine was a pivotal moment, marking the first time he had lost a match, and to a computer, no less. Kasparov became determined to explore the dynamic between human intelligence and machine capability.
In 2005, a freestyle chess tournament allowed teams of human players and computer algorithms to compete together. Surprisingly, the winners were not grandmasters paired with sophisticated supercomputers but two amateur chess players using their ‘homegrown’ methods on ordinary machines. Their success did not come from powerful hardware. Rather, it came from their ability to effectively guide and manage their modest chess engines. They strategically “coached” their algorithms, enabling them to outperform much stronger players backed by far greater computing power.
From this, Kasparov formulated what is now known as Kasparov’s Law: a weak human plus a machine plus a better process is superior to a strong computer alone, and, more remarkably, superior to a strong human plus a machine plus an inferior process. The principle underscores the importance of synergy between human experience and machine efficiency. It illustrates that the true power of AI is unlocked when it is thoughtfully integrated with human expertise.
The Key is Effective Human-AI Integration
Kasparov’s Law highlights that success doesn’t come from having the most advanced AI tools or the brightest human minds, but from designing processes that harmonize their strengths. The right process can transform ordinary resources into extraordinary results, whether in chess, medicine, manufacturing, or business.
In a recent conversation with Grandmaster Kasparov, he told me that machines will outperform humans in about 95% of cases. However, the critical edge lies in the remaining 5%, where human experience and judgment are the key factors. To fully leverage AI’s potential, business leaders must craft strategies that integrate AI into decision-making processes, complementing human strengths rather than replacing them.
Here’s how leaders can apply Kasparov’s Law:
1. Build Strong Human-AI Teams
Leaders should foster environments where AI tools and human expertise coexist and collaborate effectively. This means creating cross-functional teams that are trained to work together and to act on AI output. For example, in supply chain management, AI can predict demand patterns, while human managers adjust for variables like supplier relationships or geopolitical factors.
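As a rough illustration of this division of labor, consider the following minimal sketch. The moving-average “model” and the adjustment percentage are illustrative stand-ins, not a real forecasting system:

```python
def ai_forecast(history: list[float]) -> float:
    """Stand-in for a real demand model: a simple 3-period moving average."""
    return sum(history[-3:]) / 3

def adjusted_forecast(history: list[float], human_adjustment_pct: float = 0.0) -> float:
    """A human manager layers judgment (supplier issues, geopolitics) onto the AI baseline."""
    baseline = ai_forecast(history)
    return baseline * (1 + human_adjustment_pct)

monthly_demand = [100, 110, 120, 130, 140]
baseline = ai_forecast(monthly_demand)            # statistical baseline: 130.0
judged = adjusted_forecast(monthly_demand, 0.10)  # manager expects ~10% upside
print(baseline, round(judged, 1))                 # 130.0 143.0
```

The point is the shape of the process, not the math: the machine produces a defensible baseline, and the human applies context the model cannot see.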
Team members need to understand both the capabilities and limitations of AI and have the digital literacy to prompt AI effectively. In turn, AI tools should be user-friendly, enabling non-technical professionals to interact with and interpret the output. Investing in worker training programs ensures that employees are equipped to use AI tools effectively while applying their domain-specific knowledge. Rod Harl, the CEO of Alene Candles, told me that ongoing training of team members is their “secret sauce.”
2. Design Processes That Enhance Collaboration
According to Kasparov’s Law, optimal results arise from processes that blend AI recommendations with human judgment. Leaders should develop workflows where AI provides data-driven insights, but final decisions rest with human experts. For instance, in customer service, AI can handle routine queries efficiently, while complex or sensitive issues are escalated to human representatives for personalized solutions.
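A toy sketch of such an escalation workflow is below; the confidence threshold and the sensitivity flag are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    ai_confidence: float  # model's self-reported confidence, 0..1
    sensitive: bool       # flagged as sensitive (complaint, billing dispute, etc.)

CONFIDENCE_THRESHOLD = 0.9  # illustrative: below this, the AI defers to a person

def route(query: Query) -> str:
    """Let AI answer routine, high-confidence queries; escalate the rest to humans."""
    if query.sensitive or query.ai_confidence < CONFIDENCE_THRESHOLD:
        return "human"  # judgment, empathy, and accountability stay with people
    return "ai"

print(route(Query("Where is my order?", 0.97, False)))    # ai
print(route(Query("I was double-charged!", 0.95, True)))  # human
```

Note that the design encodes the decision rights explicitly: the AI never gets the final word on sensitive or uncertain cases.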
In the medical setting, for example, AI can analyze vast medical datasets and suggest potential diagnoses. Doctors, however, are the ones who use their clinical experience and patient interactions to make the final call.
As Kasparov aptly put it, “AI is powerful, but context is king.”
3. Foster Continuous Feedback Loops
Successful use of AI relies on continuous feedback loops where both AI systems and human operators learn and improve. AI tools can enhance their accuracy and effectiveness by incorporating feedback from human decisions, while humans can develop greater trust and understanding of AI outputs.
In practice, this involves regularly updating AI models based on real-world outcomes and creating mechanisms for employees to provide feedback on AI performance. This dynamic process keeps AI systems adaptive and responsive to new data and evolving human insights.
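One minimal way to instrument such a loop is to track how often final human decisions agree with the AI’s suggestions and flag the model for retraining when agreement drops. The agreement threshold below is an illustrative assumption:

```python
class FeedbackLoop:
    """Toy monitor: compare AI suggestions with final human decisions and
    flag the model for retraining when agreement falls too low."""

    def __init__(self, retrain_below: float = 0.8):  # illustrative threshold
        self.agreed = 0
        self.total = 0
        self.retrain_below = retrain_below

    def record(self, ai_suggestion: str, human_decision: str) -> None:
        self.total += 1
        if ai_suggestion == human_decision:
            self.agreed += 1

    def agreement_rate(self) -> float:
        return self.agreed / self.total if self.total else 1.0

    def needs_retraining(self) -> bool:
        # Falling agreement signals the model has drifted from human judgment.
        return self.agreement_rate() < self.retrain_below

loop = FeedbackLoop()
for ai, human in [("approve", "approve"), ("approve", "reject"), ("reject", "reject")]:
    loop.record(ai, human)
print(round(loop.agreement_rate(), 2), loop.needs_retraining())  # 0.67 True
```

Even a crude signal like this makes the human-AI relationship measurable, so leaders can see when the process, not just the model, needs attention.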
Conclusion
Kasparov’s Law serves as a compelling reminder that the future of AI does not lie in the brute force of large data sets and computing power. It lies in the focused use of smaller AI models in conjunction with human intelligence. By pairing teams of workers with smaller, targeted AI models through well-designed processes, companies can unlock the full potential of AI, and do so at lower cost.