To ubiquity and beyond
AI: It's here. It's everywhere. What's next?
It’s hard not to notice how artificial intelligence (AI) has permeated our daily lives, as more and more machines try to anticipate our next move. AI technologies continue to evolve rapidly, with several ground-breaking developments from both industry leaders and new entrants. While ChatGPT has maintained its lead as the most popular consumer AI product, it is not yet a profitable business, and competition remains wide open for viable business models.
Until recently, AI development found unprecedented success by training ever-larger models on ever-larger datasets. This 'training-time scaling' has propelled hundreds of billions of dollars in capital expenditures to achieve the model sizes necessary to compete at the cutting edge. Researcher Richard Sutton termed this dynamic the 'bitter lesson' because it seemed to imply that human knowledge and cleverness were inferior to simply throwing more data and compute at the problem.
Recent developments have challenged the notion that human ingenuity is no longer necessary. Training-time scaling of AI foundation models appears to have hit a wall due to growing demand for, and scarcity of, new chips, energy and data. Researchers have refocused their efforts on cleverly applying and refining the models they already have.
In 2024, OpenAI released ChatGPT 'o1', a reasoning model that thinks before responding. Under the hood, the existing language model is refined to hold a conversation with itself, working through intermediate steps rather than blurting out the first response that comes to mind. This 'chain of thought' mechanism produced more accurate results on some challenging math and reasoning problems and opened a new pathway for improving accuracy: 'inference-time scaling.' A reasoning model tends to be more accurate the longer it spends thinking about its answer, so it is now possible to produce a better answer by spending more compute after the model is already trained.
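The intuition behind inference-time scaling can be illustrated with a toy simulation (a deliberately simplified sketch, not how o1 actually works): if each independent reasoning attempt is only sometimes right, sampling several attempts and taking a majority vote makes accuracy climb with the compute spent at answer time. The solver, its 70% success rate and the attempt counts below are all illustrative assumptions.

```python
import random
from collections import Counter

def reasoning_attempt(true_answer, rng, p_correct=0.7):
    """One simulated chain of thought: right 70% of the time (an assumed rate)."""
    if rng.random() < p_correct:
        return true_answer
    return true_answer + rng.choice([-2, -1, 1, 2])  # a plausible wrong answer

def think_then_answer(true_answer, n_attempts, rng):
    """Inference-time scaling in miniature: sample several independent
    reasoning chains and return the majority-vote answer."""
    votes = Counter(reasoning_attempt(true_answer, rng) for _ in range(n_attempts))
    return votes.most_common(1)[0][0]

def accuracy(n_attempts, trials=2000, seed=0):
    """Fraction of trials in which the voted answer matches the true answer."""
    rng = random.Random(seed)
    hits = sum(think_then_answer(42, n_attempts, rng) == 42 for _ in range(trials))
    return hits / trials

quick = accuracy(n_attempts=1)   # blurt out the first response
slow = accuracy(n_attempts=15)   # spend more compute 'thinking'
```

Here `quick` lands near the single-attempt rate of 70%, while `slow` climbs well above 90%: the same trained 'model' gives better answers simply because more computation is spent at inference time.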
Another development along these lines was Chinese lab DeepSeek’s release of its R1 reasoning model, a competitor to o1, early in 2025. The release challenged US firms’ dominance in AI in what many commentators called a 'Sputnik' moment.
The R1 model learned to reason by training itself with reinforcement learning, which refines a pre-trained foundation model through simulated interactions that reward accurate answers. The success of this approach created a third scaling pathway, 'post-training scaling,' in which models are refined to become more capable rather than retrained from scratch. DeepSeek also raised the stakes by releasing the model to the public under an open-source license.
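The idea behind reward-driven post-training can be sketched in miniature (the candidate answers, scoring scheme and update rule below are illustrative assumptions, not DeepSeek's actual recipe): start from an indifferent 'pre-trained' policy, sample answers in simulated interactions, reward the accurate one, and nudge the policy toward the reward. The policy ends up strongly preferring the rewarded answer without being retrained from scratch.

```python
import math
import random

CANDIDATES = ["right answer", "wrong answer A", "wrong answer B"]

def sample_answer(scores, rng):
    """Softmax policy: sample an answer with probability proportional to exp(score)."""
    weights = [math.exp(scores[c]) for c in CANDIDATES]
    return rng.choices(CANDIDATES, weights=weights, k=1)[0]

def prob_right(scores):
    """Probability the policy picks the right answer."""
    weights = {c: math.exp(scores[c]) for c in CANDIDATES}
    return weights["right answer"] / sum(weights.values())

def post_train(steps=500, lr=0.1, seed=0):
    """Reinforcement-learning refinement: simulated interactions reward
    accurate answers, and the chosen answer's score is nudged accordingly."""
    rng = random.Random(seed)
    scores = {c: 0.0 for c in CANDIDATES}  # the 'pre-trained' model: indifferent
    for _ in range(steps):
        choice = sample_answer(scores, rng)
        reward = 1.0 if choice == "right answer" else 0.0
        scores[choice] += lr * (reward - 0.5)  # reinforce rewarded choices
    return scores

before = prob_right({c: 0.0 for c in CANDIDATES})  # 1/3: no preference yet
after = prob_right(post_train())                   # strong preference for accuracy
```

The refined policy picks the rewarded answer far more often than the indifferent starting policy, even though nothing was retrained from scratch, which is the essence of post-training scaling.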
These developments suggest that winners will not be determined solely by the size of capital expenditures but by fierce competition among incumbents and new entrants, in which durable leads can be difficult to maintain. Even with significant infrastructure in place, companies will have to allocate resources wisely across all three dimensions of scaling. And even as capital floods into AI, sustainable business models remain an unsolved problem.
Some of AI’s strongest proponents claim the technology will radically reshape the economy and have a far-reaching impact on society. Although it is far from certain whether these promises will be fully realized, AI’s effects are already being felt in industries as varied as search, hardware and education. Competition remains fierce, and even the most well-resourced firms are not guaranteed success.
Investing in AI
In our view, taking a directional bet on any single company or industry presents risks: not every company will feel AI’s impact in the same way. This understanding is the basis of our approach to constructing risk-managed portfolios with the potential to profit from AI.
We ensure that portfolios are diversified across these opportunities, limiting exposure to individual companies and industries, as well as to groups of companies that share common risks. This approach helps mitigate direct exposure to any single winner or loser.
The current state of AI and its applications is constantly evolving, and today’s truths may be obsolete tomorrow. We continually assess new advances in machine learning, integrating effective concepts into our investment process.
Access the full paper: To ubiquity and beyond: Developments in AI (federatedhermes.com)