After years of intense experimentation, 2026 looks set to be the year AI is judged on what truly works. From how we train AI agents and clear the growing list of AI data center bottlenecks, to whether the AI bubble will finally pop and how soon fully autonomous robots might reach our homes, here are some of the trends and talking points I expect to feature in the AI debate over the course of this year.
1. AI agents will be 'hitting' the RL gym
Expectations for how AI agents will transform enterprise workflows in 2026 are sky-high. But one of the significant challenges the AI community is grappling with is how to train those agents in a safe and cost-effective way.
One answer could very well lie in the rise of reinforcement learning (RL) gyms or RL environments, which are being developed by major AI companies as well as by a growing wave of new start-ups.
RL gyms are dedicated sandboxes or workspaces - essentially simulated environments - in which agents can self-learn how to make decisions through trial and error by performing workflows or a set of tasks over and over (potentially millions of times). Each correct decision is rewarded (or reinforced) much like rewarding an infant as it learns a new skill.
Imagine an AI agent working for an ecommerce store learning when to restock a product and how much to order. Through reinforcement learning in a simulated warehouse, it learns that ordering too early ties up cash - while ordering too late leads to stock-outs and lost sales. By repeatedly trying different ordering decisions and tracking the resulting cost and revenue outcomes, it can learn the ideal restocking pattern to ensure the right inventory levels to meet demand without wasting money.
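As a rough sketch of how such an agent might learn, here is a minimal tabular Q-learning loop on a toy version of the restocking problem. All of the numbers (costs, prices, demand range, order sizes) are illustrative assumptions of mine, not values from any real system:

```python
import random

# Hypothetical toy model: each day the agent chooses an order quantity;
# demand is random. Unsold stock ties up cash; stock-outs lose sales.
ACTIONS = [0, 5, 10]     # possible order quantities (assumed)
HOLDING_COST = 1.0       # per unit left in stock at end of day
STOCKOUT_COST = 4.0      # per unit of unmet demand
PRICE = 3.0              # revenue per unit sold
MAX_STOCK = 20

def simulate_day(stock, order):
    """Receive the order, meet a random day's demand, return new stock and reward."""
    stock = min(stock + order, MAX_STOCK)
    demand = random.randint(0, 10)
    sold = min(stock, demand)
    stock -= sold
    reward = sold * PRICE - stock * HOLDING_COST - (demand - sold) * STOCKOUT_COST
    return stock, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over (stock level, order quantity) pairs."""
    q = {(s, a): 0.0 for s in range(MAX_STOCK + 1) for a in ACTIONS}
    for _ in range(episodes):
        stock = 0
        for _ in range(30):  # a 30-day episode
            if random.random() < eps:
                action = random.choice(ACTIONS)                      # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(stock, a)])   # exploit
            nxt, reward = simulate_day(stock, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update: nudge the estimate towards
            # observed reward plus discounted best future value.
            q[(stock, action)] += alpha * (reward + gamma * best_next - q[(stock, action)])
            stock = nxt
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    # Inspect the learned policy: preferred order size at a few stock levels
    for s in (0, 10, 20):
        print(s, max(ACTIONS, key=lambda a: q[(s, a)]))
```

After enough simulated days, the table encodes a restocking policy: at low stock levels, larger orders score higher because stock-outs are costly, while at high levels smaller orders avoid holding costs.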
The idea is not new. OpenAI introduced the concept in April 2016 with OpenAI Gym, a suite of environments ranging from simulated robots to Atari games.
In fact, RL environments were key to helping DeepMind's AlphaGo famously beat one of the world's top players at the ancient Chinese game of Go: AlphaGo had played millions of games against itself, using reinforcement learning to improve its win rate.
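Gym's lasting contribution was a common interface between agents and environments. The sketch below shows that style of contract on a toy guessing-game environment of my own invention (not one shipped with Gym): reset() returns an initial observation, and step(action) returns an observation, a reward, a done flag and an info dictionary:

```python
import random

class GuessEnv:
    """A toy environment following the classic Gym-style contract:
    reset() -> observation; step(action) -> (obs, reward, done, info).
    Illustrative only - not part of the real Gym library."""

    def reset(self):
        self.target = random.randint(0, 9)   # hidden number to guess
        self.tries = 0
        return 0                             # dummy observation

    def step(self, action):
        self.tries += 1
        hit = (action == self.target)
        done = hit or self.tries >= 10       # end on success or timeout
        reward = 1.0 if hit else 0.0
        return 0, reward, done, {}

# The standard agent-environment loop: observe, act, collect reward.
env = GuessEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = random.randint(0, 9)            # a random (untrained) policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because every environment exposes the same loop, the same training code can be pointed at a simulated warehouse, a robot arm or an Atari game — which is exactly what makes RL gyms attractive for training enterprise agents at scale.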
2. We aren't likely to see fully autonomous robots in our homes (just yet)
In late October 2025, the AI company 1X generated lots of excitement when it started taking pre-orders for NEO, a 5 foot 6 inch humanoid robot designed to take on daily household chores.
1X describes NEO as a home robot with conversational AI capabilities: "NEO takes care of tasks around the house so you can focus on what matters to you."
Before anyone gets too excited, however, there is one important catch.
While it's true that NEO can perform some basic activities out of the box (like opening doors, turning lights on and off and fetching items), more complex tasks require a 'human teleoperator'. This is a remote worker who can see the customer's home "through the eyes of the robot" and temporarily take control of NEO to perform the required tasks.
In effect, early adopters who use NEO are helping to train the underlying AI models.
By operating in the unfamiliar, real-world environments of people's homes under human operator control, the robot learns to perform increasingly complex tasks - bringing a world where we can all benefit from a fully autonomous home robot a little closer.
3. Enterprise AI will move from pilots to scaled AI executions
In summer 2025, MIT released findings from two seemingly conflicting studies on the ROI enterprises are achieving from AI deployments.
The MIT Project NANDA study found that 95% of enterprise AI pilots fail to deliver measurable returns (though this figure has been contested by some researchers). At the same time, research from MIT CISR concluded that enterprises are making significant progress in AI maturity, with the greatest financial impact seen when companies move from pilots to scaled AI. Firms at more advanced stages of AI maturity outperform their industry averages financially.
According to the MIT researchers, AI initiatives fail when companies try to shoehorn generic tools like ChatGPT into the enterprise - tools that cannot easily learn from or adapt to existing workflows.
By contrast, organizations that work with specialized AI vendors and strategic partners succeed around 67% of the time, while internal AI projects succeed about 33% of the time.
To move from pilots to successfully scaled AI, firms need to focus on use cases where AI can deliver clear, measurable value, rather than falling for vibe-based spending on flashy, generic tools. They also have to ensure that workers are empowered and that workflows and systems are adapted to fully capture AI's benefits.
In 2026, this tension will come to a head. Companies that spent 2024-25 running failed pilots will either learn fast or get left behind. Those that innovate around strategic workflows (rather than simply adopting generic AI tools), partner with specialized providers instead of trying to build everything in-house, and rigorously track and measure progress, will stand the best chance of success.
4. Tackling AI bottlenecks will become an even bigger priority
In 2026, one of the defining challenges for AI will remain tackling the bottlenecks that can limit its expansion: the industry must work out how to deliver the power, cooling, PCBs, storage, land, water, and other resources needed to build the data center compute capacity that drives AI growth.
Not only is demand for data center processing power accelerating, but the distribution and type of data centers needed to support AI are likely to change.
Earlier demand was focused on AI model training, which requires high-capacity facilities that can be sited in remote, power-rich areas where grid capacity, land and water are more available.
Over time, the balance is shifting towards more inference-heavy data centers, which have a lower tolerance for latency. This shift means there could be a rise in smaller facilities located at the edge, including in metropolitan areas to reduce latency.
The challenge the AI industry faces is not just how to generate the massive compute power required, but also how to do it sustainably. What is the role for liquid cooling technologies, which are demonstrably more energy efficient than traditional air-cooled systems? And how quickly will clean energy penetration progress, with renewables projected to meet around half of the growth in data center electricity demand?
AI sovereignty is sure to be another prominent factor as countries place greater emphasis on deploying AI using local infrastructure, data, models and talent. This reflects a growing awareness that countries need to protect critical data, boost competitiveness, and reduce dependence on overseas technology providers amid ongoing geopolitical uncertainties.
5. AI bubble concerns will continue (but may not pop yet)
Are we in an AI bubble, with the stocks of the major AI players massively overvalued? This is the question that's been asked 'on repeat' throughout 2025, culminating in a Bloomberg article that went viral among AI industry watchers on social media.
The piece highlights the growing wave of reciprocal investments, partnerships and deals between AI giants - raising concerns that the companies are increasingly being overvalued because of this complex web of interconnected transactions (rather than their value being driven by real market demand).
After OpenAI announced a $300 billion deal with Oracle in September to build out data centers in the US, the company then struck a separate $100 billion investment agreement with Nvidia. Oracle, in turn, is spending billions on Nvidia chips for those facilities, sending money back to Nvidia, a company that has become one of OpenAI's most prominent backers.
Most people who follow the industry now treat AI as overhyped rather than built on nothing. The debate is shifting away from whether there is a bubble and towards a more practical question: what kind of bubble is it, and what would it take for it to deflate? Bloomberg's 'circular deals' narrative captures the central concern: a dense web of investments, partnerships and procurement can push valuations higher even before there is real customer demand or profit to justify them.
That said, a crash in 2026 is far from inevitable.
Some analysts argue the bubble can keep inflating precisely because today's leading AI firms are not purely speculative players; many sit on real, revenue-generating businesses. The bigger risk may be a slower, Cisco-style outcome: enormous, premature infrastructure buildouts that outpace near-term utilization, followed by a sharp reset in valuations rather than a sudden crash.
Even that kind of correction could be 'productive' in the long run - washing out weaker projects while leaving behind durable infrastructure and a small group of winners. In practical terms, 2026 is likely to be the year markets start demanding evidence: measurable ROI, proven monetization and careful spending on infrastructure, not just visionary narratives.

