Google DeepMind CEO Demis Hassabis has warned that rapid AI development could reach a significant choke point due to memory chip shortages. Despite recent breakthroughs, including Google’s Gemini 3.1 Pro model outperforming Anthropic’s Claude Opus 4.6 in benchmarks, the hardware supply constraints pose a major challenge.
The shortage of memory chips affects AI data centers that require thousands of GPUs and large-scale computing power to run complex models. This scarcity has driven up prices for electronics, including smartphones, and created a bottleneck for AI companies striving to scale their models.
Google benefits from designing its own Tensor Processing Units (TPUs), which reduces some of its reliance on third-party suppliers like Nvidia. However, Hassabis emphasized that the company remains fundamentally dependent on a limited number of key component suppliers, which restricts its ability to meet demand for Gemini models.
This memory supply constraint is significant because expanding AI capacity hinges on access to high-performance chips. Without easing this bottleneck, further gains in AI performance and deployment could slow markedly, impacting innovation timelines and enterprise adoption.
The risk remains that continued shortages and rising costs could constrain AI model size and speed, while simultaneously straining the global hardware market and the broader AI ecosystem. Industry leaders have voiced concerns that chip production capacity must expand to avert stagnation.
Going forward, the AI industry will be watching chip manufacturing and supply chain stability closely. Efforts to boost production and diversify suppliers will be critical to sustaining AI progress beyond current hardware limitations.