Akamai has announced the deployment of thousands of NVIDIA Blackwell GPUs to power its distributed AI inference platform across more than 4,400 locations worldwide. The deployment advances Akamai's strategy of delivering AI inference at the edge through optimized local GPU clusters on its global network.

The platform routes AI inference workloads to nearby GPU clusters, delivering up to 2.5x lower latency than centralized cloud alternatives. Akamai claims this localized approach can also cut AI inference costs by as much as 86% compared with traditional hyperscale infrastructure.
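To make the routing idea concrete, here is a minimal sketch of nearest-cluster selection, assuming a handful of hypothetical edge sites and using geographic distance as a crude proxy for network latency. The cluster names, coordinates, and selection logic are invented for illustration and do not describe Akamai's actual platform or APIs.

```python
# Illustrative sketch only: route an inference request to the closest of
# several hypothetical edge GPU clusters. All names and coordinates are
# invented for the example, not drawn from Akamai's platform.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class EdgeCluster:
    name: str
    lat: float
    lon: float


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in km, used here as a rough latency proxy."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))


def pick_cluster(client_lat: float, client_lon: float, clusters: list[EdgeCluster]) -> EdgeCluster:
    """Choose the geographically nearest cluster (shorter distance ~ lower RTT)."""
    return min(clusters, key=lambda c: haversine_km(client_lat, client_lon, c.lat, c.lon))


if __name__ == "__main__":
    clusters = [
        EdgeCluster("fra-edge", 50.11, 8.68),       # Frankfurt (hypothetical)
        EdgeCluster("iad-central", 38.95, -77.45),  # N. Virginia (hypothetical)
        EdgeCluster("sin-edge", 1.35, 103.82),      # Singapore (hypothetical)
    ]
    # A client in Paris lands on the Frankfurt edge cluster rather than a
    # distant centralized region, which is the latency win the article cites.
    chosen = pick_cluster(48.86, 2.35, clusters)
    print(f"Routing inference request to: {chosen.name}")
```

In practice an edge platform would weigh measured round-trip times, GPU availability, and load rather than raw distance, but the principle is the same: serve the request from the closest viable cluster instead of a distant centralized region.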

The deployment underscores Akamai's ongoing push to scale its AI inference ecosystem, building on earlier launches such as Akamai Cloud Inference and partnerships with firms such as VAST Data. The goal is to improve AI performance by minimizing data egress and increasing responsiveness at the network edge.

Despite the strategic move, Akamai's stock slipped 0.76%, contrasting with gains among other software infrastructure companies the same day. Historically, Akamai's AI-related announcements have produced modest, mostly positive share price movements.

Going forward, market watchers will focus on how effectively Akamai's edge AI platform competes with hyperscale cloud providers and whether its cost and latency advantages can drive broader enterprise adoption. As the industry shifts toward edge inference, deployments like this one are a critical test of whether distributed AI infrastructure can scale.