How Cisco’s new chip links AI data centers thousands of miles apart
As artificial intelligence (AI) continues to scale at unprecedented speeds, the infrastructure that supports it is facing a critical challenge: connecting AI data centers over vast distances without slowing performance. Cisco, the networking giant long known for enterprise routers and switches, is now making a bold play to solve this problem with its latest innovation – the Silicon One P200 chip.
The new chip, integrated into Cisco’s 8223 router, is designed to make multiple AI data centers function as a single, seamless system. This is a significant leap in infrastructure design, enabling companies to train massive AI models across geographically separated facilities while maintaining ultra-low latency and high data throughput.
Why distance matters in AI training
Modern AI models, especially large language models and generative AI systems, require enormous amounts of data and computational power. Training them in a single data center can be inefficient or even impossible. Many companies now distribute workloads across multiple facilities, often hundreds or even thousands of miles apart.

Historically, connecting distant data centers required dozens of networking chips, high power consumption, and complex routing protocols. Cisco claims that the Silicon One P200 replaces the equivalent of 92 older chips, reducing power usage by roughly 65%. In practice, this means AI operators can synchronize data faster and more efficiently, allowing models to scale without adding prohibitive energy costs.
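Cisco’s two headline figures can be combined into a rough back-of-envelope estimate. This is only a sketch: the per-chip wattage below is a hypothetical placeholder, not a published spec; only the 92-chip count and the "roughly 65%" reduction come from Cisco’s claim.

```python
# Back-of-envelope sketch of Cisco's stated figures.
# Only legacy_chip_count and claimed_reduction come from the article;
# the per-chip wattage is an illustrative assumption.
legacy_chip_count = 92
legacy_watts_per_chip = 300  # hypothetical placeholder, not a Cisco spec
legacy_total_w = legacy_chip_count * legacy_watts_per_chip

claimed_reduction = 0.65  # "roughly 65%" power reduction
p200_total_w = legacy_total_w * (1 - claimed_reduction)

print(f"Legacy fabric:  {legacy_total_w:,} W")
print(f"P200-based:     {p200_total_w:,.0f} W")
```

Whatever the real per-chip figure is, the relative saving scales the same way: at a 65% reduction, the consolidated design draws about a third of the legacy fabric’s power.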
The competition heats up
Cisco’s push directly challenges Broadcom, which has dominated the AI networking space with its Jericho4 chip. Broadcom’s solution is optimized for short- to medium-range data center connections – up to about 60 miles – and focuses on high-bandwidth memory and congestion management. Cisco, however, is targeting long-haul connections, a niche increasingly important as AI workloads expand across continents.
Cloud providers and hyperscalers are taking notice. Microsoft and Alibaba are reportedly among the early companies exploring Cisco’s new routers. The technology could also influence the next generation of AI supercomputers, which rely heavily on efficient interconnects to maintain performance across large clusters.
Power efficiency
Energy consumption is a growing concern in AI infrastructure. Many new data centers are located near renewable energy sources, sometimes far from traditional tech hubs. Efficient networking chips like Cisco’s P200 allow operators to place data centers in optimal locations for both cost and sustainability, while still maintaining the high-speed connections needed for AI training.
By reducing the number of chips required and cutting power consumption, Cisco not only improves operational efficiency but also addresses the environmental footprint of large-scale AI training, a topic increasingly in the spotlight among tech leaders and policymakers.
How the P200 works
Cisco’s Silicon One P200 is designed to handle massive bursts of data with minimal latency. It supports high-speed packet switching, intelligent buffering, and advanced routing protocols that keep data flowing efficiently over long distances. Essentially, it acts as the backbone of a “global AI fabric,” letting geographically distributed data centers work in concert as if they were a single, local system.
This is particularly crucial for generative AI models, which require frequent synchronization of model weights and large-scale gradient updates. Even a small delay in these updates can significantly slow training or introduce inconsistencies. Cisco’s approach promises to minimize such bottlenecks.
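The latency argument above can be made concrete with a small sketch. Round-trip time between facilities is bounded below by the speed of light in fiber (roughly 200,000 km/s); the step times and the simple synchronous-step model here are illustrative assumptions, not figures from Cisco.

```python
# Sketch: why inter-data-center latency matters for synchronous training.
# The physics lower bound (light in fiber ~ 2/3 of c) is real;
# the compute times and the step model are illustrative assumptions.
FIBER_KM_PER_S = 200_000  # approximate speed of light in glass fiber

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, ignoring switching delay."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

def step_time_ms(compute_ms: float, sync_ms: float) -> float:
    """Synchronous data parallelism: every step waits for the slowest sync."""
    return compute_ms + sync_ms

rtt = min_rtt_ms(1_600)            # ~1,600 km, about 1,000 miles
local = step_time_ms(100, 0.1)     # same-campus synchronization
remote = step_time_ms(100, rtt)    # cross-country synchronization
print(f"min RTT: {rtt:.1f} ms")
print(f"local step: {local:.1f} ms, remote step: {remote:.1f} ms")
```

Even under this best-case bound, a 1,000-mile link adds roughly 16 ms to every synchronous step, which compounds over millions of training steps. That is why routers that minimize added latency and buffer bursts intelligently matter at these distances.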
Looking ahead
Cisco’s entry into high-end AI networking marks a significant moment in the industry. While Broadcom and others will continue to innovate, the ability to connect AI data centers thousands of miles apart efficiently could become a decisive factor for cloud providers, supercomputing facilities, and enterprise AI operators.
The Silicon One P200 illustrates a broader trend: the AI revolution is not just about algorithms and GPUs. It’s also about the unseen infrastructure that moves data quickly, efficiently, and sustainably across the globe. As AI models grow ever larger, the race for smarter, faster, and longer-distance networking chips is only beginning, and Cisco is staking a major claim.
Vyom Ramani
A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.