AI’s New Connectivity Paradigm

The heartbeat and nervous system of the AI era

Artificial intelligence (AI) may dominate today’s headlines, but it does not stand on its own. Models and compute often capture the spotlight, yet they are only part of the story. AI operates within a much larger system. And in that system, connectivity serves as the heartbeat that keeps everything moving and the nervous system that allows it all to work together.

Here is the way to think about it: AI needs data for training. Data relies on compute for processing. Compute in turn requires connectivity for data exchange and transfer. When all three work together, AI reaches its full potential. Connectivity is more than a utility. In the AI era, it is the lifeblood that sustains intelligence and the wiring that allows it to function seamlessly.

LOOKING BACK: NETWORKS AS THE CIRCULATORY SYSTEM

The history of the Internet underscores this point. Born as a research experiment in the 1970s and 80s, early ARPANET links carried simple packets that enabled collaboration across labs. Data was scarce, compute rudimentary, and networks were narrowband, yet those early flows proved a new idea: Intelligence could be shared across distance.

The 1990s brought the World Wide Web (WWW), and suddenly, data was everywhere: on web pages, in e-commerce transactions, and through streaming media. Search engines and recommendation systems flourished, but only because connectivity enabled the transfer of bits from servers to compute clusters. Without wide-area networking, information would have remained trapped in silos.

The 2000s layered on two revolutions: cloud computing and the smartphone. Cloud services delivered elastic compute, while smartphones put connectivity in billions of pockets. Social media, ride-sharing, and mobile payments were not just software innovations; they were network-driven, thereby requiring real-time coordination between devices, users, and clouds.

In the 2010s, deep learning took off, powered by GPUs and massive models. But GPUs alone did not make AI practical. What elevated AI from an experiment to a global utility were networks: high-speed interconnects that moved training data across distributed clusters, and mobile broadband that delivered AI-powered services to billions of devices. The design centers of connectivity—including throughput, reach, latency, cost per bit, and reliability—were what allowed the organism to grow.

Dr. Vinton G. Cerf, Chief Internet Evangelist, Google, and Dr. Mallik Tatipamula, Chief Technology Officer, Ericsson Silicon Valley

FROM CIRCULATION TO COORDINATION

Connectivity’s role has expanded from circulation (moving bits) to coordination (enabling reflexes).

The Internet of Things (IoT) in the 2010s highlighted this shift. Billions of sensors became the “nerve endings” of the digital world, streaming telemetry from homes, factories, vehicles, and cities. Yet most IoT devices were passive: They sensed and reported but did not act in real time. This passivity exposed a gap: Connectivity had to provide not just circulation but also neural wiring, essential for interpretation and coordinated response.

Emerging concepts like the Internet of Senses aim to close this loop. By fusing sensing with communication, networks can become context-aware fabrics, transmitting what they detect in real time. Multisensory technologies—haptics, digital olfaction, even brain-computer interfaces—extend communication beyond sight and sound, thereby turning the Internet into a medium of experience rather than just information.

But perception without reasoning is incomplete. Here, AI agents emerge as the next step. Unlike IoT endpoints, agents are not passive. They perceive, reason, and act. Digital agents like copilots, workflow orchestrators, and trading algorithms live entirely in software. Physical agents, such as autonomous vehicles, drones, and industrial robots, bring intelligence into the physical world. Both require connectivity not just as a circulatory system but also as a nervous system, wiring cognition across devices, edge nodes, and clouds.

WHY CONNECTIVITY DEFINES AI’S FUTURE

Today’s AI ecosystem demonstrates this dependency vividly.

At one extreme, foundation models contain trillions of parameters and run across thousands of accelerators in globally distributed data centers. Training such models requires high-bandwidth, low-latency interconnects like InfiniBand, Ethernet with RDMA, or emerging optical fabrics. Without these, multi-week training runs would be impossible.

At the other extreme, edge devices like smartphones, industrial sensors, and medical wearables now carry powerful NPUs and GPUs for on-device inference. Apple’s Neural Engine, Qualcomm’s AI Engine, and Google’s Tensor chips in Pixel phones enable AI agents to run locally. But their usefulness depends on staying in sync with the cloud and with peers through reliable connectivity.

What ties these extremes together is connectivity as fabric, spanning the system end-to-end:

  • Within data centers: Ultra-fast interconnects bind GPUs, TPUs, and accelerators for distributed model training.
  • Across regions and continents: High-capacity optical backbones move vast datasets and inference outputs globally.
  • At the edge: Wired and wireless access networks bring intelligence to people, machines, and environments.
  • Beyond terrestrial limits: Satellites and high-altitude platforms extend reach to underserved or remote regions.

This layered fabric ensures that data flows to compute when needed, compute delivers insights back in time, and intelligence emerges as a system rather than isolated silos. Without connectivity, compute is stranded. With connectivity, intelligence becomes collective—distributed across clouds, edges, and devices worldwide.

TECHNICAL CHALLENGES AHEAD

If connectivity is the heartbeat and nervous system of the AI era, then making it work at scale presents five challenges.

1 Ultra-low latency and determinism. AI tasks like autonomous driving, robotic surgery, and industrial automation require sub-millisecond responsiveness with predictable guarantees. While 5G URLLC is a first step, AI-native networking must integrate sensing, scheduling, and compute coordination far more tightly to ensure end-to-end determinism and real-time decision-making.
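To make the determinism point concrete, here is a back-of-envelope latency budget in Python. Every per-stage number is an illustrative assumption, not a measurement of any real system:

```python
# Back-of-envelope latency budget for a closed-loop AI task
# (e.g., industrial control). All numbers are illustrative
# assumptions, not measurements.

BUDGET_MS = 1.0  # sub-millisecond end-to-end target

# Hypothetical per-stage costs in milliseconds.
stages = {
    "sensor_read":    0.05,
    "uplink_radio":   0.25,  # air-interface transmission
    "edge_inference": 0.40,  # on an edge accelerator
    "downlink_radio": 0.25,
    "actuation":      0.05,
}

total_ms = sum(stages.values())
print(f"end-to-end: {total_ms:.2f} ms (budget {BUDGET_MS} ms)")

# Determinism matters as much as the mean: a single rare
# scheduling stall in one stage blows the whole budget.
jitter_ms = 0.3  # one tail event in the radio scheduler
print(f"worst case: {total_ms + jitter_ms:.2f} ms -> budget missed")
```

The arithmetic illustrates why average latency is not enough: the nominal path fits the budget exactly, so any jitter in any stage breaks the guarantee, which is what end-to-end determinism is meant to prevent.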

2 Bandwidth, fabrics, and data movement. Training trillion-parameter models produces exabytes of traffic, and today’s Ethernet-based interconnects and memory hierarchies cannot keep pace. Accelerators scale faster than I/O, which leaves compute cycles stalled waiting for data. Breaking this bottleneck will require co-packaged optics, silicon photonics, rack-scale integration, and memory disaggregation (e.g., CXL) to deliver multi-terabit-per-second throughput per node and move data as efficiently as it is processed.
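The scale of this bottleneck is easy to sketch. In the toy calculation below, the parameter count, gradient precision, and link speeds are illustrative assumptions chosen only to show the order of magnitude:

```python
# Rough arithmetic for gradient-synchronization traffic when
# training a trillion-parameter model. All figures are
# illustrative assumptions.

params = 1.0e12          # 1T parameters
bytes_per_param = 2      # 16-bit gradients
sync_bytes = params * bytes_per_param  # ~2 TB exchanged per step
# (a ring all-reduce moves roughly 2x the payload; ignored here)

def transfer_seconds(payload_bytes: float, link_gbps: float) -> float:
    """Time to push the payload through one node's network link."""
    return payload_bytes * 8 / (link_gbps * 1e9)

for gbps in (400, 3200):  # today's NIC vs. a multi-terabit fabric
    print(f"{gbps} Gb/s -> {transfer_seconds(sync_bytes, gbps):.1f} s per sync")
```

At 400 Gb/s the exchange takes tens of seconds per step, during which accelerators sit idle; multi-terabit-per-node throughput shrinks that stall by nearly an order of magnitude, which is the motivation for co-packaged optics and rack-scale integration.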

3 Resilience and security. As AI becomes an integral part of infrastructure, connectivity fabrics must be resilient to failures and secure against adversarial interference. Techniques like multipath routing, self-healing meshes, and quantum-safe encryption will be critical.

4 Energy efficiency. From hyperscale data centers to radio access networks, connectivity is energy-intensive. AI-native networks must be designed with energy proportionality in mind, delivering performance without compromising sustainability goals.

5 Orchestration and interoperability. Just as TCP/IP created a common foundation for the Internet, the AI era presents an opportunity to establish open protocols for agent identity, inter-agent communication, and workload orchestration. Today’s orchestration tools, such as Kubernetes for cloud, MANO for NFV, and O-RAN RIC for RAN, operate in silos. Moving forward, AI-native systems can unify these into an end-to-end framework that spans cloud, edge, and devices to ensure seamless interoperability and prevent fragmentation.
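What such an open inter-agent protocol might carry can be sketched as a message envelope. Every field name below is a hypothetical illustration, not an existing standard:

```python
# A minimal sketch of an inter-agent message envelope: identity,
# addressing, and an integrity digest. All field names and the
# "agent://" URI scheme are hypothetical illustrations.

import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class AgentEnvelope:
    sender_id: str     # stable agent identity (cf. DNS names under TCP/IP)
    recipient_id: str
    intent: str        # e.g., "schedule_workload"
    payload: dict

    def digest(self) -> str:
        # Integrity check over the canonical JSON form; a real
        # protocol would use an actual signature scheme.
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

msg = AgentEnvelope("agent://edge/robot-7", "agent://cloud/orchestrator",
                    "schedule_workload", {"gpu_hours": 2})
print(msg.digest()[:16])
```

The point of the sketch is the analogy in the text: just as TCP/IP gave every host an address and a common packet format, agents would need a common notion of identity and a verifiable envelope before cross-silo orchestration can interoperate.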

LESSONS FROM HISTORY

History shows that breakthroughs in compute alone do not unlock progress: It is connectivity that turns isolated advances into global transformations. The supercomputers of the 1980s were powerful but niche, and only when they were networked through the Internet did intelligence begin to scale across the world. The smartphone’s true success also came not from its hardware alone but from the power of always-on connectivity that enabled entire ecosystems of applications and services.

AI stands at a similar moment today. Compute will continue to advance, but its full impact will only be realized when paired with robust, open, and ubiquitous connectivity. With this foundation, AI can grow into a planetary-scale utility: resilient, inclusive, and transformative for society.

CLOSING THE LOOP

When you step back, the pattern is striking:

  • Data without compute is meaningless.
  • Compute without connectivity is stranded.
  • AI without both is nothing more than an idea.

Connectivity has been there from the beginning—carrying packets, enabling mobility, and linking machines. In the AI era, it ensures that intelligence flows freely rather than remaining locked in silos. It synchronizes training across data centers, distributes inference to the edge, and coordinates agents acting in the physical world. In summary, connectivity is not just the foundation of AI. It is the pulse that keeps the system alive, and the nervous system that makes it intelligent.

ABOUT THE AUTHOR

Dr. Vinton G. Cerf is Vice President and Chief Internet Evangelist for Google. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. For his pioneering work in this field as well as for his inspired leadership, Cerf received the A.M. Turing Award, the highest honor in computer science, in 2004.

At Google, Cerf is responsible for identifying new enabling technologies to support the development of advanced, Internet-based products and services. Cerf is also Chairman of the Internet Ecosystem Innovation Committee (IEIC), an independent committee that promotes Internet diversity by forming global Internet nexus points, and was among the global industry leaders honored in the inaugural InterGlobix Magazine Titans List.

Cerf is former Senior Vice President of Technology Strategy for MCI Communications Corporation, where he was responsible for guiding corporate strategy development from the technical perspective. Previously, Cerf served as MCI’s Senior Vice President of Architecture and Technology, where he led a team of architects and engineers to design advanced networking frameworks, including Internet-based solutions for delivering a combination of data, information, voice, and video services for business and consumer use. He also previously served as Chairman of the Internet Corporation for Assigned Names and Numbers (ICANN), the group that oversees the Internet’s growth and expansion, and Founding President of the Internet Society.

Dr. Mallik Tatipamula is CTO at Ericsson, Silicon Valley, with a distinguished 35-year career spanning Nortel, Motorola, Cisco, Juniper, F5 Networks, and Ericsson. He has made fundamental contributions at the intersection of communications and networking, shaping the evolution of telecom networks from 2G to 5G and beyond. He has held visiting professorships at King’s College London, the University of Glasgow, and the University of Edinburgh, strengthening ties between research and practice.

Tatipamula has served as an advisor to several start-ups, including Pensando (acquired by AMD for $1.9B). He has co-authored two books, published over 100 papers and patents, and delivered over 500 keynote and panel presentations. He has been elected to five national academies, including as a Fellow of the Royal Society (FRS). His global honors include three honorary doctorates, the IEEE Communications Society Distinguished Industry Leader Award, Silicon Valley Business Journal CTO of the Year, and induction into the IPv6 Hall of Fame, among others.