Network architectures always directly reflect perceptions of what drives revenue. Many decades ago, the architecture of a telecom network, in a single domain, was simple.
Traffic passed between active switch nodes, within and across domain boundaries. The business model was simple: people made phone calls, so the network was optimized for voice.
These days, a more relevant diagram would focus on the packet network, since that is now how most people interact with “telecom” networks. The core network remains largely the same, with traffic passed between provider domains at packet network gateways or switches.
The legacy fixed network remains, but most of the revenue now flows over IP networks (mobility, enterprise communications, video and other internet apps).
Along with the new architecture, revenue models have changed. Voice still drives revenue in developing markets, but in developed markets voice and messaging are mature or declining revenue sources. Growth now comes from internet access and video entertainment.
Devices now include not only phones but also sensors, PCs and tablets.
The more important change is the separation of control (signaling) from bearer traffic (content, voice, messaging); the separation of transport/access from content sources; and the “openness” of the whole network to third-party access and traffic.
Technologists call this the separation of the control plane (signaling and control) from the data plane (bearer traffic and content).
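To make the distinction concrete, the sketch below models a controller that installs forwarding rules (the control plane) and a switch that only matches packets against those rules and forwards them (the data plane). It is a minimal, hypothetical illustration; the class and field names are invented for this example rather than taken from any particular vendor or SDN API.

```python
# Minimal, hypothetical sketch of control-plane / data-plane separation.
# The controller decides where traffic should go and installs rules;
# the switch only applies those rules to bearer traffic.

class Controller:
    """Control plane: computes and installs forwarding rules."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_route(self, destination_prefix, out_port):
        # Signaling/control traffic: push a rule to every managed switch.
        for switch in self.switches:
            switch.forwarding_table[destination_prefix] = out_port


class Switch:
    """Data plane: forwards packets using rules it was given."""
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}  # destination prefix -> output port

    def forward(self, packet):
        # Prefix lookup kept trivially simple for illustration.
        for prefix, port in self.forwarding_table.items():
            if packet["dst"].startswith(prefix):
                return f"{self.name}: forwarded {packet['dst']} via port {port}"
        return f"{self.name}: no rule for {packet['dst']}, packet dropped"


if __name__ == "__main__":
    controller = Controller()
    edge_switch = Switch("edge-1")
    controller.register(edge_switch)

    # Control plane installs policy; data plane applies it to traffic.
    controller.install_route("10.0.", out_port=3)
    print(edge_switch.forward({"dst": "10.0.0.42"}))
    print(edge_switch.forward({"dst": "192.168.1.1"}))
```

The point of the separation is exactly this: the policy logic can change, move or be opened to third parties without touching the machinery that actually carries the traffic.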
Where once the telecom network was closed, it now is open. Where once the telecom service provider tightly controlled permissible apps and devices, now admission to the network is open to all third parties who comply with the network protocols and provide lawful applications.
What comes next might be different as well. As the Internet Protocol has become the universal next-generation protocol, all networks have become computing networks.
And most computing is now cloud based, both for consumer apps and for a majority of enterprise apps.
That reliance on cloud computing is predicted to grow in the coming 5G and internet of things era.
But something new is coming. Some important cloud-based apps and revenue drivers will require ultra-low latency, meaning centralized cloud computing centers will not work. Instead, computing will have to be done at the edge.
That edge architecture will be built to support new applications and revenue drivers that depend on ultra-low latency more than on bandwidth, even if bandwidth will be in gigabit ranges.
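A rough, back-of-the-envelope calculation shows why centralized data centers cannot meet these targets. Light in optical fiber travels at roughly 200,000 km/s, so propagation delay alone puts a hard ceiling on how far away the serving compute can sit; the latency budgets and distances in the sketch below are illustrative assumptions, not standards figures.

```python
# Back-of-the-envelope sketch of why ultra-low latency pushes compute to the edge.
# Assumes signals in fiber travel at roughly 200,000 km/s (about two-thirds of c).

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond


def round_trip_propagation_ms(distance_km):
    """Round-trip fiber propagation delay, ignoring queuing and processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS


def max_one_way_distance_km(latency_budget_ms):
    """Farthest the server can be if the entire budget went to propagation."""
    return latency_budget_ms * SPEED_IN_FIBER_KM_PER_MS / 2


if __name__ == "__main__":
    # A distant centralized cloud region vs. a nearby edge site (illustrative distances).
    for label, km in [("centralized cloud region", 1500), ("metro edge site", 50)]:
        print(f"{label:>24}: {round_trip_propagation_ms(km):5.1f} ms round trip at {km} km")

    # How close the compute must sit for some illustrative latency targets.
    for budget_ms in [1, 5, 10]:
        print(f"{budget_ms} ms budget -> server within ~{max_one_way_distance_km(budget_ms):.0f} km")
```

Even before any processing or queuing delay, a data center 1,500 km away burns about 15 ms on propagation alone, while a 1 ms budget confines the serving compute to within roughly 100 km of the user.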
The key takeaway is that, starting with 5G, all wide area and local area networks will be built on ultra-low latency capabilities. Bandwidth will be higher, but the key change, in terms of capabilities and implications for network architecture, will be the requirement to support ultra-low-latency apps.