The Future Of The Internet
Jeff Finkelstein, Executive Director - Advanced Technology, Cox Communications
The internet has become part of the daily lives of millions and millions of users. In fact, most of us would have a difficult time getting through our days without it. We use it to communicate, research, play games, watch sports, movies and home-made videos, handle our finances, babysit our kids and pets, shop, order food, and more.
It is interesting that, other than email, none of the ways we currently use the internet was considered when the original developers came up with the idea of creating an international network for communicating with others. At the time, they were interested in creating an electronic version of the post office. What it has become is a way for us to interact with others on many different levels.
However, there is a concern with the current architecture of the internet: services that require low latency or high bandwidth cannot be guaranteed. Current and future applications need a deterministic and consistent network connection between endpoints to deliver the experiences both residential and business customers desire. For example (a short sketch after this list shows how such budgets might be checked):
• Virtual reality requires 25 Mbps or greater throughput while maintaining a motion-to-photon latency of less than 20 ms
• Industrial applications, e.g. machine-to-machine or sensor-to-vehicle communication, require latencies between 25 μs and 10 ms
• Tactile (haptic) applications require approximately 1 ms of latency
• Vehicular networks require between 500 μs and 5 ms latency
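To make these budgets concrete, here is a minimal sketch in Python of how an application's latency and throughput requirements might be checked against a measured path. The numbers simply restate the list above; the structure and function names are illustrative only and are not drawn from any standard or vendor API.

# Illustrative latency/throughput budgets restated from the list above.
# Values are in milliseconds and Mbps; names are for illustration only.
BUDGETS = {
    "virtual_reality": {"max_latency_ms": 20.0, "min_throughput_mbps": 25.0},
    "industrial":      {"max_latency_ms": 10.0, "min_throughput_mbps": None},  # 25 us - 10 ms
    "tactile_haptic":  {"max_latency_ms": 1.0,  "min_throughput_mbps": None},
    "vehicular":       {"max_latency_ms": 5.0,  "min_throughput_mbps": None},  # 500 us - 5 ms
}

def path_meets_budget(app: str, measured_latency_ms: float,
                      measured_throughput_mbps: float) -> bool:
    """Return True if a measured path satisfies the application's budget."""
    budget = BUDGETS[app]
    if measured_latency_ms > budget["max_latency_ms"]:
        return False
    minimum = budget["min_throughput_mbps"]
    if minimum is not None and measured_throughput_mbps < minimum:
        return False
    return True

# Example: a 35 ms, 100 Mbps path fails the VR budget on latency alone.
print(path_meets_budget("virtual_reality", 35.0, 100.0))  # False
print(path_meets_budget("virtual_reality", 12.0, 50.0))   # True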
The internet as we know it today has no guarantee of packet delivery, let alone the deterministic latency required by bleeding-edge applications.
So then, what does the internet need to look like for future applications? First, let's start with the applications themselves. New media has large bandwidth requirements, driving a shift from Mbps to Gbps to Tbps. Services need high-precision timing support, deterministic delivery, and guaranteed rather than merely best-effort service. The networks themselves need to span not only the planet but space as well, while being federated and trustworthy.
New media, e.g. holograms, could potentially use up to 19.1 gigapixels, which requires approximately 1 Tbps of throughput. A hologram the size of an average human (77 x 20 inches) uses around 4.6 Tbps.
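As a rough illustration of where figures like these come from, the back-of-the-envelope sketch below estimates hologram throughput from pixel count, bit depth, frame rate, and compression. Only the 19.1 gigapixel figure comes from the text; the bit depth, frame rate, and compression ratio are assumptions chosen for illustration.

# Back-of-the-envelope hologram throughput estimate.
# The 19.1 gigapixel figure comes from the text; bit depth, frame rate,
# and compression ratio are illustrative assumptions only.
def hologram_tbps(gigapixels: float, bits_per_pixel: int = 24,
                  frames_per_second: int = 30,
                  compression_ratio: float = 14.0) -> float:
    """Approximate compressed throughput in Tbps for a holographic stream."""
    raw_bps = gigapixels * 1e9 * bits_per_pixel * frames_per_second
    return raw_bps / compression_ratio / 1e12

print(round(hologram_tbps(19.1), 2))  # roughly 1 Tbps with these assumptions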
When it comes to video, we are seeing a shift from quantitative to qualitative digital information. As we review the network performance requirements of the next internet, a recurring leitmotif emerges: latency and jitter.
So why do latency and jitter matter? Simply put, time is money and money is time. For example, a one-second slowdown per web page could cost Amazon $1.6 billion in sales a year. On Wall Street, a millisecond of delay could cost $100 million per year. In virtual and augmented reality, a latency of 20 ms or more results in dizziness. These examples do not begin to address the requirements of future technologies like remote surgery, cloud PLC, or intelligent transportation systems. Part of our challenge is developing a better understanding of what the requirements of future applications will be and how we can manage them at the network layer.
We need a new way to handle the requirements of time-sensitive current and future applications. Today's networks are designed for best-effort traffic and depend on statistical multiplexing to scale network interfaces. The dilemma is that statistical multiplexing is not sufficient to meet the needs of these applications. We need a new way of handling packet delivery in absolute time. We need new user-network interfaces (UNI), reservation signaling (such as RSVP), a new forwarding paradigm, intrinsic self-monitoring and self-correcting OAM, business agreements between interconnection providers, and a business model for monetizing the network end-to-end that fairly distributes payments for traffic transit.
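To give a feel for what a time-aware reservation at such a user-network interface might carry, here is a minimal sketch. The data structure, field names, and admission check are hypothetical; no standard API for this exists today, and nothing below is drawn from RSVP or any other existing protocol.

from dataclasses import dataclass

# Hypothetical reservation a future UNI might accept; field names are
# illustrative only and do not correspond to any existing protocol.
@dataclass
class DeterministicReservation:
    source: str                # endpoint identifiers
    destination: str
    min_bandwidth_mbps: float  # guaranteed throughput floor
    max_latency_ms: float      # absolute one-way latency bound
    max_jitter_ms: float       # bound on latency variation
    start_time: str            # ISO 8601 window in which the guarantee applies
    end_time: str

def admit(reservation: DeterministicReservation,
          available_bandwidth_mbps: float, path_latency_ms: float) -> bool:
    """Toy admission check: accept only if the path can honor the guarantee."""
    return (available_bandwidth_mbps >= reservation.min_bandwidth_mbps
            and path_latency_ms <= reservation.max_latency_ms)

# Example: a hypothetical remote-surgery flow asking for 50 Mbps within 5 ms.
request = DeterministicReservation("cam-01", "surgeon-console", 50.0, 5.0, 1.0,
                                   "2030-01-01T09:00:00Z", "2030-01-01T11:00:00Z")
print(admit(request, available_bandwidth_mbps=100.0, path_latency_ms=3.0))  # True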
Today's switches likely cannot handle the latency and jitter requirements of tomorrow's applications. We need new silicon for switches that can understand the new protocols and deliver the performance these new requirements demand. Today's routers may have the ability to differentiate services to the level required, but without a better understanding of what those requirements are, it is difficult to determine whether changes will be needed.
We have not even begun to discuss the possibility of a future network paradigm that includes NTNs (non-terrestrial networks), i.e. networks that extend into space.
Conclusion
This has been a brief summary of today's network versus the future state of the new internet.
There is much work to be done if we want to get ahead of the future requirements of the internet. A number of working groups are beginning discussions on these topics and more. Now is the time to get involved and have input into the definition and creation of the future internet.