How Low-Level Protocols Are Changing the Internet


As you read this, data packets are coming and going from the laptop, phone, or PC you are using to do so. The magic of your internet connection rests on a set of protocols such as TCP, IP, UDP, SSL/TLS, DNS, and HTTP/HTTPS.

All those acronyms define the way we communicate over the Internet, but the evolution of the network of networks has forced these protocols to change. They have done so and, above all, they will keep doing so to guarantee two things: speed and security.

 

What protocols do 3.5 billion Internet users use?

In 1998, the Internet had 50 million users accessing services provided by 25 million servers (mainly web and e-mail). That year, as Vinton Cerf has recounted, saw the birth of ICANN (Internet Corporation for Assigned Names and Numbers), the organization in charge of coordinating and managing the gigantic database that, among other things, encompasses all the internet domains users visit daily.
Twenty years ago things were very different, of course. Broadband connections were a utopia for most users, who were just starting to catch the fever for those famous beeps made by the modems they used to connect to the Internet. The protocols that had been adequate in that tumultuous time before the dotcom crash would soon have to face new challenges.

The biggest of all was the explosive growth of a network that became the new fabric of worldwide communications. The mobile revolution piled on top of broadband connections, and that suddenly left the protocols that Bob Kahn and Vinton Cerf had begun to define in 1974 with their ‘A Protocol for Packet Network Intercommunication’ (PDF) falling somewhat short.

In fact, virtually all the protocols that ended up being used on the network of networks needed improvements. It happened with the application protocols (DNS, FTP, TLS/SSL, HTTP, IMAP4, POP3, SIP, SMTP, SNMP, SSH, Telnet, RTP), the transport protocols (TCP, UDP), the internet-layer protocols (IPv4, IPv6, ICMP, IGMP), and the link-layer protocols (ARP). Many more emerged around them, but in every case the same thing happened: they fell short, became outdated, or both.

No wonder: according to the ITU (PDF), in 2017 our planet, with 7.3 billion people on it, already used the Internet in an astonishing proportion: 84.4% of the population of developed countries was connected. The digital divide persists, though: only 42.9% of people in developing countries and 14.7% in the least developed countries (LDCs) have access to the internet.

The Internet grows, and protocols grow with it

The truth is that when the Internet began to conquer our world, it did so thanks to protocols that were logically designed for that era and those needs. The needs, as we were saying, changed, and those protocols had to adapt.

That is how SSL connections ended up migrating (mostly) to the TLS protocol, how the HTTP protocol added new headers and methods and is gradually migrating (we are still at it) to HTTPS, and how the important DNS protocol gained a secure version, DNSSEC.

As explained at APNIC, those changes have been important, but there are several newer protocols whose arrival is posing important changes to the structure of the Internet. There are three main reasons behind these new alternatives:

1) Performance limitations: structural problems in the transport and application protocols used on the internet mean our networks are not as efficient as they could be. Users end up suffering from issues such as latencies that can ruin the user experience (or our online gaming sessions, for example), and these new protocols put a strong focus on improving the performance of the network.
2) Customization of protocols: companies and organizations have adapted these protocols to their own needs, something the standards allow but which has made the widespread rollout of standardized changes to those same protocols difficult, even when the tweaks were well intentioned. It happens with HTTP proxies that try to compress the data being transmitted but end up complicating the use of newer, more general compression techniques, or with the optimization of TCP in intermediaries (the so-called middleboxes, such as firewalls or NATs) that can limit the transport options available to communications.

3) Encryption of communications: the threats to privacy that came to light after the publication of the documents leaked by Edward Snowden. That was the trigger for a growing effort by companies to encrypt their communications to protect their users and customers, and it has also been a fundamental part of this change in protocols, which had been proposing improvements for some time but will go even further in the future.

The protocols that are changing, and how they are doing it

HTTP/2


The heir to the throne of the HTTP protocol (Hypertext Transfer Protocol), its natural successor, is HTTP/2. We have been living with this new version for some time, and curiously the project started in 2012 from the work Google had done on its SPDY protocol.
One of the fundamental ideas of HTTP/2 is the multiplexing of requests over a single TCP connection, which prevents requests from queuing up in the client and blocking one another. This makes much better use of each connection, and thanks to massive support in web browsers and servers it is a protocol that already enjoys wide adoption.

This advantage is combined with other equally important ones, such as the fact that HTTP/2 uses encryption by default (it is not strictly mandatory, but in practice it runs over TLS 1.2 or later), which among other things avoids interference from intermediaries that assume HTTP/1.1 is being used. It is not a from-scratch rewrite of the current HTTP, and many of the methods and semantics are preserved, which has greatly eased the transition.
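
As a small illustration, here is a minimal sketch of that multiplexing idea, assuming Python with the third-party httpx library installed with its HTTP/2 extra (pip install "httpx[http2]"); the URLs are just placeholders:

import asyncio
import httpx

# Placeholder resources that would normally be fetched when loading a page.
URLS = [
    "https://example.com/",
    "https://example.com/style.css",
    "https://example.com/app.js",
]

async def fetch_all() -> None:
    # One AsyncClient keeps a single connection to the host; with
    # http2=True the concurrent requests below travel as separate
    # streams on that connection instead of queuing behind each other.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(u) for u in URLS))
        for response in responses:
            print(response.url, response.http_version, response.status_code)

asyncio.run(fetch_all())

With HTTP/1.1 the same fetches would need several TCP connections or would have to wait for one another; over HTTP/2 they share a single connection as independent streams.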

 

TLS 1.3

Unlike HTTP/2, the TLS protocol (Transport Layer Security) in its version 1.3 is still being finalized. The draft was published in early March 2018, and its standardization is expected to be completed soon. That has not stopped some services and applications from already supporting it.

Although the numbering does not make it obvious, TLS 1.3 is an especially important release that would almost deserve a rounder version number, perhaps TLS 2.0. Be that as it may, this version embraces so-called ephemeral keys.

That makes life much harder for a potential attacker trying to intercept and decrypt a communication. The static keys used today have the problem of being precisely “perennial”, but with ephemeral keys that problem disappears: they expire and change with every new connection, which makes the key-exchange process far more dynamic and much harder for that potential attacker to break.
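
To see what that looks like in practice, here is a minimal sketch using only Python's standard ssl and socket modules (it assumes a Python build whose OpenSSL supports TLS 1.3, and the host name is just an example) that refuses anything older than TLS 1.3, whose handshake always relies on ephemeral key exchange:

import socket
import ssl

HOST = "example.com"  # placeholder host

context = ssl.create_default_context()
# Reject TLS 1.2 and older: a TLS 1.3 handshake always uses ephemeral
# keys, so past sessions cannot be decrypted if a key leaks later.
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated version:", tls.version())   # e.g. "TLSv1.3"
        print("Cipher suite:", tls.cipher()[0])        # e.g. "TLS_AES_256_GCM_SHA384"

If the server cannot speak TLS 1.3, the handshake simply fails instead of silently falling back to a weaker, static key exchange.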

QUIC

As with HTTP/2, this protocol is based on an initial Google project that is now overseen by the IETF (Internet Engineering Task Force) under the name iQUIC. One of its keys is running HTTP over UDP (User Datagram Protocol) instead of over TCP: TCP is an “ordered” protocol, whereas UDP is a far more carefree alternative that, for example, discards all the error control that TCP devotes so much effort to.

With UDP, packets are sent to the receiver without further ado, and the sender does not wait for the receiver to confirm whether they arrived: it keeps sending no matter what, which means there is no guarantee that the receiver gets every packet. The protocol is often used in streaming broadcasts and even in online gaming, and Google has already built support into Chrome.
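
That “fire and forget” behaviour is easy to see in a tiny sketch based only on Python's standard socket module (the address and port are placeholders, and nothing needs to be listening on them):

import socket

DESTINATION = ("127.0.0.1", 9999)  # placeholder address; no listener required

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    # sendto() hands the datagram to the network and returns immediately:
    # there is no acknowledgement, no ordering and no retransmission, so
    # the sender never learns whether the packet was delivered.
    sender.sendto(f"datagram {i}".encode(), DESTINATION)
sender.close()

QUIC keeps this lightweight datagram transport underneath but adds its own reliability, ordering, and encryption on top, which is how it can offer TCP-like guarantees without TCP's limitations.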

Another key feature of QUIC is that it is encrypted by default. QUIC's own structure poses an odd headache for operators, since it makes it very difficult to estimate things like the RTT (Round Trip Time), a parameter operators use to analyze and evaluate the quality and performance of their networks.

DoH

One of the current problems with the DNS protocol is that its design favors those who want to impose certain policies. Studies like this one show how operators and large organizations use, or can use, that structure to, among other things, “hijack” communications and manipulate or spy on them.


To correct that problem, what may be the most notable of all these protocols has emerged: DoH (DNS over HTTPS), an alternative that carries DNS traffic inside an HTTPS connection, removing the discriminators that make things like the hijacking of communications we just mentioned possible.

This would prevent governments and organizations from easily blocking entire domains. Work on this protocol is still at an early stage, and here once again certain organizations, companies, and governments may not look kindly on a change that would limit their control over these networks.
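
To make the idea concrete, here is a minimal sketch of a DoH lookup using only Python's standard library; it assumes Cloudflare's public resolver at https://cloudflare-dns.com/dns-query and its JSON interface (any DoH-capable resolver would do), and the queried domain is just an example:

import json
import urllib.parse
import urllib.request

RESOLVER = "https://cloudflare-dns.com/dns-query"   # assumed public DoH resolver
query = urllib.parse.urlencode({"name": "example.com", "type": "A"})

request = urllib.request.Request(
    f"{RESOLVER}?{query}",
    headers={"Accept": "application/dns-json"},  # ask for the JSON answer format
)

with urllib.request.urlopen(request) as response:
    answer = json.load(response)

# The lookup travels inside ordinary HTTPS traffic, so an intermediary
# only sees a connection to the resolver, not which domain was resolved.
for record in answer.get("Answer", []):
    print(record["name"], record["type"], record["data"])

Because the query is just another HTTPS request, selectively blocking or tampering with it is far harder than with classic cleartext DNS on port 53.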

Protocols against stagnation

The work behind all these protocols is, as we said, what will allow the Internet to keep evolving and adapting to new times and, of course, to new needs.


The threats of stagnation come from those who adapt and customize these protocols for their own use (ahem, AMP, ahem). When a protocol cannot evolve because its deployment in certain scenarios “freezes” its ability to adapt, it is said to have ossified.

The TCP protocol is a good example of this, as APNIC explains. It is so widespread, across so many devices and intermediaries that have adapted it to their needs (blocking packets carrying unrecognized TCP options, custom congestion-control tuning), that it is hard for the standard protocol to evolve easily.

Add to that the new demands around privacy and the performance of online communications, communications that rely on these protocols and must continue to guarantee interoperability. Change is always hard, but if we want the internet to keep growing with us, we will have to accept it. And so will operators, organizations, companies, and governments.
