What is the point of the OSI layers? It feels like some academic came up with them and everyone has been going through mental gymnastics trying to work out how to fit the real world onto them. The proposed new model doesn't seem to make much sense either. How is IP "integrity"?
OSI wasn't just a layering model--it was a protocol suite. The thing is, the OSI stack lost to the TCP/IP stack. That's why it seems tricky to figure out how the "real world" fits into it--TCP/IP networks literally don't fit.
OSI actually developed an entire set of protocols for each of the layers that adhered to the principles of their design. (Or at least they intended to; I don't know if they actually got around to tackling all of the layers.) It is an entire networking stack that competes with TCP/IP, which was very much not designed in such a way and thus does not mesh very well with the OSI model.
The OSI protocol stack never gained much traction. Some of its protocols were repackaged on top of TCP/IP--X.500 was adapted into LDAP, and SSL certificates are derived from X.509. X.400 is the OSI competitor to SMTP, and still sees some niche usage (just not for email). There may be some others floating around in telephone network infrastructure.
Some large IP networks still use IS-IS for routing (IS-IS is OSI's counterpart to OSPF; it runs directly on top of Ethernet rather than being encapsulated in IP, and is seen as easier to deploy).
For a time, in the late 1990s, some regional ISPs effectively ran over X.25 --- they leased shared "backbones" running Frame Relay (which is part of the OSI stack); it's sort of like the phone company running a backbone that you and some of your competitors buy and run IP over for your ISP customers. There are probably still some offices and retail chains that use Frame Relay and thus the X.25 stack.
My handle (twitter, email, whatever) comes from X.25. If you were a kid growing up on the mean streets of dial-up Internet in the 1990s, X.25 was a big deal. :)
X is just the letter prefix identifying the series of ITU-T Recommendations related to “Data networks, open system communications and security.”
You can see a list of all 25 series (“W” is not assigned) and download most of the documents (including Rec. ITU-T A.12, Identification and layout of ITU-T Recommendations, which defines the letters) on the ITU’s website: https://www.itu.int/pub/T-REC
Many people who have never heard of the ITU will certainly be familiar with the H series ("audiovisual and multimedia systems"), of which the video codec H.264 is well known.
To quote the ISO standard 7498 in which the OSI reference model is defined:
"This reference model provides a common basis for the coordination of standards development for the purpose of systems interconnection while allowing existing standards to be placed into perspective within the overall reference model."
It always made sense to me that the OSI model should be used to guide the development of internet standards. I have always doubted the second half of that statement, about understanding existing standards from an OSI perspective and I believe that is the mental gymnastics of which you speak.
Despite the mental gymnastics, out in the real world, my co-workers and I still find ourselves using OSI vocabulary to reason and talk about the networks we're working on.
If I'm told that some random black box is a "layer 3 switch" I already have a basic understanding of what it does and how to use it, even if I'm not familiar with the vendor. That's value for money right there, in my opinion.
Having some unified concept of them is nice for troubleshooting and communicating concepts, especially in network design. It helps you understand what depends on what and, in some cases, what certain protocols are limited by (see: broadcast domains). I personally prefer the 5-layer TCP/IP model to the 7-layer OSI model. To me, the Application, Presentation, and Session layers blur into one.
The OSI Layers originated out of Telcos/PTTs. They more resemble a model of a Global Communications network akin to Videotex[1] than to how things work over the Internet. This becomes apparent when one looks into the history of Videotex and the structural makeup of Telcos/PTTs. Take a look here:
The point of the OSI model is to distribute responsibilities so that people can develop protocols that work together. Otherwise you get almost entirely proprietary protocols where every application using the network has to have 10 different configuration changes to get it to talk to another machine.
Divide and Conquer. They provide abstraction layers that you can individually reason about, individually implement, independently exchange... In the real world, no abstraction is perfect: There will always be "leaks" between the abstractions, optimizations that partially dissolve them etc., but it's a good principle and allows vastly more complexity than without.
The problem with the ISO/OSI model is less how to fit real network protocols into it, it's more that it actually was proposed as a model for real network implementations, but TCP/IP, which does not fit well into the model, grew and displaced most other protocols and became the de facto standard for networking. This happened for multiple complex reasons that are not all technical.
A great thing about the OSI layers, even if what we use doesn't exactly map to them, is that it provides a common pattern for a good way to approach things. Even if a protocol doesn't map perfectly to it, it's a really solid way to build up the abstractions for end-to-end communication.
I've recently been working with LoRaWAN at a low level, and it's a great example of a standard that could have used OSI-style layering to make it suck less for implementers. Lots of the encryption and checksum pieces mix up which layer they're working on. Building a nice clean layer cake for LoRa is a disaster, and I've resigned myself to just having a single big state machine that takes care of everything.
The OSI model is a very useful mental model for troubleshooting networking issues. Having a framework for where in the stack a protocol lies lets you verify and eliminate lower-level protocols as the culprit, working from the bottom up (a toy example follows).
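For example, a toy bottom-up check in Python might look like this (the host and port are hypothetical, and `ping -c` assumes a Linux/macOS box):

```python
# Toy illustration of "verify the lower layers first": check network-layer
# reachability before blaming the application. HOST and PORT are hypothetical.
import socket
import subprocess

HOST, PORT = "example.com", 443   # hypothetical service being troubleshot

# Layer 3: is the host reachable at all? (single ICMP echo via the ping tool)
l3_ok = subprocess.run(["ping", "-c", "1", HOST],
                       capture_output=True).returncode == 0

# Layer 4: does a TCP connection to the service port actually open?
try:
    with socket.create_connection((HOST, PORT), timeout=3):
        l4_ok = True
except OSError:
    l4_ok = False

print(f"L3 reachable: {l3_ok}; L4 (TCP {PORT}) connects: {l4_ok}")
```

If layer 3 already fails, there's no point staring at application logs; that's the whole value of the layered vocabulary.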
The TCP/IP model includes L2 vs L3 vs L4 (i.e. it's not limited to modeling just the TCP/IP portion it is named after); it just doesn't include the rest of the cruft the OSI model does, nor any expectations about how those layers were supposed to tie together (which never happened the way OSI laid things out anyway).
The author, David Dalrymple, used to be a "wunderkind" (got into an MIT PhD program when he was 14, etc.), but lately he disappeared after briefly working for Twitter. I wonder what he is doing now.
This seems to be written by someone who spends a lot of time on (and has a lot of knowledge about) high-level application stuff but lacks an understanding of the existing network technology they are writing about, or of what purposes real-world networking models need to serve. Apologies for the huge block of text, I'm not sure how to format this better.
> Data Link and Physical layers
> For our purposes today, the Data Link and Physical layers are a black box (perhaps literally), to which we have an interface (the “network interface”) which looks like a transmit queue and a receive queue. These queues can store “payloads” of anywhere from 1 to 1280[1] octets (bytes).
1) The physical layer and data link layer almost always have different size and structure
2) Data link layer queues aren't defined by network layer payloads
3) IPv6 specifies 1280 octets as the MINIMUM link MTU (the smallest payload every link must be able to carry), not a maximum.
4) For most of the internet and private systems, ~1500 octets is the standard payload limit, due to the Ethernet standard (see the sketch after this list).
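A quick way to see the distinction (a minimal sketch, Linux-only, assuming an interface actually named "eth0"):

```python
# Minimal sketch: read an interface's configured MTU from sysfs and compare it
# to IPv6's required minimum link MTU. Linux-only; "eth0" is just an example.
from pathlib import Path

IPV6_MIN_LINK_MTU = 1280   # RFC 8200: smallest MTU any IPv6 link may have
TYPICAL_ETHERNET_MTU = 1500

def get_mtu(ifname: str) -> int:
    # Linux exposes per-interface MTU under /sys/class/net/<ifname>/mtu
    return int(Path(f"/sys/class/net/{ifname}/mtu").read_text())

if __name__ == "__main__":
    mtu = get_mtu("eth0")
    print(f"eth0 MTU = {mtu}, IPv6 minimum = {IPV6_MIN_LINK_MTU}, "
          f"typical Ethernet = {TYPICAL_ETHERNET_MTU}")
```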
> We would like a received payload to self-evidently be the same payload which was sent. Although the Data Link layer is supposed to provide such an assurance, various kinds of attacks on the system might invalidate this assumption. Integrity protocols mitigate these attacks:
> - 1. Threat: thermal noise, cosmic rays. Mitigation: checksum/hash. Common implementations: TCP checksum, CRC-32C.
1) TCP checksums are not part of, and do not validate, the data link layer headers.
2) Ethernet already includes the FCS (a CRC) as a form of check, as most data link layers do; the sketch after this list shows how that differs from the TCP checksum.
3) Physical layers also commonly have forms of integrity checking
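To make the distinction concrete, here's a rough sketch of the two different checks (standard-library Python; note that zlib gives plain CRC-32, the polynomial Ethernet's FCS uses, not CRC-32C):

```python
# The two checks being conflated live at different layers and are different
# algorithms: the 16-bit Internet checksum (RFC 1071) used in TCP/UDP/IP
# headers vs. a CRC like Ethernet's frame check sequence.
import zlib

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used in TCP/UDP/IP headers."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"example payload"
print(f"TCP-style checksum: {internet_checksum(payload):#06x}")
print(f"Ethernet-style CRC-32 (FCS): {zlib.crc32(payload):#010x}")
```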
1) I'm not sure where these "common implementations" are coming from on the data link/physical layers; AES encryption via MACsec, or encryption via DTLS, are the standards here.
> A fully implemented Availability layer should provide unicast (deliver to a unique endpoint authenticated by a given public key, wherever it may be), anycast (deliver to nearest endpoint authenticated by a given public key), and multicast (a.k.a. pub/sub: route to all endpoints who have asked to subscribe to a given ID, and provide a subscription method).
1) Multicast sounds great until you consider the billions of devices across an untold number of paths on the internet, and look at the line that started the paragraph: "We would like networked endpoints to be available to receive packets from other endpoints in a way that is robust to unannounced changes in network topology." It's hard enough today to announce and learn unicast paths between network-aggregated endpoints, and the author wants to add multicast paths/topologies to the mix while throwing out the abstraction of the port in favor of service-based networking? (A sketch of today's IP multicast subscription follows this list.)
2) [personal opinion] moving more "work" from ports (which are supposed to be an inside the END NODE thing cough ipv4 NAT cough) into "the network" almost never seems to sell me on being a sustainable/worthwhile tradeoff. The less your layer needs to know to deliver something the more flexible and upgradable communication becomes.
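For contrast, this is roughly what multicast "subscription" already looks like with plain IP multicast (the group and port are hypothetical, and it only works where multicast routing actually exists, which in practice usually means a single LAN):

```python
# Sketch of a receiver "subscribing" to an IP multicast group: it joins the
# group and the network (IGMP plus any multicast-aware routers) handles
# delivery. GROUP and PORT are hypothetical values.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000   # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: the group to join and the local interface (0.0.0.0 = any)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)   # blocks until a group datagram arrives
print(f"received {len(data)} bytes from {sender}")
```

Scaling that join/leave state across the global internet, keyed on public keys instead of addresses, is exactly the hard part.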
> Confidentiality layer
> Ideally, we would like to not transmit any information to anything other than the destination endpoint(s). This ideal is not in general achievable on a public network, but some types of mitigation are possible: ...
1) A layer has to be more than "this is the encryption algorithm we use".
2) Encryption should not be a layer high up the stack; it should be something available at any point and able to be applied multiple times, to allow for different levels of trust or defense in depth (see the sketch after this list).
3) Is there anything the author doesn't like about existing solutions for this layer like IPsec other than AES is most commonly used rather than ChaCha and similar variants?
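As a sketch of what I mean by encryption being usable at any point, and more than once (this uses the third-party `cryptography` package; the keys and layer labels are made up):

```python
# Confidentiality as a reusable primitive rather than a fixed slot in a stack:
# the same AEAD wrap can be applied end-to-end and again hop-by-hop.
# Uses the third-party `cryptography` package; all names here are made up.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def wrap(key: bytes, payload: bytes, label: bytes) -> bytes:
    """Encrypt with a fresh nonce; `label` stands in for whatever header
    that layer would authenticate as associated data."""
    nonce = os.urandom(12)
    return nonce + ChaCha20Poly1305(key).encrypt(nonce, payload, label)

inner_key, outer_key = os.urandom(32), os.urandom(32)

packet = b"application data"
packet = wrap(inner_key, packet, b"end-to-end")   # TLS-like, endpoint to endpoint
packet = wrap(outer_key, packet, b"hop-by-hop")   # IPsec/MACsec-like, per link
print(f"doubly wrapped packet: {len(packet)} bytes")
```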
> Non-Repudiation and/or Repudiation layer
> We would like for a receiver to be sure that a message they receive was sent by a given sender, and we would like for a sender to be sure that a given message was successfully received. Sometimes, we would also like for a receiver to be unaware of the location a message was sent from
1) This layer needs to be fully optional (a signing sketch follows this list), as:
a) many real-time messages don't want confirmation information to bog things down
b) repudiation of origin needs to be optional unless society agrees it's more important than performance (it won't)
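A sketch of what optional origin authentication could look like (third-party `cryptography` package; it's just an Ed25519 signature the sender may or may not attach):

```python
# Origin authentication as an opt-in extra: the sender signs the message with
# Ed25519; a receiver holding the sender's public key can verify it, and a
# receiver that doesn't care simply ignores the signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()
message = b"important message"
signature = sender_key.sign(message)   # attached only when non-repudiation is wanted

# Receiver side: verify against the sender's known public key
try:
    sender_key.public_key().verify(signature, message)
    print("signature verifies; the sender cannot repudiate this message")
except InvalidSignature:
    print("signature check failed")
```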
> Transactions layer
[personal opinion] this needs to be part of the application layer functionality to prevent the ossification of implementations and protocols.
General comment on the topic: the OSI model wasn't meant to be viewed the way it is today, and as a result it spends a lot of time defining how to stratify things that don't need to be stratified in the modern world. One thing I'd like to tell EVERYONE about networking is "stop thinking of network models as a set of 1-dimensional abstractions".
I'm not knowledgeable enough to understand a lot of the complexities or issues with this, but from what I do understand it seems like he's thought through a lot of issues and provided cool solutions for them.