There's a book out there that points to my website. I wish they had just mirrored it. My website has been dead for a few months, and some O'Reilly book has a dead link.
I just now realised that if the web required two-way links there would be no way to put them into books!
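To make my own realisation concrete: a one-way link is just a string naming its target, which is exactly why it can be printed in a book; a two-way link requires writing state into the target, which a printed page (or a dead website) can never do. A toy sketch of the difference (the Page class and register_link helper are made up purely for illustration):

```python
# A one-way link is just a string; it can live anywhere, including ink on paper.
one_way_link = "http://example.com/my-page"

# A two-way link needs mutable state on BOTH ends. A toy in-memory model:
class Page:
    def __init__(self, url):
        self.url = url
        self.backlinks = []  # the target must record who points at it

def register_link(source: Page, target: Page):
    # Creating the link requires write access to the *target* --
    # something a printed book cannot provide.
    target.backlinks.append(source.url)

book_page = Page("urn:isbn:made-up-example:p42")  # a printed page is read-only
site_page = Page("http://example.com/my-page")
register_link(book_page, site_page)  # only works while the target is alive and writable
print(site_page.backlinks)           # ['urn:isbn:made-up-example:p42']
```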
Some context:
Back when the www was first released, the most common criticism was the lack of back links. This is such a stupid and obvious deficiency that it really wasn't worth even looking into a system that was an obvious stillbirth. It wasn't just the "experts" saying this, but so many of them did, because it was just so dumb. So you've probably never heard of this "world wide web" thing -- not only were there only one-way links, but it had its own homegrown markup language dialect, and instead of using an ordinary protocol like FTP or even gopher it pointlessly used its own http protocol.
(Also back then there was this research protocol called TCP/IP, which was another waste of time given that the OSI protocol stack was poised to dominate the networks just as soon as a working one was written. I wonder what the modern equivalents are.)
"The Book": The Elements of Networking Style: And Other Essays & Animadversions of the Art of Intercomputer Networking, by
M. A. Padlipsky (1985)
The World's Only Known Constructively Snotty Computer Science Book: historically, its polemics for TCP/IP and against the international standardsmongers' "OSI" helped the Internet happen; currently, its principles of technoaesthetic criticism are still eminently applicable to the States of most (probably all) technical Arts -- all this and Cover Cartoons, too, but it's not for those who can't deal with real sentences.
Standards: Threat or Menace, p. 193
A final preliminary: Because ISORM is more widely touted than TCP/IP, and hence the clearer present danger, it seems only fair that it should be the target of the nastier of the questions. This is in the spirit of our title, for in my humble but dogmatic opinion even a good proposed Standard is a prima facie threat to further advance in the state of the art, but a sufficiently flawed standard is a menace even to maintaining the art in its present state, so if the ISORM school is wrong and isn't exposed the consequences could be extremely unfortunate. At least, the threat / menace paradigm applies, I submit in all seriousness, to protocol standards; that is, I wouldn't think of being gratuitously snotty to the developers of physical standards -- I like to be able to use the same cap to reclose sodapop bottles and beer bottles (though I suspect somebody as it were screwed up when it came to those damn "twist off" caps) -- but I find it difficult to be civil to advocates of "final," "ultimate" standards when they're dealing with logical constructs rather than physical ones. After all, as I understand it, a fundamental property of the stored program computer is its ability to be reprogrammed. Yes, I understand that to do so costs money and yes, I've heard of ROM, and no I'm not saying that I insist on some idealistic notion of optimality, but definitely I don't think it makes much sense to keep trudging to an outhouse if I can get indoor plumbing . . . even if the moon in the door is exactly like the one in my neighbor's.
Appendix 3, The Self-Framed Slogans Suitable for Mounting
On the occasion of The Book's reissuance, Peter Salus wrote a review in Cisco's Internet Protocol Journal which included the following observations:
Padlipsky brought together several strands that managed to result in the perfect chord for me over 15 years ago. I reread this slim volume (made up of a Foreword, 11 chapters (each a separate arrow from Padlipsky's quiver) and three appendixes (made up of half a dozen darts of various lengths and a sheaf of cartoons and slogans)) several months ago, and have concluded that it is as acerbic and as important now as it was 15 years ago. [Emphasis added] The instruments Padlipsky employs are a sharp wit (and a deep admiration for François Marie Arouet), a sincere detestation for the ISO Reference Model, a deep knowledge of the Advanced Research Projects Agency Network (ARPANET)/Internet, and wide reading in classic science fiction.
In a lighter vein, The Book has been called "... beyond doubt the funniest technical book ever written."
Also, thanks a lot for the reference; strangely, this is the first time I've heard of this book! However, I have to most strongly disagree with the claim that "three layers is enough", and you'll hopefully come to understand why after checking the above comment. Which, by the way, DOESN'T speak in favor of OSI AT ALL - nor in favor of IP, for that matter. Two brief quotes from over there:
"[…]
This does not mean that we should be doing OSI. Good grief, no. …
[…]
————
[…]
[22] Someone will ask, What about IPv6? It does nothing for these problems but make them worse and the problem it does solve is not a problem.
Your timeline’s all wrong. TCP/IP and the Internet built on it were already well established by the time the web was born.
There was nothing particularly special about HTTP or HTML, or even the concept of the web. What made it a success was the availability of a reference server implementation and, more importantly, a browser. It was easy to try it out, see the value, and get up and running with your own server if you had something to publish.
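To underline how low that bar was, and still is: today the same "get up and running" experience is a few lines of stock Python. A minimal sketch using only the standard library's http.server, serving whatever directory you run it from:

```python
# Minimal static web server using only the Python standard library.
# Serves the current directory on http://localhost:8000/
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 8000), SimpleHTTPRequestHandler)
print("Serving on http://localhost:8000/ -- Ctrl-C to stop")
server.serve_forever()
```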
Discoverability was a problem in the early days. There were printed catalogs of websites! Backlinks might have helped, but clearly were not a fundamental requirement for the web’s success.
The fact that TCP (and its own institutional infrastructure) was already established is what made the whole OSI network effort even more enjoyably absurd. It was the last gasp of Big IT trying to take over the crazies. Most amusingly to me, it seemed only to be discussed in enterprise contexts and Very Important IT Journals. Such people were officially committed to deployment, while their own people were busy getting stuff done. IIRC the first nail in the coffin was the US military ignoring the naked emperor and officially deciding to stick with TCP. But by that time most people with real work to do had already ignored the whole OSI effort.
> Discoverability was a problem in the early days.
Yeah, I remember being at a conference at which a smart person (actually a smart person, no snark) said that discoverability was doomed, since indexing the web would require keeping a copy of everything, which, of course, is completely impossible. And we all nodded, because indeed, that did make sense. And about six months later, AltaVista launched and did exactly that supposedly impossible thing.
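For what it's worth, the trick AltaVista (and everyone since) used is conceptually just an inverted index: you crawl everything once, but what you serve queries from is a map from words to the pages containing them. A toy sketch, with made-up pages standing in for a crawl:

```python
from collections import defaultdict

# Made-up stand-ins for crawled pages: URL -> page text.
pages = {
    "http://example.com/a": "the world wide web has one way links",
    "http://example.com/b": "gopher and ftp predate the web",
}

# Inverted index: word -> set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(sorted(index["web"]))  # both URLs match the query "web"
```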
You both get so much of this story so utterly… /not even/ __quite__ wrong, but more importantly, you leave so much detail out that, if I didn't presume better (which I do! It would seem rather paranoid if I didn't.), I'd suspect lying by omission. All of this, including the story to follow, makes me—and I don't say this for exaggeration purposes, it really does have an emotional impact—very sad, although it doesn't surprise me; barely anyone realizes the true technological horror lurking deep in the history of the 'Internet'.
Please, consider looking A BIT more at the history of TCP/IP:
"Recently, Alex McKenzie published an anecdote in the IEEE Annals of the History of Computing on the creation of INWG 96, a proposal by IFIP WG6.1 for an international transport protocol. McKenzie concentrates on the differences between the proposals that lead to INWG 96. However, it is the similarities that are much more interesting. This has lead to some rather surprising insights into not only the subsequent course of events, but also the origins of many current problems, and where the solutions must be found. The results are more than a little surprising."
And here, a rather lengthy excerpt from later in the paper, as I suspect a lot of people might presume that the paper would go for some points it definitely DOESN'T go for:
"[…]
This does not mean that we should be doing OSI. Good grief, no. This only implies that the data OSI had to work with brought them to the same structure INWG had come to.[10] OSI would have brought along a different can of worms. OSI was the state of understanding in the early 80s. We have learned a lot more since.[11] There was much unnecessary complexity in OSI and recent insights allow considerable simplification over even current practice.
OSI also split the addresses from the error and flow control protocol. This creates other problems. But the Internet’s course is definitely curious. Everyone else came up with an internet architecture for an internet, except them. These were the people who were continually stressing that they were building an Internet.
Even more ironic is that the Internet ceased to be an Internet on the day most people would mark as the birth of the Internet, i.e. on the flag day January 1, 1983 when NCP was turned off and it became one large network.
It was well understood at the time that two levels of addressing were required. This had been realized in the ARPANET when the first host with redundant network connections was deployed in 1972. The INWG structure provided the perfect solution. It is clear why the Internet kept the name, but less clear why they dropped the Network Layer. Or perhaps more precisely, why they renamed the Network Layer, the Internet Layer. Did they think, just calling it something different made it different?
[…]
————
[…]
[10] There was little or no overlap between SC6/WG2 and INWG.
[11] And if the politics had not been so intense and research had continued to develop better understanding, we would have learned it a bit sooner.
[…]
[22] Someone will ask, What about IPv6? It does nothing for these problems but make them worse and the problem it does solve is not a problem.
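To make the paper's "two levels of addressing" point a bit more concrete: a node needs a stable name that is independent of its (possibly several) points of attachment, otherwise a multihomed host looks like two unrelated hosts and failover breaks everything above it. A toy sketch of the distinction (all names and addresses are invented for illustration):

```python
# Two-level addressing, toy model: stable node names mapped to one or more
# point-of-attachment addresses. With only one level (the attachment address
# IS the identity, as with a bare IP address), a multihomed host appears to
# be two different hosts.
node_table = {
    "host-a": ["net1.7", "net2.3"],  # multihomed: two attachments, one identity
    "host-b": ["net1.9"],
}

def route_to(node_name: str, up: set) -> str:
    """Pick any live attachment point for a node; its identity survives failures."""
    for addr in node_table[node_name]:
        if addr in up:
            return addr
    raise ConnectionError(f"{node_name} unreachable")

print(route_to("host-a", up={"net2.3"}))  # net1.7 is down; host-a is still reachable
```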
http://rina.tssg.org/ (I find it rather strange that the RINA folks and the GNUnet folks each seem to pull their own thing instead of working together; it very much seems like a—hopefully NOT inevitable—repeat of the very thing John Day describes in the slides & articles above…)
Something in the back of my mind & the depths of my guts tells me I should link the following here, albeit I remain completely clueless as to why, or how it could seem relevant to—& topical for—any of the above, so, I'll just drop it here without explanation:
There is this preoccupation in these documents with "applications" and "services" being an important part of the design of a network. I find it wrong to the point of being troubling. Although, granted, I say this now only with the power of hindsight.
With the power of hindsight: imagine an internet where Comcast, AT&T, et al. have such granular control over access to your infrastructure. À la carte billing based on each addressable service you happen to run, for example. DNS hijacking on steroids, as another. The development of new protocols for all but the biggest "participating" organizations would be stillborn. Capitalism would have strangled the Internet baby in the crib long ago if we had "done it right."
The road to hell is paved with good intentions and these are some of the best intentions.
>Imagine an internet where Comcast, AT&T, et al, have such granular control over access to your infrastructure
They already do, tho, we call it port filtering & deep packet inspection, and when they do that, people get mighty angry. ;)
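A toy illustration of how little machinery either one takes, once you can see the traffic at all (the blocked port and the payload patterns below are invented for the example, though blocking outbound SMTP and throttling on the BitTorrent handshake are both real-world classics):

```python
# Toy middlebox classifier: port filtering is a set lookup, and naive "deep
# packet inspection" is just substring matching on the payload.
BLOCKED_PORTS = {25}                     # e.g., an ISP blocking outbound SMTP
DPI_PATTERNS = [b"BitTorrent protocol"]  # the classic handshake marker

def verdict(dst_port: int, payload: bytes) -> str:
    if dst_port in BLOCKED_PORTS:
        return "drop (port filter)"
    if any(p in payload for p in DPI_PATTERNS):
        return "throttle (DPI match)"
    return "allow"

print(verdict(25, b"EHLO example.com"))              # drop (port filter)
print(verdict(6881, b"\x13BitTorrent protocol..."))  # throttle (DPI match)
print(verdict(443, b"\x16\x03\x01..."))              # allow
```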
Also, you seem to implicitly assume that they'd have come to gain as much power as they have now, but that seems doubtful, given how differently this would work.
Also, I think you (unintentionally!) attack a straw man there - the point here is more about matters like congestion control.
Besides:
I said what I said about hoping for cooperation between RINA & GNUnet for a reason:
RINA lacks the anti-censorship mechanisms of GNUnet, while GNUnet lacks some of the insights from RINA research.
And those anti-censorship mechanisms would make your point entirely moot.
You're probably right about the straw man, but I wasn't trying to make an argument against better models. Just rationalizing why what we got ain't so bad after all.
Yes, they do all that now, but it's kludgy, easy to detect, and a much more obvious overreach of their presumed authority. See the Comcast/Sandvine fiasco.
As to the implicit assumption of the telcos' powers under such a system: history is my biggest "proof". It's a reasonable assumption given economics and game theory. At best it wouldn't have been the telcos directly that ended up with the control. Someone would have, and the result would still be the same.
How about this: how many terrible government regulations about filtering and censorship that are technologically infeasible with the current internet would have been not only technically possible, but fully fledged features of an objectively better design?
Again I'm not arguing against research and better designs, just rationalizing what we got.
I don't agree: the OSI protocols were the classic camel-is-a-horse-designed-by-committee -- heavyweight, and they looked like a pain to use. Looked like, as I never saw a working stack.
The IETF/RFC/Working Code/Interop (RIP) approach has given IPv4 incredibly long legs. At least the OSI model itself kinda survived.
A book (on libGDX) uses one of my repos as a starting point, telling readers to clone my repo and make certain changes. I've left the repo alone, bugs and all, as I think it's cool that people use my code. But the authors never reached out or anything; I only discovered it by chance. I could easily have invalidated their whole chapter by accident.