
The fact that TCP (and its own institutional infrastructure) was already established is what made the whole OSI network effort even more enjoyably absurd. It was the last gasp of Big IT trying to take over the crazies. Most amusingly to me, it seemed only to be discussed in enterprise contexts and Very Important IT Journals. Such people were officially committed to deployment, while their own people were busy getting stuff done. IIRC the first nail in the coffin was the US military ignoring the naked emperor and officially deciding to stick with TCP. But by that time most people with real work to be done had already ignored the whole OSI effort.

> Discoverability was a problem in the early days.

Yeah, I remember being at a conference where a smart person (actually a smart person, no snark) said that discoverability was over, since indexing the web would require keeping a copy of everything, which, of course, is completely impossible. And we all nodded, because indeed, that did make sense. And about six months later, AltaVista launched and did the "completely impossible" anyway.




You both get so much of this story not even quite wrong, but more importantly, leave so much detail out that, if I didn't presume better (which I do! It would seem rather paranoid not to), I'd suspect lying by omission. All of this, including the story to follow, makes me genuinely sad (and I don't say that for exaggeration purposes; it really does have an emotional impact), although it doesn't surprise me: barely anyone realizes the true technological horror lurking deep in the history of the 'Internet'.

Please, consider looking A BIT more at the history of TCP/IP:

http://rina.tssg.org/docs/DublinLostLayer140109.pdf (Slides!)

http://rina.tssg.org/docs/How_in_the_Heck_do_you_lose_a_laye... Day, John - How in the Heck Do You Lose a Layer!? (2012)

Abstract:

"Recently, Alex McKenzie published an anecdote in the IEEE Annals of the History of Computing on the creation of INWG 96, a proposal by IFIP WG6.1 for an international transport protocol. McKenzie concentrates on the differences between the proposals that lead to INWG 96. However, it is the similarities that are much more interesting. This has lead to some rather surprising insights into not only the subsequent course of events, but also the origins of many current problems, and where the solutions must be found. The results are more than a little surprising."

And here, a rather lengthy excerpt from later in the paper, as I suspect a lot of people might presume that the paper would go for some points it definitely DOESN'T go for:

"[…]

This does not mean that we should be doing OSI. Good grief, no. This only implies that the data OSI had to work with brought them to the same structure INWG had come to.[10] OSI would have brought along a different can of worms. OSI was the state of understanding in the early 80s. We have learned a lot more since.[11] There was much unnecessary complexity in OSI and recent insights allow considerable simplification over even current practice.

OSI also split the addresses from the error and flow control protocol. This creates other problems. But the Internet’s course is definitely curious. Everyone else came up with an internet architecture for an internet, except them. These were the people who were continually stressing that they were building an Internet.

Even more ironic is that the Internet ceased to be an Internet on the day most people would mark as the birth of the Internet, i.e. on the flag day January 1, 1983 when NCP was turned off and it became one large network.

It was well understood at the time that two levels of addressing were required. This had been realized in the ARPANET when the first host with redundant network connections was deployed in 1972. The INWG structure provided the perfect solution. It is clear why the Internet kept the name, but less clear why they dropped the Network Layer. Or perhaps more precisely, why they renamed the Network Layer, the Internet Layer. Did they think, just calling it something different made it different?

[…]

————

[…]

[10] There was little or no overlap between SC6/WG2 and INWG.

[11] And if the politics had not been so intense and research had continued to develop better understanding, we would have learned it a bit sooner.

[…]

[22] Someone will ask, What about IPv6? It does nothing for these problems but make them worse and the problem it does solve is not a problem.

[…]"
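An aside, to make the "two levels of addressing" point in the excerpt concrete: the idea is that a node has one stable name of its own, separate from the addresses of its points of attachment, so a multihomed host (the redundant-connection case from 1972 mentioned above) survives losing one interface. A toy Python sketch of that distinction; all names here are invented for illustration, not taken from any real stack:

```python
# Toy illustration of two-level addressing: a node (host) address is
# distinct from the point-of-attachment (interface) addresses it owns.

# Each node has one stable node address and possibly several attachment
# points, e.g. a multihomed host with two network connections.
attachments = {
    "node-A": ["net1.3", "net2.7"],  # redundantly connected host
    "node-B": ["net1.9"],
}

def reachable_via(node):
    """Return every point of attachment through which a node can be reached."""
    return attachments.get(node, [])

# With only one level (naming the interface directly), losing net1.3
# breaks the "connection"; with two levels, node-A is still reachable:
assert "net2.7" in reachable_via("node-A")
```

Nothing deep, but it shows why collapsing the two levels into one (addressing only the attachment point) loses information the routing layer needs.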

http://csr.bu.edu/rina/KoreaNamingFund100218.pdf more slides!

And much more, here:

http://rina.tssg.org/ (I find it rather strange that the RINA folks and the GNUnet folks each seem to pull their own thing instead of working together; it very much seems like a repeat, hopefully not an inevitable one, of the very thing John Day describes in the slides & articles above…)

——

Addendum #1:

See also, for a network security perspective:

http://www.toad.com/gnu/netcrypt.html

http://bitsavers.informatik.uni-stuttgart.de/pdf/bbn/imp/BBN... see also Appendix H here, starting on PDF page 180

———

Addendum #2:

Something in the back of my mind & the depths of my guts tells me I should link the following here, albeit I remain completely clueless as to why, or how it could seem relevant to—& topical for—any of the above, so, I'll just drop it here without explanation:

https://en.wikipedia.org/wiki/Managed_Trusted_Internet_Proto...

(Interesting standards compliance section there, by the way.)


> You both get so much of this story so utterly… /not even/ __quite__ wrong

Sorry, but no. It is perfectly factual.

> leave so much detail

I was on mobile (hence the typo). I included the level of detail necessary to make my point.

Speaking of points – did you have one?


There is this preoccupation in these documents with "applications" and "services" being an important part of the design of a network. I find it wrong to the point of troubling, though granted, I say this only with the benefit of hindsight.

With that hindsight, imagine an internet where Comcast, AT&T, et al. have such granular control over access to your infrastructure: à la carte billing based on each addressable service you happen to run, for example, or DNS hijacking on steroids. The development of new protocols by all but the biggest "participating" organizations would be stillborn. Capitalism would have strangled the Internet baby in the crib long ago if we had "done it right."

The road to hell is paved with good intentions and these are some of the best intentions.


>Imagine an internet where Comcast, AT&T, et al, have such granular control over access to your infrastructure

They already do, though; we call it port filtering & deep packet inspection, and when they do that, people get mighty angry. ;)

Also, you seem to implicitly assume that they'd have come to gain as much power as they have now, but that seems doubtful, given how differently this would work.

Also, I think you (unintentionally!) attack a strawman there; the point here is more about matters like congestion control.

Besides:

I said what I said about hoping for cooperation between RINA & GNUnet for a reason:

RINA lacks the anti-censorship mechanisms of GNUnet, while GNUnet lacks some of the insights from RINA research.

And those anti-censorship mechanisms would make your point entirely moot.


You're probably right about the straw man, but I wasn't trying to make an argument against better models. Just rationalizing why what we got ain't so bad after all.

Yes, they do all that now, but it's kludgy, easy to detect, and a much more obvious overreach of their presumed authority. See the Comcast/Sandvine fiasco.

As to the implicit assumption about the telcos' powers under such a system: history is my biggest "proof". It's a reasonable assumption given economics and game theory. At best it wouldn't have been the telcos directly that ended up with the control; someone would have, and the result would still be the same.

How about this: how many terrible government regulations about filtering and censorship that are technologically infeasible with the current internet would have been not only technically possible, but fully fledged features of an objectively better design?

Again I'm not arguing against research and better designs, just rationalizing what we got.



