Hacker News

Why would the programming language have anything to do with it?



Some languages, such as Lisp, provide serialization of almost anything by default. Even in Lisp there are objects which don't make sense to serialize, such as a TCP connection; however, the components thereof can be collected and sent across the wire, or wherever else, in a standardized way. The C language, in comparison, offers a few serialization routines for non-structured types, and that's about all.

So, my point is that the ability to take running state, serialize it, and reinstate it elsewhere is only impressive to those who have misused computers for so long that they don't understand this was basic functionality by 1970 at the latest.


But this isn't state of the process image that needs to be serialised, it's state of the connection between two hosts and some kernel configuration on those hosts. Programming language doesn't play into it at all. Languages "such as Lisp" will have the exact same problem, for the same reason. Collecting all of the "components" of the connection and sending them to a different host won't make the other host start sending packets to the new recipient, or replay the in-flight packets (which is state on intermediate routers, different computers than the connected ones entirely), or fix the ARP tables on the neighbouring hosts. None of that is available, and certainly isn't writeable, to the host doing the serialising.

To play some silly semantics games, this isn't so much about _serialising_ a connection as it is about _deserialising_ the connection and having it work afterwards. That act has literally nothing to do with programming language.


> But this isn't state of the process image that needs to be serialised, it's state of the connection between two hosts and some kernel configuration on those hosts. Programming language doesn't play into it at all.

It is because UNIX is written in the C language that there are even multiple flat address spaces instead of segments or a single address space systemwide. The fact that the kernel exists at all is also due to this. It has everything to do with the implementation language.

> Languages "such as Lisp" will have the exact same problem, for the same reason.

Under UNIX, yes.

> Collecting all of the "components" of the connection and sending them to a different host won't make the other host start sending packets to the new recipient, or replay the in-flight packets (which is state on intermediate routers, different computers than the connected ones entirely), or fix the ARP tables on the neighbouring hosts. None of that is available, and certainly isn't writeable, to the host doing the serialising.

It may very well require some specialized machinery, but not nearly so much as one may think to be necessary.

> To play some silly semantics games, this isn't so much about _serialising_ a connection as it is about _deserialising_ the connection and having it work afterwards.

That's implicit. I needn't write of deserializing when writing of serializing, as one is worthless without the other, at least in most cases.

> That act has literally nothing to do with programming language.

Look at what Lisp and Smalltalk systems could do before UNIX existed and tell me that again.


> It is because UNIX is written in the C language that there are even multiple flat address spaces instead of segments or a single address space systemwide.

That is flat out wrong. C supports multi-programming in a system that has one address space (that includes the kernel too). Programs just have to be compiled relocatable.

You know, like what happens with shared libraries: which are written in C, and get loaded at different addresses in the same space, yet access their own functions and variables just fine.


Multics used segments and Lisp Machines had a single address space. UNIX breaks down quickly without multiple fake single address spaces for each program.

> Programs just have to be compiled relocatable.

Yes, and with unrestricted memory access, one program can crash the entire system.

> You know, like what happens with shared libraries: which are written in C, and get loaded at different addresses in the same space, yet access their own functions and variables just fine.

That is, except when one piece manipulates global state in a way another piece can't cope with, and at best the whole thing crashes. Dynamic linking in UNIX is so bad that some believe it can't work, and use static linking exclusively instead.


> UNIX breaks down quickly without multiple fake single address spaces for each program.

So do MS-DOS, Mac OS < 9, and others: any non-MMU OS.

> Yes, and with unrestricted memory access, one program can crash the entire system.

That's true in any system with no MMU that runs machine-language native executables written in assembly language or using unsafe compiled languages.

Historically, there existed partition-based memory management whereby even in a single physical address space, programs are isolated from stomping over each other.

https://en.wikipedia.org/wiki/Memory_management_(operating_s...


> when one piece manipulates global state in a way with which another piece can't cope

This problem is the same with both static and dynamic linking.

And lisp too!

> UNIX breaks down quickly without multiple fake single address spaces for each program.

Citation needed. I don't think my programs very commonly try to go completely outside their address space. The closest thing I see is null pointer crashes, which are still not very common, and those would work the same way in a shared address space.

Edit: Yes, fork doesn't work the same. That's a very narrow use case on the vast majority of machines.


But this isn't about address spaces. They're moving connections between hardware hosts. It sounds like you've got a drum to beat, but this isn't about that.



