
"Recompiling with Go 1.9 solved the problem, thanks to the switch to posix_spawn"

I never understood why so many people use fork() instead of posix_spawn(). For example, OpenJDK (Java) also does this as the default for starting a process, which leads to interesting results when you use it on an OS which does do memory overcommitting, like Solaris: since the process briefly doubles in memory use with fork(), your process will die with an out-of-memory error.




I thought of creating a fix myself way back, and the issue was that Go makes system calls directly. You basically have to re-implement posix_spawn in Go. If you look at their change, it includes updates to chipset-specific files, and the fix only seems to work on CPUs that report as amd64.


I must say I didn't go look at the sources of the patch, but what you say sounds odd enough that I'll take the chance and suggest that the source of confusion is perhaps the fact that in Go, "amd64" is, for historical reasons, the name of the architecture more neutrally known as "x86_64" (i.e. the fix doesn't only work on AMD CPUs, or on CPUs that report a specific maker/model).

The low-level syscall ABI is architecture-dependent.


AMD64 refers to 64-bit x86 chips from both Intel and AMD. While you are right that the term "x86_64" is also in common use, AMD64 is actually the more standard name (as well as the term specifically used in the Go ecosystem, e.g. the $GOARCH env var and build parameters for cross-platform sources).

Further to that point, I didn't detect any confusion from others in this thread that AMD64 excluded Intel chips. Where they were talking about AMD64-specific code, they were saying that Go code targeting other architectures (e.g. arm, mips, s390x and ppc, to name a few; Go supports an impressive number of architectures[1]) would still use their respective fork() code rather than this new fix.

[1] https://golang.org/doc/install/source#introduction


amd64 is the original name of the instruction set. Intel did beat AMD to a 64-bit instruction set: that of the Itanium processors, IA-64. Itanium had performance issues and lots of errata. Most importantly, IA-64 was not natively backwards-compatible with x86 instructions. amd64 became the standard.

x86_64 is a common name for the amd64 architecture, and is a way to describe both the AMD and Intel implementations. In my opinion, amd64 is a less ambiguous name and is more historically accurate.

https://en.m.wikipedia.org/wiki/X86-64

Yes, I am aware that my point is undercut by the fact that the article title is x86-64, but I stand by my statement.


It's not just the article title. Follow the two footnotes in the "History" section of that article, to the press releases from AMD announcing the new ISA. They consistently call it "AMD x86-64" or "AMD's x86-64" or just "x86-64". The oldest snapshot I could find of the x86-64 web site (https://web.archive.org/web/20000817014037/http://www.x86-64...) also calls it x86-64. The most recent snapshot of that site, however, calls it AMD64; it seems to have changed sometime in the middle of April 2003.

That is, both x86-64 and AMD64 are historically accurate (2003 was early enough in the ISA's lifetime), but x86-64 is the earlier name.


Go uses system calls directly because the alternative in UNIX land is linking with libc.


Because every straightforward way of running an external command on Unix involves fork(). So someone wrote that API not thinking much of it.

Then, shock horror, they realize running a throwaway command fork()s the main process. But now everyone is too angsty to change it, because someone out there might rely on the environment-copy functionality, even when they shouldn't.


Because decades of written material about Unix says fork() is really cool (even though it isn't)?


fork was alright before people tacked cruft like threads and whatnot onto commercial unixes and those became mainstream. the current problem is that you don't want to have to copy all file descriptors if all you're going to do is call "exec" and reduce them to three: in, out, err.

for example, here's the caveats section from the macOS fork man page:

     There are limits to what you can do in the child process.  To be totally safe you should restrict
     yourself to only executing async-signal safe operations until such time as one of the exec
     functions is called.  All APIs, including global data symbols, in any framework or library should
     be assumed to be unsafe after a fork() unless explicitly documented to be safe or async-signal
     safe.  If you need to use these frameworks in the child process, you must exec.  In this situation
     it is reasonable to exec yourself.
That spells defeat :)

Earlier in the game, copy-on-write had to be created for the same reasons.


> the current problem is that you don't want to have to copy all file descriptors if all you're going to do is call "exec" and reduce them to three: in, out, err.

To be clear, exec does not necessarily close all but the first three fds -- by default all fds will be inherited. However, you can set the close-on-exec flag on each individual fd (in fact, that's what the Go stdlib does behind the scenes).

Search for FD_CLOEXEC in fcntl(2) and open(2) and you'll see what I'm referring to.

http://man7.org/linux/man-pages/man2/fcntl.2.html

http://man7.org/linux/man-pages/man2/open.2.html
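
Setting the flag after the fact looks roughly like this in C (untested sketch; the log file name is just a placeholder):

    #include <fcntl.h>
    #include <stdio.h>

    int main(void) {
        /* Open a descriptor the usual way (hypothetical file name)... */
        int fd = open("/tmp/example.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* ...then mark it close-on-exec so it is not inherited across exec(). */
        int flags = fcntl(fd, F_GETFD);
        if (flags < 0 || fcntl(fd, F_SETFD, flags | FD_CLOEXEC) < 0) {
            perror("fcntl");
            return 1;
        }

        /* open() also accepts O_CLOEXEC directly, which avoids the race
           between open() and fcntl() in multi-threaded programs. */
        return 0;
    }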


fork() is a pretty simple way to be able to modify the environment for a process you will spawn. After fork(), the child can modify its own environment using various orthogonal system calls, e.g. to redirect stdout/stderr or drop permissions, and then exec the target executable.

Threads throw a wrench in things, but fork() existed for decades before threads, and O_CLOEXEC etc. help. Lots of command-line utilities don't use threads.

fork() isn't the fastest way - but in many situations it's not a problem, it's just convenient. In that respect it's somewhat like using python when you could have used go.
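
A rough, untested C sketch of that fork-then-adjust-then-exec pattern (the log path and the ls command are just placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {
            /* Child: adjust its own environment before exec'ing. */
            int fd = open("/tmp/out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) _exit(127);
            dup2(fd, STDOUT_FILENO);   /* redirect stdout to the log file */
            close(fd);
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);                /* only reached if exec fails */
        }

        /* Parent: wait for the child as usual. */
        int status;
        waitpid(pid, &status, 0);
        return 0;
    }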


Another nice example is changing the working directory for the new process. With fork+exec, you can do a chdir after fork but before exec. With posix_spawn you're stuck with the working directory of the parent.
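
For illustration, an untested sketch of that (the directory and command are placeholders):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* Child: change directory, then exec; the parent's cwd is untouched. */
            if (chdir("/var/tmp") != 0) _exit(127);
            execlp("make", "make", (char *)NULL);
            _exit(127);  /* only reached if exec fails */
        }
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }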


Because posix_spawn() on Linux often calls fork(). I just looked at the manpage now and it says that under some conditions it'll call vfork() instead, but I don't remember that being the case when I last looked at this (6-7 years ago?).


Nowadays, posix_spawn() calls clone() on Linux, with the CLONE_VM flag, behaving much like vfork() as far as I can tell.

That means the child and parent process share memory (until exec() is performed).

Especially if the parent process is multi-threaded, this avoids a whole lot of page faults that would occur with fork() when another thread touches memory, triggering copy-on-write in the window between the fork() call and the child's exec() call.

Code: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/uni...
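
For comparison with the fork()+exec() sketches above, a minimal posix_spawnp() call (the PATH-searching variant) looks roughly like this (untested; the command and arguments are placeholders):

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };

        /* NULL file actions and attributes: inherit fds and signal settings. */
        int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawnp: error %d\n", err);
            return 1;
        }

        int status;
        waitpid(pid, &status, 0);  /* reap the child as usual */
        return 0;
    }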


Typo: "does do" should read "doesn't do".



