There is a substantial issue with this pattern for things like files -- the operating system imposes a limit on the maximum number of file descriptors a process can have open.
Defer gives no guarantees about when that thing is going to run -- just that it is after the function returns. Because of this, you can get erratic behavior if you open and defer closing a lot of files; essentially openFile just stops working at some point.
This can happen with any limited resource and isn't theoretical; I had a friend who ran into this in a long-running process with a lot of IO.
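The failure mode being described is deferring closes inside a loop of a single long-running function: none of the deferred closes run until that function returns, so every resource stays open at once. Here is a minimal sketch of both the leaky pattern and the usual fix (wrapping each iteration in a function literal); it uses a plain counter to stand in for file descriptors so it runs without actually opening files.

```go
package main

import "fmt"

// openCount simulates a limited resource: each "open" increments it, and
// each deferred "close" decrements it only when its surrounding function
// returns.
var openCount = 0

// leaky defers every close until the whole loop finishes, so all n
// resources are open simultaneously -- the pattern that can exhaust a
// descriptor limit.
func leaky(n int) (peak int) {
	for i := 0; i < n; i++ {
		openCount++                    // "open"
		defer func() { openCount-- }() // "close" -- but not until leaky returns
		if openCount > peak {
			peak = openCount
		}
	}
	return peak
}

// scoped wraps each iteration in a function literal, so the deferred
// close runs at the end of every iteration and at most one resource is
// open at a time.
func scoped(n int) (peak int) {
	for i := 0; i < n; i++ {
		func() {
			openCount++
			defer func() { openCount-- }()
			if openCount > peak {
				peak = openCount
			}
		}()
	}
	return peak
}

func main() {
	fmt.Println(leaky(100))  // prints 100: every open is still outstanding
	fmt.Println(scoped(100)) // prints 1: each iteration closes before the next
}
```

With real files the leaky version would eventually make `os.Open` return a "too many open files" error instead of just peaking at a large count.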
Defer statements run exactly at the moment of a function return (in LIFO order). They are equivalent to wrapping the rest of the function body in a try/finally. They have to, since you can change the return value in a defer (by using named return values).
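A quick illustration of that last point -- the deferred function runs after the return statement assigns the named result, but before the caller sees it, so it can still change the value:

```go
package main

import "fmt"

// namedResult shows that a deferred function runs at the moment of
// return and can still modify a named return value.
func namedResult() (n int) {
	defer func() {
		n *= 2 // runs after "return 5" sets n, before the caller sees it
	}()
	return 5
}

func main() {
	fmt.Println(namedResult()) // prints 10, not 5
}
```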
A "defer" statement invokes a function whose execution is deferred to the moment the surrounding function returns, either because the surrounding function executed a return statement, reached the end of its function body, or because the corresponding goroutine is panicking.
There might have been something else going on with your friend's code. Maybe the function he was using didn't return before running out of file descriptors?
I somehow completely misunderstood that from the Go documentation when first reading it; thank you for clarifying that.
There must have been some other leaking going on. He was using a largish library when he saw the leak and eventual crash, so maybe it was due to improper pooling or C API calls? I'll have to ask him more about it.