I'm all for questioning the validity of certain code patterns, but there are some issues with this post.
Functional Options (or config/option initialization) shouldn't really ever happen in a "hot path" where performance really matters, as these are usually one off steps at the time of construction/initialization. As with most things in Go, start with usability/readability then measure and tune when/where needed.
With that in mind, the author doesn't give a concrete example of when the Functional Option pattern might be used in a hot path, in which case I'd certainly agree there are better patterns to use.
The post then adds benchmarks which (ignoring function inlining) are relatively comparable for Functional Options vs. Config Struct, with a notable increase when using interfaces (as with many things in Go). But these results are still on the order of ~100ns, so they're more accurately characterized as "relatively" slow.
This is the proper frame to analyze this issue. If you're using Functional Options to configure a long-running http server once at startup, the cost is so small that you've already spent more money thinking about it for 1 minute than it will ever cost in compute runtime. But if you're using it once per request over thousands of requests, or once per record with thousands of records per request then maybe it's time to consider using a more lightweight configuration pattern.
If you approach every program with that attitude you will never have an RPC subsystem where nanoseconds matter. You will be trapped in a self-fulfilling process where nanoseconds don't matter because the system is slow.
The efficiency of initiating the call limits how many calls the program can initiate per second, per thread. The potential latency of the response is irrelevant.
> Functional Options (or config/option initialization) shouldn't really ever happen in a "hot path" where performance really matters, as these are usually one off steps at the time of construction/initialization.
That's not true at all. As just one counterexample, the place where I have spent the most time wrestling with functional options is with OpenTracing, where performance overhead absolutely does matter.
They said “shouldn’t”, not “doesn’t”. A good rule of thumb: don’t use OpenTracing in any function where you expect to measure anything less than 10ms.
I dunno about the Go community as a whole, but /r/golang discussions have been trending back to just using configuration structs, rather than any of the other fancy options proposed over the years.
One of the advantages it has is that it's simple, so it works with all the language mechanisms quite naturally. Do you want to factor out a particular set of three settings? Just write a "func MyFactoredSettings(cfg *ConfigStruct)" and do the obvious thing. Do you need more arguments for your refactoring for some reason? It's a function, do the obvious thing. No mysteries.
I am reminded of the function programming observation that functions already do a lot on their own and are really useful. Additional abstractions around this may superficially look neater in isolation but I have been increasingly suspicious of anything that makes it harder to take a chunk out of the middle of my code and turn it into a function, and despite the name, "functional options" kinda have that effect. (Go is obviously no Haskell here... not many things are Haskell... but it's at least a similar issue. Anything getting in the way of basic refactoring with functions should be looked at suspiciously.)
(I would also say that while you can refactor functional options, there is something about it that seems to inhibit people from thinking about it. Similar to "chaining" that way.)
I mean, of course they're slow, it's varargs (on the heap, garbage collected), dynamic function closures (on the heap, garbage collected), and a series of indirect function calls. You really don't need a bunch of benchmarks to tell me it's slow, I believe you. But as much as it pains me, and any premature optimizer, to write "...func(*config)", I don't see the problem unless you find it in a hot section of real code and then do real benchmarks on the code to solve a real problem; these blog post benchmarks are not helpful. I bet regexp.Compile is slow too, but I don't complain about it until I find it in a hot section of code.
This saves the stuttering of `Frob(FrobOptions{`, should have identical performance to that, with nicer syntax, and has a smooth upgrade path for all the sorts of things folks do with the command pattern (such as logging, dynamic dispatch, delayed execution, scripting, etc).
I don't understand why Go developers choose the most complicated solutions. A function returning a function returning a function is an awful style of coding that is hard to read. I see at least two simple ways to pass options:
1) named arguments:
createFoo(barness: "bar", bazness: True)
2) struct with default values:
createFoo(FooConfig{barness = "bar"})
Go might not have these features, but I guess it is easier to add them than to invent weird "function returns function" tricks.
With functional options the code must be duplicated: first you define a field in a struct, then a functional option that sets that field to a given value. With the ideas above no duplication is necessary.
Using currying by itself is fine and easy to understand. But if you write a function that returns a function that returns a function — that's not easy.
For example, compare these two functions:
function (a, b) { return a + b; }
function create_adder(a) { return function(b) { return a + b;}}
The purpose of the first function is easier to understand. And if you rewrite the second function according to modern trends, it can become even less readable:
const create_adder = (a) => (b) => a + b;
Because the first is a function returning a function, and you have to think about how that works, while the second is just a simple function that returns a sum.
Also, if you start a function with the word 'const', it is less obvious that this is a function and not just a constant. In contrast, if you use the word 'function', you understand what it is from the first word.
Go developers came up with this pattern to deal with the language's limitations. Go's core team would probably advocate for a mutable configuration struct with a magical interpretation of zero values, and/or a constructor-by-convention and advise users to "just not make mistakes".
If we had non-zero default values or named arguments, this pattern wouldn't exist.
Python just gives you a choice, use named arguments or refactor your API and use objects for options. It is up to you how you are going to use it.
What I don't like, though, is when all those options are hidden behind a `**kwargs` parameter, making it difficult to understand what options are allowed, what they mean, what types they have, and so on.
Yes, it is perfectly readable. But: there is a function returning a function.
Haskell just decided to have a syntax (and type system) that can handle this without inducing nausea in the programmer.
As with everything, this comes with trade-offs. For example, Haskell's syntax (and type system) have a much harder time dealing with optional arguments. Something that eg Python does just fine, and that even C handles OK in the form of varargs.
Neither of those addresses the issue of scoped overrides of a setting. The benefit of the functional-options approach is that each option returns the inverse setting, so you can very trivially do scoped overrides for things like log levels.
Your examples seem to be reducing the problem space to exclusively object creation time. And in that case, yes named params or a struct with default values work great. But they work a lot less great when you're talking about changing an existing object, as now your default values can't just be the actual default values, but rather optional values since you need to distinguish between "set to a value that happened to be the same as the default" and "didn't set a value at all, so don't change it".
Is that really such a common case? Obviously it depends what you’re configuring, but I definitely would not expect that it’s typically OK to jump in and modify the configuration of an object that’s already in use. What if it does do some expensive one-off setup using the supplied config at object creation time?
If you really need a general scoped override, it could be done in the config struct approach just by copying and restoring the entire config. This might be expensive if the struct is big, but on the other hand, you could change multiple properties at once which doesn’t look possible in the function-based approach.
If the object has an API to do functional options, why wouldn't you expect it to be viable to do a scoped override? The API for dynamically changing them already implies they are available to be changed frequently.
As others have said here, it just feels really un-Go-like (although I'm not a Go user so maybe I don't have the right intuition). Do you really need such a high-level, abstract mechanism just to be able to turn on trace logging? The clunky "big struct of options" approach seems much more idiomatic, and doesn't have any immediately obvious downsides. (That blog post you linked says it's "unsatisfactory"; I guess there's more detail on that elsewhere.)
For scoped logging, the obvious approach would just be to add a "setVerbosity" method to your object. You can try to do it in the most general way possible, sure; but that feels similar to building lots of generic data structures and other abstractions, which is something that Go intentionally resisted for a long time (on the basis of keeping things simple and concrete, rather than building ivory towers of abstractions).
Anyway, the "Functional Options Are Slow" article reads like a post-hoc justification written by somebody who never liked functional options in the first place anyway -- i.e., the stuff in the final section. The real argument is "I think functional options are a bad idea for style and usability, but if that doesn't convince you, you can also look at the raw performance numbers, and it turns out it's really expensive too." Which I think is a good argument!
You're conflating developers who write in Go with developers who write Go. It's easier for me to write weird "function returns function" tricks than it is to fork a compiler.
The interesting part of this article was the subjective reasons to avoid functional arguments. As the article says you most commonly see them used in initialising something, the performance difference identified in the benchmark is unlikely to matter, it’s more a matter of preference and aesthetics.
Sure the relative difference between the fastest and slowest approach is 5x. But the absolute difference is still just 125.23 ns. For some perspective, 1 millisecond / 125.23 ns = ~7,985.
There certainly are cases where that slowdown matters. But for the vast, vast, VAST majority of applications, it is completely irrelevant.
Dunno, it's not unheard of for a program to do the same thing over and over again, sometimes millions of times. Every slow program consists of a bunch of individually fast instructions.
The main benefit is that you can have configuration options without having to specify all values, and also have non-zero defaults. Let's say you had something like Sarama's config struct, which contains 50 or so config knobs. Constructing it directly, e.g. `&sarama.Config{}`, will lead to some terrible defaults.
Here, with this config, there is a config option `MaxMessageBytes` which will be set to 0, which will reject all your messages. What Sarama does is let you pass a `nil` config, which loads defaults, or call `sarama.NewConfig()` to get a pre-populated struct and override individual fields, and so on. This is OK but can be cumbersome, especially if you just need to change one or two options, or if some config options need to be initialized. Also, someone can still do `&Config{...}` and shoot themselves in the foot. The functional-options style is more concise.
I used to be a fan of this style, and I even have an ORM built around this style (ex. Query(WithID(4), WithFriends(), WithGroup(4))), but I think for options like these a Builder pattern is actually better if your intention is clarity.
The problem isn't that users are accidentally using it. People are using it intentionally.
The problem is that some of the config fields are mandatory, and some are optional, but the developer can't tell whether the fields were intentionally set to a weird value or just not specified, and users can't tell which fields need to be set to not have strange behavior.
You would need to add a Verified field for each config option so you can tell whether users set it or it's a default.
Alternately, you can use pointers, but that gets clunky because you can't do
conf := sarama.Connect(Config{ ClientID: &"foo"})
You have to declare a variable above, and then reference the var.
Zero values are sometimes valid config options, but not what most people actually want to do. Structs cause issues because there's no way to specify just a couple of fields in the struct; the rest will get zero values.
1. If the verified field is not set, your program blows up at runtime. I don't think it's acceptable for programs to blow up at runtime for errors that can be caught at compile time.
2. Your example still has the same problem when it comes to `MaxMessageBytes`: what if I really meant to set it to 0? The `WithDefaults` function would eat that. You could use a pointer, but that just complicates the API (you need to do `new(int)` or declare a separate variable and assign it).
func DoSomethingVerbosely(foo *Foo, verbosity int) {
prev := foo.Option(pkg.Verbosity(verbosity))
defer foo.Option(prev)
// ... do some stuff with foo under high verbosity.
}
but I don’t see why that’s better than
func DoSomethingVerbosely(foo *Foo, verbosity int) {
prev := foo.setVerbosity(verbosity)
defer foo.setVerbosity(prev)
// ... do some stuff with foo under high verbosity.
}
It also mentions that it allows you to “set lots of options in a given call”. That, you could sort of accomplish by having the setProperty methods return the changed object, thus allowing chaining (e.g. foo.setVerbosity(v).setDryRun(true)).
This allows both that defer and setting lots of options in a single call.
Given the limited features of Go, it’s a nice hack, but I don’t like it. To me, it doesn’t feel like it fits the philosophy of Go.
Perhaps this is the kind of problem that a macro system, like the one in Common Lisp, would let you solve, paying the price in a little more compilation time but not at runtime.