Go: Functional Options Are Slow (evanjones.ca)
80 points by zdw on May 25, 2022 | 56 comments



I'm all for questioning the validity of certain code patterns, but there are some issues with this post.

Functional Options (or config/option initialization) shouldn't really ever happen in a "hot path" where performance really matters, as these are usually one off steps at the time of construction/initialization. As with most things in Go, start with usability/readability then measure and tune when/where needed.

With that in mind, the author doesn't give a concrete example of when a Functional Option pattern might be used in a hot path, in which case I certainly agree there are better patterns to use.

The author then adds benchmarks which (ignoring function inlining) are relatively comparable for Functional Options vs Config Struct, with a notable increase when using interfaces (as with many things in Go). But these results are still on the order of ~100ns. I think they can more accurately be characterized as "relatively" slow.
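
For a sense of what is actually being measured, a minimal benchmark of the two styles might look something like this (all names are invented for this sketch, not taken from the article's code; put it in a _test.go file):

    package opts // in a _test.go file

    import "testing"

    // Hypothetical configuration used only for this sketch.
    type config struct{ port int }

    // Functional option: a closure that mutates the config.
    type Option func(*config)

    func WithPort(p int) Option { return func(c *config) { c.port = p } }

    func newServerOpts(opts ...Option) config {
        c := config{port: 8080} // default
        for _, o := range opts {
            o(&c)
        }
        return c
    }

    func newServerStruct(c config) config {
        if c.port == 0 {
            c.port = 8080 // default
        }
        return c
    }

    var sink config // prevent the compiler from eliding the work

    func BenchmarkFunctionalOptions(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = newServerOpts(WithPort(9090))
        }
    }

    func BenchmarkConfigStruct(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = newServerStruct(config{port: 9090})
        }
    }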


This is the proper frame to analyze this issue. If you're using Functional Options to configure a long-running http server once at startup, the cost is so small that you've already spent more money thinking about it for 1 minute than it will ever cost in compute runtime. But if you're using it once per request over thousands of requests, or once per record with thousands of records per request then maybe it's time to consider using a more lightweight configuration pattern.


The functional pattern on every request is quite common. Think gRPC-go's withContext(withDeadline()) pattern.


If you are trying to shave nanoseconds off an RPC you have architectural issues.


If you approach every program with that attitude you will never have an RPC subsystem where nanoseconds matter. You will be trapped in a self-fulfilling process where nanoseconds don't matter because the system is slow.


It's physically impossible for nanoseconds to matter for remote calls - most individual servers are larger than one light-nanosecond.


The efficiency of initiating the call limits how many calls the program can initiate per second, per thread. The potential latency of the response is irrelevant.


> Functional Options (or config/option initialization) shouldn't really ever happen in a "hot path" where performance really matters, as these are usually one off steps at the time of construction/initialization.

That's not true at all. As just one counterexample, the place where I have spent the most time wrestling with functional options is with OpenTracing, where performance overhead absolutely does matter.


They said "shouldn't", not "doesn't". A good rule of thumb: don't use OpenTracing in any function where you expect to measure anything less than 10ms.


Exactly, general rule of thumb: One shouldn't deal in absolutes ;).


I dunno about the Go community as a whole, but /r/golang discussions have been trending back to just using configuration structs, rather than any of the other fancy options proposed over the years.

One of the advantages it has is that it's simple, so it works with all the language mechanisms quite naturally. Do you want to factor out a particular set of three settings? Just write a "func MyFactoredSettings(cfg *ConfigStruct)" and do the obvious thing. Do you need more arguments for your refactoring for some reason? It's a function, do the obvious thing. No mysteries.
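
For example, something like this (the field names are placeholders for this sketch):

    package main

    import "time"

    // ConfigStruct and its fields are invented for illustration.
    type ConfigStruct struct {
        Timeout time.Duration
        Retries int
        Verbose bool
    }

    // MyFactoredSettings applies one related bundle of settings.
    func MyFactoredSettings(cfg *ConfigStruct) {
        cfg.Timeout = 30 * time.Second
        cfg.Retries = 3
        cfg.Verbose = true
    }

    func main() {
        cfg := ConfigStruct{}
        MyFactoredSettings(&cfg)
        cfg.Verbose = false // and override whatever you like afterwards
        _ = cfg
    }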

I am reminded of the functional programming observation that functions already do a lot on their own and are really useful. Additional abstractions around this may superficially look neater in isolation, but I have been increasingly suspicious of anything that makes it harder to take a chunk out of the middle of my code and turn it into a function, and despite the name, "functional options" kinda have that effect. (Go is obviously no Haskell here... not many things are Haskell... but it's at least a similar issue. Anything getting in the way of basic refactoring with functions should be looked at suspiciously.)

(I would also say that while you can refactor functional options, there is something about it that seems to inhibit people from thinking about it. Similar to "chaining" that way.)


> discussions have been trending back to just using configuration structs, rather than any of the other fancy options proposed over the years.

It may look a little fatter, and probably copies some fields that will end up in the configuration, but...

1. Go is very adept at copying large structures.

2. A fully scaffolded struct is far easier to read than something hidden inside a function somewhere.


I mean, of course they're slow, it's varargs (on the heap, garbage collected), dynamic function closures (on the heap, garbage collected), and a series of indirect function calls. You really don't need a bunch of benchmarks to tell me it's slow, I believe you. But as much as it pains me, and any premature optimizer, to write "...func(*config)", I don't see the problem unless you find it in a hot section of real code and then do real benchmarks on the code to solve a real problem; these blog post benchmarks are not helpful. I bet regexp.Compile is slow too, but I don't complain about it until I find it in a hot section of code.


My preferred idiom is essentially the command pattern:

    type Frob struct {
      SomeFlag bool
      AnotherArg string
    }

    func (args Frob) Do() FrobResult {
      // ...
    }

    // Later:

    res := Frob{SomeFlag: true}.Do()
This saves the stuttering of `Frob(FrobOptions{`, should have identical performance to that, with nicer syntax, and has a smooth upgrade path for all the sorts of things folks do with the command pattern (such as logging, dynamic dispatch, delayed execution, scripting, etc).


Very neat way to do this.


Don't understand why Go developers choose the most complicated solutions. Function returning function returning function is an awful style of coding that is hard to read. I see at least two simple ways to pass options:

1) named arguments:

createFoo(barness: "bar", bazness: True)

2) struct with default values:

createFoo(FooConfig{barness = "bar"})

Go might not have these features, but I guess it is easier to add them than to invent weird "function returns function" tricks.

With functional options the code needs to be duplicated: first, you need to define a field in a struct and then a functional option that sets that field to a given value. With ideas above no duplication is necessary.
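
To make the duplication concrete: every option ends up spelled twice, once as a field and once as a function that sets it. Roughly (all names invented here, riffing on the hypothetical createFoo above):

    // 1. The field...
    type fooConfig struct {
        barness string
    }

    type FooOption func(*fooConfig)

    // 2. ...and the option whose only job is to set that field.
    func WithBarness(b string) FooOption {
        return func(c *fooConfig) { c.barness = b }
    }

    type Foo struct{ cfg fooConfig }

    func CreateFoo(opts ...FooOption) *Foo {
        cfg := fooConfig{barness: "default"}
        for _, opt := range opts {
            opt(&cfg)
        }
        return &Foo{cfg: cfg}
    }

    // Usage: CreateFoo(WithBarness("bar"))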


> Function returning function returning function is an awful style of coding that is hard to read.

Although I hate functional options for their lack of discoverability, the idea that currying is "awful" is pretty far-fetched.


Given that the parent comment says "hard to read", an obvious charitable interpretation of their comment implies "with Go's syntax".

Currying in languages that use different syntax is orthogonal to the point being made here.


Using currying by itself is fine and easy to understand. But if you write a function that returns a function that returns a function — that's not easy.

For example, compare these two functions:

function (a, b) { return a + b; }

function create_adder(a) { return function(b) { return a + b;}}

The purpose of the first function is easier to understand. And if you rewrite the second function according to modern trends, it can become even less readable:

const create_adder = (a) => (b) => a + b;


No-one would actually do this in real code though - this pattern is typically used for partial application.


How is (a) => (b) => a+b less readable than (a,b) => a+b?


Because the first is a function returning a function, and you have to think about how that works, while the second one is just a simple function that returns a sum.

Also, if you start a function with the word 'const', it is less obvious that this is a function and not just a constant. In contrast, if you use the word 'function', you understand what it is from the first word.


That's only because you're not used to currying.

A function is “just” a constant.


Go developers came up with this pattern to deal with the language's limitations. Go's core team would probably advocate for a mutable configuration struct with a magical interpretation of zero values, and/or a constructor-by-convention and advise users to "just not make mistakes".

If we had non-zero default values or named arguments, this pattern wouldn't exist.


As a former Python dev, default arguments are nice but they get abused to hell.

Need to add functionality to something? Don't think! Just add an argument with a default to the current behavior, all the way up and down the stack.

Now you have just one API that does everything! Just set 20-40 parameters to decide the behavior.


Python just gives you a choice: use named arguments, or refactor your API and use objects for options. It is up to you how you are going to use it.

What I don't like, though, is when all those options are hidden behind a double-star keyword argument and it is difficult to understand which options are allowed, what they mean, what type they have, and so on:

def example(**options): ...


There would still be reasons to have configuration structs if we had named arguments.


> Function returning function returning function is an awful style of coding that is hard to read.

Hey, that's basically how any function with more than one argument is implemented in Haskell. (Look up currying.)

It's not so much that this style is 'awful' in any universal sense; it's more that Go is a terrible, terrible host language for anything in this style.


Haskell hides this complexity and allows you to write just:

add x y = x + y

There are no functions returning functions here, just a normal addition. It is perfectly readable.


Yes, it is perfectly readable. But: there is a function returning a function.

Haskell just decided to have a syntax (and type system) that can handle this without inducing nausea in the programmer.

As with everything, this comes with trade-offs. For example, Haskell's syntax (and type system) have a much harder time dealing with optional arguments. Something that eg Python does just fine, and that even C handles OK in the form of varargs.


Neither of those addresses the issue of scoped overrides of a setting. The benefit of the functional options approach is that they return the inverse setting, so you can very trivially do scoped overrides for things like log levels. E.g.:

   prevVerbosity := foo.Option(pkg.Verbosity(3))
   foo.DoSomeDebugging()
   foo.Option(prevVerbosity)
(better yet, use defer to restore)

Your examples seem to be reducing the problem space to exclusively object creation time. And in that case, yes named params or a struct with default values work great. But they work a lot less great when you're talking about changing an existing object, as now your default values can't just be the actual default values, but rather optional values since you need to distinguish between "set to a value that happened to be the same as the default" and "didn't set a value at all, so don't change it".
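
For anyone who hasn't seen it, the mechanism behind "returns the inverse setting" is roughly this (a simplified sketch of the pattern from the self-referential-functions post linked elsewhere in the thread, not real library code):

    type Foo struct {
        verbosity int
    }

    // An Option applies a setting and returns another Option that
    // restores the previous value: the "inverse setting".
    type Option func(*Foo) Option

    func Verbosity(v int) Option {
        return func(f *Foo) Option {
            prev := f.verbosity
            f.verbosity = v
            return Verbosity(prev)
        }
    }

    // Option applies opt and hands back its inverse, which is what
    // makes scoped overrides (optionally with defer) work.
    func (f *Foo) Option(opt Option) Option {
        return opt(f)
    }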


Is that really such a common case? Obviously it depends what you’re configuring, but I definitely would not expect that it’s typically OK to jump in and modify the configuration of an object that’s already in use. What if it does do some expensive one-off setup using the supplied config at object creation time?

If you really need a general scoped override, it could be done in the config struct approach just by copying and restoring the entire config. This might be expensive if the struct is big, but on the other hand, you could change multiple properties at once which doesn’t look possible in the function-based approach.


If the object has an API to do functional options, why wouldn't you expect it to be viable to do a scoped override? The API for dynamically changing them already implies they are available to be changed frequently.

The source of this pattern, per the article, is https://commandcenter.blogspot.com/2014/01/self-referential-...

The example used is trace verbosity, which is both useful to do a scoped override and cheap to do.


As others have said here, it just feels really un-Go-like (although I'm not a Go user so maybe I don't have the right intuition). Do you really need such a high-level, abstract mechanism just to be able to turn on trace logging? The clunky "big struct of options" approach seems much more idiomatic, and doesn't have any immediately obvious downsides. (That blog post you linked says it's "unsatisfactory"; I guess there's more detail on that elsewhere.)

For scoped logging, the obvious approach would just be to add a "setVerbosity" method to your object. You can try to do it in the most general way possible, sure; but that feels similar to building lots of generic data structures and other abstractions, which is something that Go intentionally resisted for a long time (on the basis of keeping things simple and concrete, rather than building ivory towers of abstractions).

Anyway, the "Functional Options Are Slow" article reads like a post-hoc justification written by somebody who never liked functional options in the first place anyway -- i.e., the stuff in the final section. The real argument is "I think functional options are a bad idea for style and usability, but if that doesn't convince you, you can also look at the raw performance numbers, and it turns out it's really expensive too." Which I think is a good argument!


You're conflating developers who write in Go with developers who write Go. It's easier for me to write weird "function returns function" tricks than it is to fork a compiler.


Go definitely has these features and it’s the standard way to pass config.

CreateFoo(FooConfig{bar: "baz"})


The interesting part of this article was the subjective reasons to avoid functional options. As the article says, you most commonly see them used when initialising something, so the performance difference identified in the benchmark is unlikely to matter; it's more a matter of preference and aesthetics.


One thing that I find nicer with functional options is building tree-like data structures.

My command line parsing library uses them to declaratively build CLI apps with arbitrarily nested subcommands.

Some examples at https://github.com/bbkane/warg/tree/master/examples


Oh that's nothing. In Java I came up with a way to pass config options using method references as keys, so you can write something like:

  createFoo(with(FooOptions::enable, true), with(FooOptions::size, 7));
And processing the options involves serialising each method reference to work out what it is!


Sure the relative difference between the fastest and slowest approach is 5x. But the absolute difference is still just 125.23 ns. For some perspective, 1 millisecond / 125.23 ns = ~7,985.

There certainly are cases where that slowdown matters. But for the vast, vast, VAST majority of applications, it is completely irrelevant.


Dunno, it's not unheard of for a program to do the same thing over and over again, sometimes millions of times. Every slow program consists of a bunch of individually fast instructions.


If your hot loop is doing configuration setup then your code has bigger problems than this particular pattern.


You could also be configuring many things.


Indeed, but you should probably do that right before looping


Right, but if you needed to configure a million different things, wouldn't you also loop the configuration process?


What are the benefits that this functional options style offers? Is it that you can add more options without having to define a new field in a struct?

One could even add all the With methods to the struct to get some fluent/builder pattern.

Edit: is it so that you don’t need to instantiate a new struct at each call site?


The main benefit is that you can have configuration options without having to specify all values, and also have non-zero-value defaults. Let's say you had something like Sarama's config struct, which contains 50 or so config knobs. The following will lead to some terrible defaults:

    NewConsumer("kafka:9043", Config{ClientID: "foo"})
Here, with this config, the `MaxMessageBytes` option will be set to 0, which will cause all your messages to be rejected. What Sarama does is let you pass a `nil` config, which loads the defaults, or:

    conf := sarama.NewConfig()
    conf.ClientID = "foo"
    conf.RackID = "bar"
    NewConsumer("kafka:9043", conf)
and so on. This is ok, but it can be cumbersome, especially if you just need to change one or two options, or if some config options need to be initialized. Also, someone can still do &Config{...} and shoot themselves in the foot. The functional options style is more concise:

    NewConsumer("kafka:9043", WithClientID("foo"), WithRackID("bar"))
I used to be a fan of this style, and I even have an ORM built around it (e.g. Query(WithID(4), WithFriends(), WithGroup(4))), but I think for options like these a Builder pattern is actually better if your intention is clarity.
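
By a Builder I mean something roughly like this (stand-in types, not Sarama's actual API):

    // Stand-in types for this sketch.
    type Config struct {
        Addr     string
        ClientID string
        RackID   string
    }

    type Consumer struct{ cfg Config }

    type ConsumerBuilder struct{ cfg Config }

    func NewConsumerBuilder(addr string) *ConsumerBuilder {
        // Start from sane defaults so zero values never leak through.
        return &ConsumerBuilder{cfg: Config{Addr: addr, ClientID: "sarama"}}
    }

    func (b *ConsumerBuilder) ClientID(id string) *ConsumerBuilder { b.cfg.ClientID = id; return b }
    func (b *ConsumerBuilder) RackID(id string) *ConsumerBuilder   { b.cfg.RackID = id; return b }

    func (b *ConsumerBuilder) Build() (*Consumer, error) {
        return &Consumer{cfg: b.cfg}, nil
    }

    // Usage:
    //   c, err := NewConsumerBuilder("kafka:9043").ClientID("foo").RackID("bar").Build()

It's more typing up front, but the call site reads clearly and the defaults live in exactly one place.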


I don't write a huge amount of Go, but why not something like:

    conf := sarama.WithDefaults(Config{ ClientID: "foo", RackID: "bar" })
To prevent accidental use, add a "Verified" field that must be set before the configuration is accepted.

You could also use the above style in a single shot with NewConsumer if you didn't need to retain the configured struct after creation.


The problem isn't that users are accidentally using it. People are using it intentionally.

The problem is that some of the config fields are mandatory, and some are optional, but the developer can't tell whether the fields were intentionally set to a weird value or just not specified, and users can't tell which fields need to be set to not have strange behavior.

You would need to add a Verified field for each config option so you can tell whether users set it or it's a default.

Alternatively, you can use pointers, but that gets clunky because you can't do

    conf := sarama.Connect(Config{ ClientID: &"foo"})
You have to declare a variable above and then take its address (see the sketch below).

Zero values are sometimes valid config options, but not what most people actually want to do. Structs cause issues because there's no way to specify just a couple of fields in the struct; the rest will get zero values.
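
The pointer workaround ends up looking something like this (Config and ptr here are illustrative, not Sarama's API; ptr is a helper you write yourself with Go 1.18+ generics, not a stdlib function):

    type Config struct {
        ClientID        *string
        MaxMessageBytes *int // nil means "not set", so 0 stays meaningful
    }

    func ptr[T any](v T) *T { return &v }

    func example() {
        // Without a helper: declare a variable just to take its address.
        clientID := "foo"
        _ = Config{ClientID: &clientID}

        // With the helper it reads better, but every consumer of the
        // config still has to nil-check each field.
        _ = Config{ClientID: ptr("foo"), MaxMessageBytes: ptr(0)}
    }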


The problem with your solution is:

1. If the verified field is not set, your program blows up at runtime. I don't think it's acceptable for programs to blow up at runtime for errors that can be caught at compile time.

2. Your example still has the same problem when it comes to `MaxMessageBytes`: what if I really meant to set it to 0? The `WithDefaults` function would eat that. You could use a pointer, but then that just complicates the API (you need to use new(int) or declare a separate variable and assign its address).


The blog post that introduced it (https://commandcenter.blogspot.com/2014/01/self-referential-...) mentions

  func DoSomethingVerbosely(foo *Foo, verbosity int) {
    prev := foo.Option(pkg.Verbosity(verbosity))
    defer foo.Option(prev)
    // ... do some stuff with foo under high verbosity.
  }
but I don’t see why that’s better than

  func DoSomethingVerbosely(foo *Foo, verbosity int) {
    prev := foo.setVerbosity(verbosity)
    defer foo.setVerbosity(prev)
    // ... do some stuff with foo under high verbosity.
  }
It also mentions that it allows you to “set lots of options in a given call”. That, you could sort of accomplish by having the setProperty methods return the changed object, thus allowing chaining (e.g. foo.setVerbosity(v).setDryRun(true)).

This allows both that defer and setting lots of options in a single call.

Given the limited features of Go, it's a nice hack, but I don't like it. To me, it doesn't feel like it fits the philosophy of Go.


Your chaining example and the defer example can't both work together, since they rely on different return types for setVerbosity.


I think there are no benefits; it is just a workaround for the lack of named arguments and default values for struct fields.


It also means there's no worry about breaking changes to a `Config` struct


Breaking changes are good. They force people to consider the change.


Perhaps this is the kind of problem that a macro system, like the one in Common Lisp, allows you to solve, paying the price in a little more compilation time but not at runtime.



