I had a hard time getting your examples into a state that worked.
Even after that, I don't see how what you are doing improves over a switch. It actually looks less readable to me, with no gain in functionality at all.
Your example isn't actually working, in that you've initialised the field type to an empty string, which isn't a valid enum value.
I agree this is slightly less readable than the native switch, but it means the compiler can tell you when parts of your code try handling all enum values but don't, in fact, handle them all.
With a plain switch (and a default case) you'll only find out at runtime, which isn't very useful when developing.
Does that make sense? The compiler warning in the article is probably the best example of what I mean here.
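To make the trade-off concrete, here's a minimal sketch of one way to get that compile-time check in Go. This is my illustration, not the article's actual construct, and all the names (`FieldType`, `FieldTypeHandler`, `Match`) are invented for the example: routing the switch through a handler struct built with a positional literal means adding a variant (and its handler field) breaks compilation at every call site until the new case is handled.

```go
package main

import "fmt"

// Hypothetical enum of field types (names are mine, not the article's).
type FieldType int

const (
	OptionField FieldType = iota
	NumericField
)

// One handler function per variant.
type FieldTypeHandler struct {
	Option  func() string
	Numeric func() string
}

// Match dispatches to the handler for this variant.
func (ft FieldType) Match(h FieldTypeHandler) string {
	switch ft {
	case OptionField:
		return h.Option()
	case NumericField:
		return h.Numeric()
	}
	panic(fmt.Sprintf("invalid FieldType: %d", ft))
}

func main() {
	// Positional struct literal: if a third variant (and handler field)
	// is added, this stops compiling until the new case is supplied.
	desc := NumericField.Match(FieldTypeHandler{
		func() string { return "an option field" },
		func() string { return "a numeric field" },
	})
	fmt.Println(desc) // prints "a numeric field"
}
```

The positional (unkeyed) literal is what turns "missing case" into a compile error rather than a runtime panic.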
You're right, which is why we don't need types, I guess?
I'm being facetious, but if you're using a statically typed language you've already bought into types helping you across your codebase being worthwhile.
Equally, this situation is one where the testing point doesn't work. Your tests for the existing functionality won't, obviously, be testing the new functionality you're creating.
So they won't be able to catch that one part of your codebase doesn't support the new feature, while this solution can help you with that!
Let's say you have a gun at home. This doesn't mean you have to shoot it every time the door bell rings.
>Your tests for the existing functionality won't, obviously, be testing the new functionality you're creating.
Well... duh, that's why you write tests in the first place: to test new functionality. In fact, some people prefer writing the tests before the functionality in question.
>this solution can help you with that!
While making the codebase (subjectively) less readable, at least. This may be a viable option for a pet project, but I wouldn't be happy to see it in a code review with the only justification being "now you don't have to test it" (you will most likely have to test it anyway).
You are presenting it as an exclusive "or" question, when in fact you just do both.
You add a variant, and the compiler will lead you to all of the places where you need to handle that variant.
Or you remove a variant, and the compiler will tell you about all of the places where your assumptions about that variant existing are now invalid, including the tests you wrote when you implemented the variant in the first place.
Welcome to compiler driven development.
Of course it helps when sum types and pattern matching are a first class feature of your language.
I think we're talking past each other. One of the primary advantages of this approach is that when you add a new value to that switch, your compiler will tell you about all the parts of the codebase that need updating to handle it.
Then you can use the compiler errors to help you write the tests for the new feature.
You might say you'd prefer to just search the codebase for parts you need to update, and you're ok with forgetting the odd thing and finding out when it's broken after shipping. That's ok, but I'd prefer to have the compiler help me do this rather than doing the work myself.
That's irrelevant to the task at hand. The linter can't tell you if your type switch can handle every `interface{}` that might be passed into it some day.
With generics coming in Go 1.18, we can finally make a sum type abstraction that is a bit friendlier than the previous alternatives.
Obviously having this in the language would be best, but just having a construct that can check you've handled each case is a big improvement in my mind.
This is just a case of non-idiomatic programming in the first place, and it's not even non-idiomatic "Go" programming, it's non-idiomatic for object-oriented languages in general. The correct solution to this sort of problem is more like this:
    type OptionParser interface {
        ParseValue(data string) (*Value, error)
    }

    type OptionField struct{}

    func (of *OptionField) ParseValue(data string) (*Value, error) {
        return &Value{OptionID: data}, nil
    }

    type NumericField struct{}

    func (nf *NumericField) ParseValue(data string) (*Value, error) {
        value, err := strconv.ParseFloat(data, 64)
        if err != nil {
            return nil, errors.New("invalid value, must be a number")
        }
        return &Value{Numeric: value}, nil
    }
and so on. A non-Go language would use whatever the local Abstract Data Type abstraction is. The parseValue function just melts away entirely, you just call a method on the value you have in hand. And now, instead of having already spent a great deal of design budget on this abstraction, and having a hard time integrating it into any future abstractions you may want to make, you're using language constructs in the way they were designed, and there's more space here to make further adaptations. Generics could even help with some of those perhaps, but they're not needed here.
It's possible the Value type could be helped a bit with generics, but it's not clear. Probably the old-school existing "sum types" techniques would work just fine: http://www.jerf.org/iri/post/2917 But "a field per possible value" isn't necessarily a bad choice, especially if it's only a handful of fields. Sometimes it's the best performing implementation, even if it may seem skeevy at the programmer level, because it involves the least indirection even if it seems to involve the largest (in RAM size) objects, and being able to pass them around by value rather than pointers or interfaces means inlining may end up doing more for you too. In general, nowadays, you're better off using 64 bytes in one place than you are using 16 bytes here and 16 bytes somewhere else through a pointer, even if 64 > 32.
It is helpful to understand the expression problem to understand what's going on here: https://en.wikipedia.org/wiki/Expression_problem In short, OO excels at adding values that implement an existing set of operations, while functional programming excels at adding operations to existing sets of data types. Forcing sum types into a language that doesn't really want them means that you're taking a technique that is good at the wrong dimension (you are adding more data types that fit into the existing operations, which OO is already good at), then having to twist it around in a way it isn't really comfortable with to get it to work with what you want (in this case it manifests as needing to add this code to every method, of which I'm sure you have more than the ones shown; in properly-applied FP you usually get the advantage by adding many fewer clauses to your pattern matches, often just one), when the correct solution was already there.
While I often complain about trying to jam FP into OO without understanding in the context of map/filter/reduce-based programming, this is actually the even more profound difference between the two. It's nice to have the FP toolkit around in an OO language for when you encounter a problem it can help with, but when you're in an OO language and right in OO's wheelhouse, use the OO. Don't jam the FP in because FP is Platonically better or something. It actually has well-characterizable weaknesses vs. object orientation, and at least as it stands now is absolutely not universally better at everything.
(Some up-and-coming languages claim to have solved the expression problem. I have not evaluated any of these claims. More power to them if they are correct!)
I disagree with you that this is non-idiomatic Go code, but that's by-the-by: no one can agree on what the idiomatic way really is, anyways.
On your solution: you're right that you can define separate types for each of these values, then have them implement a common interface. But that ends up getting pretty weird for our codebase, as we pass the types around between a variety of packages, and re-typing them so you can implement the handler code in that specific place leads to some really awkward code organisation. I think it would be a net negative.
> You also have no compiler support for detecting that you missed a case. Again, Go's m.o., so shrug.
The original article is just about getting us compiler support for catching the missing cases, in a way that we can easily change existing switch statements to support it.
It shouldn't be offensive, and it definitely didn't take any of our 'design' budget: it's very simple, and took less than an hour to put together and adopt in the codebase.
Sounds like it's not your cup of tea, but it is a nice improvement for us :)
I was talking about a Value type being a sum type. Your widget type shouldn't be a sum type at all.
For your new widget, you define an interface and you call a method. You do get full compiler support for ensuring that the method exists because if you try to pass the new widget type in to something that defines the interface, and it does not implement the interface, it is a compile-time error.
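That compile-time guarantee can even be made explicit with a static interface assertion. A small sketch (the `TextField` widget and `Value` shape here are illustrative, echoing the earlier example rather than anyone's real codebase):

```go
package main

import "fmt"

// Value is a stand-in for the result type from the earlier example.
type Value struct {
	Text string
}

type OptionParser interface {
	ParseValue(data string) (*Value, error)
}

// TextField is the hypothetical new widget type.
type TextField struct{}

func (tf *TextField) ParseValue(data string) (*Value, error) {
	return &Value{Text: data}, nil
}

// Compile-time assertion: delete ParseValue above and this line
// stops compiling, so a widget can never silently miss the interface.
var _ OptionParser = (*TextField)(nil)

func main() {
	v, err := (&TextField{}).ParseValue("hello")
	fmt.Println(v.Text, err) // prints "hello <nil>"
}
```

The `var _ OptionParser = (*TextField)(nil)` idiom costs nothing at runtime; it exists purely to surface the missing-method error at the type's definition rather than at some distant call site.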
This is absolutely unidiomatic. You pay for implementing a foreign paradigm into Go code, then you pay for that foreign paradigm not doing the task you're asking it to do very well, where the native paradigm does it smoothly and correctly. This is doubly-non-idiomatic code; it's non-idiomatic Go and it's non-idiomatic FP!
Sorry, but interfaces are not a replacement for sum types. Trying to shoehorn every situation where you might want to deal with multiple different types of things into figuring out what common interface they could implement is a pretty terrible pattern (although interfaces are great where they are actually applicable). The Go solution of using reflection on empty interfaces and type switches when the interface pattern fails instead of sum types is a hack that forces you to throw away type safety.
Go <1.18 often feels like a half-finished language, except there are a lot of people ready to jump down your throat about how it's actually good that it is half-finished. Real "ignorance is strength" vibes. And I'm someone who writes Go every day.
As far as I'm concerned, the killer feature is Result type error handling, which forces the calling code to deal with the error — as in your sample code.
Sum types have lots of other benefits, but what I really care about is having Result types instead of exceptions or returned error codes where the error handling can be omitted. Even if you choose to "unwrap" or similar, having high confidence that you can audit a codebase and at least know that every handleable error known to the APIs has been checked, that's invaluable. (Panics which represent non-handleable errors notwithstanding.)
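For concreteness, here's a minimal sketch of a Result type built with Go 1.18 generics. This is my own illustration, not a real library's API: the value can only be reached through accessors that also surface the error, so an ignored error is at least loud and auditable.

```go
package main

import (
	"errors"
	"fmt"
)

// Result holds either a value or an error (illustrative sketch).
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]        { return Result[T]{val: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

// Unwrap panics on error: loud, and easy to grep a codebase for.
func (r Result[T]) Unwrap() T {
	if r.err != nil {
		panic(r.err)
	}
	return r.val
}

// Get is the checked accessor.
func (r Result[T]) Get() (T, error) { return r.val, r.err }

func divide(a, b int) Result[int] {
	if b == 0 {
		return Err[int](errors.New("division by zero"))
	}
	return Ok(a / b)
}

func main() {
	if v, err := divide(10, 2).Get(); err == nil {
		fmt.Println(v) // prints 5
	}
}
```

Note that, unlike Rust, nothing in Go forces the caller to touch the Result at all, which is exactly the weakness discussed below.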
> Result type error handling, which forces the calling code to deal with the error
That only works if you remember to deal with the Result type itself, which is something you're likely to forget to do if you're prone to forgetting to handle values when they come to you.
Which means that as far as I'm concerned, the killer feature requires that the code not compile unless the error is handled. Even if the handling is just to discard the error, so long as I can audit the codebase and discover all the places where an error was discarded.
That requires a language to have a special concept for errors, which is generally considered a bad idea. The Result type itself exists precisely to avoid introducing a special error concept into the language, and instead lets humans build up constructs that only make sense to humans, rather than machines, out of lower-level language features.
Realistically, if you are prone to forgetting to handle the state in your application then you will be prone to forgetting to consider values of all kinds, including the Result itself. That is the achilles heel of the Result type, leaving it to be far less useful than it seems like it should be on the surface.
A language that will fail to compile if you forget to handle all the states of your application sounds wonderful, but I don't know of any languages that have even come close to solving that problem. That is an excruciatingly hard problem to solve, and maybe isn't even solvable without human-level intelligence.
    warning: unused `Result` that must be used
     --> src/main.rs:7:5
      |
    7 |     foo();
      |     ^^^^^^
      |
      = note: `#[warn(unused_must_use)]` on by default
      = note: this `Result` may be an `Err` variant, which should be handled
The quick fix is to change `foo();` to `foo().unwrap();`, which panics if the call returns an Err variant. But seeing `unwrap` in the code lets me know where an error may occur. That's better than the silent failure we get in many other languages when a result code is left unhandled or a potential exception isn't wrapped in a `try` block.
Yes, like I said, you can bake additional features into the language, but that is beyond the topic of the Result type. Go, with some modification, could catch the same type of mistake even without Result's existence.
It doesn't have to be fully specific to the error type, you can have a "must use" attribute that can be applied to any type, and then support for that in the compiler. This allows it to be used for other things like Option as well, and anything else that should cause an error if ignored.
Certainly you can bake monadic-type behaviour into the language, but at that point the Result type is superfluous and provides a weaker interface than what language constructs could provide on their own.
Except, notably, sum types and pattern matching are language features that are not associated with Java at all. They are more closely associated with ML and functional programming languages.
Java only got something approaching sum types in Java 17 with sealed classes, which came out in 2021.
Standard ML has had this feature since the 1980s!
You might want to consider that there are type system lineages that are quite different to the one presented by Java.
Just curious on this, what about this do you think is a mistake?
You could do this before without generics, but you'd be unsafely casting types (which can panic at runtime) or assigning to a closed-over variable, which is quite indirect.
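What "unsafely casting" looks like in practice (my own toy example, not code from the thread): an `interface{}` value plus a type assertion that compiles for any type and only fails when it actually runs.

```go
package main

import "fmt"

// first returns the first element, erasing its type (pre-generics style).
func first(items []interface{}) interface{} {
	return items[0]
}

func main() {
	items := []interface{}{1, 2, 3}

	// The assertion compiles regardless of what's in the slice;
	// a wrong guess only surfaces as a panic at runtime.
	n := first(items).(int)
	fmt.Println(n + 1) // prints 2

	// first(items).(string) would also compile fine, and panic here.
}
```

With generics, `first` can return the element type directly and the wrong-type case becomes a compile error instead.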
Yeah, I think this is personal preference at this point.
I find TS to be pretty great, especially in how you can use the type-system to help you build your code. If your goal is not that, then we'll be aiming for different things.
> is heavily discouraged both by best practices and by the api
On this though, I'm unaware of generics being discouraged? Given they haven't even been released yet, and the amount of effort that's been put into building them, I don't think there's any reason to believe they're discouraged.
Generics were originally not included because the Go designers felt that it was not necessary, and things could work without them.
Even though this opinion has changed, I don’t think it’s been completely reversed, just weakened. So I still think “Idiomatic Go” would be to avoid generics, except in some specific situations. To me, Generics feel like “smart code” in any situation except the most basic and the predominant mindset I’ve come across is that Go code should be dumb and easily understandable (if verbose).
I think the situation with TS is different. I also like TS (but haven’t used it as much), but tend to write different code with it than I would with Go. I don’t think there is any one clear winner and I find it more helpful to write code as expected for the language chosen, rather than trying to apply the ideas I loved from every language to every other one (as a user, obviously language implementors should be open to inspiration). Otherwise I would probably spend a significant amount of time poorly reinventing Rust Results in every language I use, which would temporarily confuse everyone else who read my code and not be as robust.
That is the thing: Go could have learned from the past and been designed properly from the get-go, instead of accumulating hacks to fix design shortcomings.
I think (at least from my perspective) the issue here is that sum types are not first-class citizens supported by the compiler; for example, non-exhaustive matching in Rust would result in a compile-time error. Implementations like this are a mere approximation, and it takes a while to grok exactly what is going on (at least for me).
How does this affect performance? Maybe sticking with switch statements and writing a linter of some sort to do exhaustiveness checks would be a better idea?
I should say, we have absolutely no performance requirements, at least in terms of what this will do.
If we could add 10% developer efficiency for 20% performance hit, we'd take that in a heartbeat. The nanoseconds this construct might cost are just irrelevant to us, our customers, or our bottomline.
Obviously if you're writing some low-level performance sensitive code then this won't be free, but I expect most people are in our situation.
> Even after that, I don't see how what you are doing improves over a switch. It actually looks less readable to me, with no gain in functionality at all.
https://gotipplay.golang.org/p/xpuAIC5xdL_7