I don't agree that Lombok has brought us out of the dark ages. We used to use it, but it has some drawbacks. One of them is that it's an additional dependency. For something as simple as a POJO, that seems overkill to me. The small amount of time saved writing code isn't worth the risk of additional bugs hidden in one more dependency.
With Lombok I always felt that it was trying to fix a symptom while ignoring the underlying problem.
If you don't like writing out getters and setters in your code, then consider not implementing them at all. Just use value classes with public fields. Or nowadays you can also use records.
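For example, a trivial sketch of the difference:

// JavaBean-style carrier: pure boilerplate
class PointBean {
    private int x;
    private int y;
    public int getX() { return x; }
    public void setX(int x) { this.x = x; }
    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}

// Record (Java 16+): constructor, accessors, equals, hashCode and toString come for free
record Point(int x, int y) { }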
Some people use the argument that the advantage of getters/setters is that you can add custom logic to them. But IMO that is an antipattern. And even more so if you then hide that enriched setter or getter using Lombok. Code should indicate to its readers what it is doing; it shouldn't try to hide stuff that could be important.
I feel that when I programmed in C++ before I moved to Java, it was much more common to include more logic in your setters than just assigning the argument to the field. That was almost twenty years ago, I was less experienced, and the C++ team I was on was small and inexperienced too, so maybe that's a misconception. But certainly, in Java additional logic in setters is frowned upon; but so are public fields. So it's really a cultural thing in that community. Thank goodness for records :)
I never particularly liked Lombok either, but I do appreciate what it does, which is to remove a lot of boilerplate code related to setters, getters, hashCode and equals methods, builder classes, etc. Writing that kind of boilerplate manually increases the likelihood of bugs and inconsistencies and allows other weirdness to creep into your code base. Also, it's rather mindless work.
These days, I use Kotlin, which removes the need for Lombok while still being able to play nice with it. The most recent version of Kotlin actually added a compiler plugin for Lombok annotations that makes it easier for people with legacy Java code bases with Lombok to introduce Kotlin to their code bases.
Kotlin doesn't have direct support for lenses, but they are a pretty popular feature in some frameworks. Arrow has an implementation, for example, and Arrow is of course inspired by Scala. I've also used the Fritz2 framework for browser UIs; it generates lenses at compile time. I think they are a bit of a double-edged sword: a lot of complexity for not a whole lot of gain. I prefer keeping things simple. This stuff has the distinct taste of over-engineering to it.
I hope they will find a better solution, as this proposal will break encapsulation.
Let's say you model allowed transitions via business methods and thus restrict certain state changes. You can either serialize/deserialize state or perform an allowed transition. If the wither becomes a language feature, it will allow forbidden state changes that cannot be caught by validation in the constructor: the constructor will happily allow deserialization of states A, B and C, but it cannot know that A->C is forbidden, thus making the following possible:
var r = new R(A);
r = r with { state = C; } // passes, resulting in untraceable error much later
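For concreteness, the kind of model I mean (a sketch; R, State and the rule are hypothetical):

enum State { A, B, C }

record R(State state) {
    // the only sanctioned way to change state: it sees the previous state
    R transitionTo(State next) {
        if (state == State.A && next == State.C)
            throw new IllegalStateException("transition A -> C is forbidden");
        return new R(next);
    }
}

The canonical constructor can validate each state in isolation (A, B and C are all individually valid), but it never sees the previous state, so it has no way to reject the A->C transition that the with block performs.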
Nope. If records were structures, they would not be allowed to have methods, overloaded constructors, or even a default constructor. The most obvious example of encapsulation in records is validation of state.
>They still don’t have encapsulation, all members are readable on purpose.
Public state does not mean no encapsulation. The purpose of encapsulation is to bundle state and behavior and hide them both behind an interface, but that interface can offer read access to state. The key here is behavior.
>constraints can be upheld
My example above demonstrates a constraint that cannot be implemented in a constructor.
It can't but that's not due to the proposed solution for withers.
The cause is records themselves. Records are immutable on purpose, and you want to add a mutation constraint, which will not play nicely.
The proposed constraint will be triggered in this case:
var r = retrieveFromDb(); // value is A
r = r.with(C); // throws an exception
But this only works if the new state is produced by the wither. If I deconstruct the record manually and reconstruct it with C manually, it'll work. So this offers no guarantee that this transition will never occur.
I would even argue that this constraint can't be implemented at the level of the class, at least if the class is only a data carrier without external dependencies.
>>My example above demonstrates a constraint that cannot be implemented in a constructor.
>It can't but that's not due to the proposed solution for withers.
This has nothing to do with withers. It can't be implemented simply because it is a constraint on a specific transition of state. There's no transition of state in a constructor.
>Records are immutable on purpose and you want to add a mutation constraint which will not play nicely.
The purpose of immutability is not to create constant objects, but to prevent side effects from sharing mutable objects: state modifications are possible, they are simply reflected in a modified copy of the object. That also means my argument holds just as well for entities modeled as classes with final fields; it has nothing to do with the specifics of records.
Indeed, the fact that we have a constructor from which we can build any valid state, and that we can deconstruct an immutable object, means that we can bypass transition validations. But that requires some extra effort from the developer and an explicit demonstration of intent, compared to simply using a `with` block.
Compare this:
var rA = new R(I1, I2, I3, A); // deserialization, e.g. from persistent state
// verbose, explicit intent to create a copy in state C
var rC = new R(rA.i1(), rA.i2(), rA.i3(), C); // error occurs later
with this:
// no semantics, no validation
var rC = rA with { state = C; } // error occurs later
and this:
// clear semantics, validation of transition
var rC = rA.onSomethingHappened(C); // exception thrown now
I can't find anything in the linked eg-draft that would indicate that the error would "occur later". In fact, it explicitly says:
Note too that if the canonical constructor checks invariants, then a with expression will check them too. For example:
record Rational(int num, int denom) {
    Rational {
        if (denom == 0)
            throw new IllegalArgumentException("denom must not be zero");
    }
}
If we have a rational, and say
r with { denom = 0; }
we will get the same exception, since what this will do is unpack the numerator and denominator into mutable locals, mutate the denominator to zero, and then feed them back to the canonical constructor -- who will throw.
Ah, I see what you mean. It's not about that proposal, but about the fact that record constructors cannot be private (at least not for a public record). That's because records are meant as product types, or nominal tuples, which don't break encapsulation but exist to represent the notion of an unencapsulated tuple. That's their job: to be unencapsulated data with everything that enables.
Now, you may ask why they don't also do other things, and I guess it's possible that in the future we'll allow private record constructors, but there's less need for that, because Java already has a construct for encapsulated state -- ordinary classes.
Regardless of what records were meant to be, they are designed in a way where they do offer encapsulation (which by definition covers not only the state but also the behavior of an object, and does not mean that we always hide the state -- only that we control the interface to it). Records can have business methods and non-trivial constructors, so they are basically syntactic sugar for a special form of classes, not tuple structures. If you can use them to encapsulate certain forms of behavior, e.g. by declaring methods for state transitions, then the with{} block will be a change in their interface. Do I expect that a new version of the language will change the interfaces of my data structures? Hell, no! This feature must be designed as an opt-in solution, following the example of Iterable and the foreach loop, and requiring an explicit declaration of the wither interface. What kind of declaration could that be?
Imagine uniform declaration of intent for records and classes like in this example:
public record Point with (int x, int y) {}
public class Order {
    public Order() with (OrderStatus status, Instant timestamp) { … }
    public @(OrderStatus status, Instant timestamp) { … }
}
Here we explicitly say that Point generally supports a "with" block for all fields, and that Order supports deconstruction to status and timestamp plus construction of a new object with the same fields. This way existing code retains its interface but can easily be modified to support the new syntax.
> they are designed in a way where they do offer encapsulation
They are very intentionally designed to represent unencapsulated data. Records can have non-trivial constructors, but they all have a public canonical constructor, and while you can do strange things in your constructor and accessors (we needed that for technical reasons), the JEP/Javadoc/tutorials warn you against doing so; the reasonable assumption is that you don't.
The invariant is that if you have a record and deconstruct it using a deconstructing pattern, then you can also reconstruct it to get an object that's equal to the first by using the public canonical constructor. You can break that invariant, but libraries are allowed to assume that you don't.
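A small illustration of that invariant, using a trivial Point record and a record pattern (Java 21):

record Point(int x, int y) { }

Point p = new Point(1, 2);
if (p instanceof Point(int x, int y)) { // deconstruct
    Point q = new Point(x, y);          // reconstruct via the public canonical constructor
    assert p.equals(q);                 // the invariant libraries may assume
}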
> If you can use them to encapsulate certain forms of behavior, e.g. by declaring methods for state transitions, then with{} block will be a change in their interface.
But you can't and so it won't. All (public) records have a public canonical constructor that you can use regardless of "state transition" methods, with or without withers. The relevant point, again, is not the "with" feature, but the publicness of the canonical constructor. You cannot limit the construction of a record to a state transition method even today, because you can't hide the canonical constructor.
There are certainly classes that do need private constructors, but if they do, then those classes are not records (we may expand the role of records in the future, but so far they're specifically designed to not allow that so that the reconstruction invariant is maintained).
> Imagine uniform declaration of intent for records and classes ...
There's no need to do that for records, because they have a public canonical constructor, and that is the constructor that's used by the feature.
Well, that is exactly the problem, as I have already explained a few times in this thread. A constructor cannot validate state transitions, so the "with" statement or the block surrounding it will have to do it, breaking encapsulation and likely requiring multiple copies of this code.
But if that’s the case, then a record is the wrong data type for what you’re doing anyway, since you can call the primary constructor at any point with those same un-validatable values.
It’s not “with” that’s problematic. If a record doesn’t work for what you’re trying to do, just use a class.
Though I will say that this is why I generally think ML’s (i.e. OCaml, Standard ML) approach to encapsulation with modules is generally superior to using classes for encapsulation.
Have a look at the proposal. Half of it is dedicated to regular classes, which makes sense, because records are just syntactic sugar in Java at the moment.
What I meant was that we don't adopt a feature just because some other language has it, let alone from languages that don't have the same philosophy as we do of trying to have as few features as we can get away with. Records are overall more valuable than properties (due to various reasons, from serialization to interaction with collections). But once you have records with "withers" the question becomes how valuable is it to also have properties. It doesn't seem like it would be helpful enough to overcome the drawbacks.
BTW, C# didn't add properties as a "lesson." Properties in C# and JavaBeans have the same pedigree: RAD UI composer tools of the 90s [1]. They came to C# by way of VB. So really, Java would be adopting a VB solution, and there needs to be a good reason for that.
A year ago I already laid out all the good reasons. There are many reasons why things like Lombok exist. There are reasons why people loathe writing interminable chains of builder methods.
But sure. None of these are good reasons because something something Visual Basic.
What Java will inevitably end up with is a yet another half-assed approach that only exists for a small part of the language and is not applicable to the rest of it.
> BTW, C# didn't add properties as a "lesson."
That's not what I wrote. This is literally from your link: "Digression: learning from C#"
And look. Right below it, emphasis mine
--- start quote ---
The C# approach was sensible for them because they could build on features they already had (default parameters, properties)
--- end quote ---
And look. "Extrapolating from records" section basically says: Java has nothing, and will need to rebuild everything from scratch for this not to be a half-assed solution. Oh well.
> The C# approach was sensible for them because they could build on features they already had (default parameters, properties)
You've misunderstood. Once you have properties it makes sense to go a certain way. If you don't, it makes more sense to add records and not properties.
Now you don't have to agree with the Java team's decisions. Programmers rarely agree on much. But I think you should at least appreciate the irony that if we had added properties, I would be responding right now to another equally annoyed person complaining that we're not learning the lessons of Go and Zig and Rust, which have refused to add properties, and couldn't we see what an obviously stupid idea it was.
> But I think you should at least appreciate the irony that if we had added properties, I would be responding right now to another equally annoyed person complaining that we're not learning the lessons of Go and Zig and Rust
Most likely not.
Meanwhile C# could build on the strength of what they have in the language. And Java, and I cannot repeat it enough times, will have a half-assed solution applicable only to a small part of the language while keeping the inanity of manual `.of` methods, manual interminable chains of builder methods, and other half-assed solutions everywhere else.
While denigrating other languages and their decisions.
Even the link you provided clearly states this: "And, everything we can do with records but not with classes increases a gap where users might feel they have to make a hard choice; it's time to start charting the path of generalizing the record "goodies" so that suitable classes can join in the fun."
Again: while others have built on the languages features they have, and they are immediately propagated through the language with little to no additional effort, "Java does not adopt strategies from less successful products", and "keep the language conservative". And yet here we are, "learning from C#" and struggling how to figure out a simple (for some definition of simple) addition so that it works with the rest of the language that has languished in the "conservative" land for too long.
Edit.
I also wonder how many of the things that are currently placeholders will be required and will surpass anything C# and other "lesser languages" have come up with: factory, __byname, __deconstructor etc.
Out of curiosity: given that virtually all features in Java were adopted from other languages (almost always less popular ones) so there clearly is no aversion to that, and given the level of experience and success of the Java team, not to mention their clear interest in Java's success -- i.e. they have both the motivation and ability to do what's best for Java -- what possible reason do you think they have to do something that to you seems so obviously wrong?
Moreover, why do you conclude that properties are the only right choice, seeing that Java is far from the only language without them? You should, at best, conclude that some language designers like them and some don't.
> Even the link you provided clearly states this: ...
That refers to pattern matching and withers. Records don't have properties either.
> what possible reason do you think they have to do something that to you seems so obviously wrong?
We are all human. That's the main reason.
That's why instead of a unified way of creating collections (and lists and arrays) you need to manually write out `.of` methods and hope that the authors of the library that provides collections (whether built-in or external) provided those methods.
That's why instead of a unified way of creating objects you need tedious manual builder patterns, manual getter/setter boilerplate or code generation.
That's why <hundreds of low-hanging fruit for DX>
That's why the proposal couldn't re-use existing language features (like C# did in the first approach, and as is acknowledged in the proposal).
So you will end up with what is essentially an object initialisation syntax... but only available in this one construct. And will then spend another five years trying to bring it to the rest of the language, again in a very limited capacity. Because something something "conservative language" and "less successful languages".
> Moreover, why do you conclude that properties are the only right choice
Surely you see that those who have come to the opposite conclusion can be equally convinced that it is you who has made what seems to them an obvious mistake for the very same reason.
If we're honest, we should acknowledge that since there is no actual empirical evidence showing one way is superior to the other here, and since experts have come down on both sides, there's probably a strong aesthetic component, in which case it makes sense for a language to remain true to the aesthetic principles that have proven successful for it.
> Because something something "conservative language" and "less successful languages".
You keep missing the point about the "less successful languages." All of Java's features come from less successful languages. I was merely saying that the fact that some languages have a feature is not a reason to adopt it. It's just that if that language was doing better than Java, then there would at least be some social merit to the argument "you should do it because they do", but otherwise it's not an argument at all because other languages don't do it, so there's simply no guidance there.
"You should do it because some people really want it" is similarly unhelpful because whatever "it" is, there's usually a similar number of people who want the exact opposite, and just as insistently. That is why different languages end up making different choices.
Neither of these is an argument at all. All they mean is that other options exist, but they don't help choosing among them. That some languages do one thing and others do another, that some programmers want one thing and others want the opposite is a given.
Now, from my personal perspective, the uniformity and power you want are better served by ADTs than by properties; it's just that you haven't appreciated the extent of the power that ADTs bring when used as intended (it's not really a distinctly separate construct), nor the significant downsides that properties carry alongside benefits that are quite measly in comparison. Then again, there clearly is no consensus on this, just as there is no consensus on just about anything, and whatever choice is made, some will be unhappy.
Some other things you want may well end up being features in Java, it's just that we believe other things take priority. As for those that won't, our desire to minimise language features if we can help it may be a purely aesthetic choice, but it's one that's worked well for us (just as the opposite may work well for other languages). Without any good evidence that we should abandon it, the fact that some languages don't share our aesthetics is surely not a sufficient reason to abandon it.
I don’t think properties add anything that getters/setters don’t already give you (other than slightly different syntax). I think the “right” way to replace the need for builders is to add support for named arguments.
> Except that you have to write them out by hand for every property you want in your code. Or generate them.
Our analysis has determined that the vast majority of getters and setters are used in classes that are better replaced by records anyway (with all the benefits records bring that go far beyond conciseness, such as safe serialization and correct interaction with collections). The remaining cases are not numerous enough to pose a large enough problem that justifies an ad-hoc language construct, but could be further helped by a more general mechanism, such as concise method bodies (https://openjdk.org/jeps/8209434). The result is a smaller number of much more powerful features.
In other words, rather than making getters and setters easier to write, we simply get rid of the need for most of them altogether, which not only saves us a language feature, but also the downsides that setters and getters (or properties) have. We preferred tackling the problem at its source by asking what causes the need to write so many getters and setters in the first place (Java's lack of good data manipulation constructs) rather than treating the symptom (writing getters and setters is tedious).
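To give a flavor of the latter (syntax from the draft JEP, which is not final -- treat it as illustrative only):

// today
String name() { return name; }

// with concise method bodies (draft syntax)
String name() -> name;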
Not that I want to switch out one compiler plugin for the other, but Immutables gives you withers, if I'm not mistaken. I've not been much into withers, until today that is. But for everything else, Immutables has been doing a pretty decent job at my work outfits.
JDK 8 is now a minority, and if we count only projects still undergoing significant development, it's a much smaller minority. The main reason some projects are still stuck on 8 is that they use old libraries that have made breaking changes, and they don't have the resources to adapt their use of them, let alone take on new ones.
Maybe in the open-source world? In some closed-source company codebases known to me, there are still millions of lines of code on JDK 8, since efforts to port everything to a newer JDK don't necessarily enjoy high priority.
As an application developer in such an environment, one might not even have the possibility to pick the JDK version of their choice, since, as you mentioned, there might be dependencies which haven't been updated. In such an environment, solutions which don't require a JDK upgrade (Lombok, Kotlin, etc.) can certainly help developers.
If the upgrade is feasible, I would definitely use the new builtin language features.
Java 8 is still definitely used in applications that no longer see much development, and it will be some time before it mostly disappears, but those applications don't benefit from new libraries either.
The way I heard it, Java 9 breaks Spark 2.4 badly, with no good workaround to disable Jigsaw (which we otherwise don't care about), and that's a problem for a lot of data pipelines in our monorepo. It sounds like other large projects also had a lot of trouble working around Jigsaw, especially for proxying and mocking, and you are blocked at least as long as any of your dependencies are blocked (which was also one reason the Python 3 migration was so slow and painful).
A lot of code broke on Java 9, but that had little to do with Jigsaw (accessibility remained unchanged from 8 until JDK 16). Code broke because it was not portable and relied on internal implementation details that then changed. Still, Java 8 is now a minority (most applications had to upgrade their non-portable dependencies to new versions).
At least the lucky ones with Android 12 as their baseline can now make use of a Java 11 subset (because, as usual, Google only takes the parts of the standard library they care about).
I’ve worked for 2 companies so far which maintain a bit of Android code and neither used Kotlin. GitHub search for “#android language:Java” reports 49K repositories, while “#android language:kotlin” reports 25K repositories. GitHub is biased towards greenfield projects, not legacy corporate codebases, so I’d expect the ratio of Java to Kotlin code in non-open-source software to be much higher. I wouldn’t be surprised if it’s more than 10 times higher.
In fact, no, it is not a runtime dependency; your code will never see Lombok, only its effects.
Lombok IS a compiler extension. It exploits one of the weirdest features introduced in Java 1.6: you can instruct javac to call a custom extension whenever it encounters a certain annotation. The annotation doesn't even need to be on the application's classpath.
Weirdly, I hadn't seen this feature widely exploited by anyone _until_ Lombok came around.
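For reference, the hook is the annotation-processing API (JSR 269). A minimal processor looks roughly like this (MyAnnotation is a hypothetical annotation):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// javac discovers this via META-INF/services/javax.annotation.processing.Processor
// and invokes it in every round in which the named annotation appears.
@SupportedAnnotationTypes("com.example.MyAnnotation")
@SupportedSourceVersion(SourceVersion.RELEASE_17)
public class MyProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "saw " + e.getSimpleName());
            }
        }
        return false; // don't claim the annotation; other processors may still see it
    }
}

Strictly speaking, this standard API only lets a processor generate new files; the weird part of Lombok is that it reaches into javac's internal AST to modify classes that already exist, which the public interface was never meant to allow.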
Lenses are cool, but they make me wonder: how many different paths of a deep immutable data hierarchy are transformed in an application to make this abstraction worthwhile, as opposed to replicating the entire traversal at each location? If the number is small then, however cool, this abstraction may be more trouble than it's worth. Just because you can reify some concept in an elegant composable construct doesn't mean that you should, in the sense that it saves you a significant amount of effort.
One way the number of paths could be (arbitrarily) large is if the transformations originate in user input, and the user can choose to arbitrarily transform any node in the structure. But then that input isn't typechecked (because it's not part of the program code), and requires interpretation and validation, so you might as well do the process of interpretation, validation, and transformation in one go using reflection. You don't lose the typechecking that you don't have in the first place, and the result is even more general.
Now if you don't have reflection, lenses could make the interpretation of an input path much simpler (a switch of one level at a time), but Java does have reflection. So lenses solve a problem, but the question is: how big of a problem is it (especially in a language with reflection)?
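For illustration, interpreting such an untyped input path with shallow reflection over record components might look like this (readPath and the dot-separated path format are made up):

import java.lang.reflect.RecordComponent;

// Walks a path like "approval.confirmation" through nested records using
// only their generated accessors -- no setAccessible involved.
// Assumes every intermediate value is a record.
static Object readPath(Object root, String path) throws ReflectiveOperationException {
    Object current = root;
    for (String segment : path.split("\\.")) {
        RecordComponent match = null;
        for (RecordComponent rc : current.getClass().getRecordComponents()) {
            if (rc.getName().equals(segment)) { match = rc; break; }
        }
        if (match == null)
            throw new IllegalArgumentException("no component '" + segment + "' on " + current.getClass());
        current = match.getAccessor().invoke(current);
    }
    return current;
}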
The real power IMHO comes once you start using traversals, which are like lenses but focus 0..n elements instead of exactly 1, and compose well with themselves and with lenses. (It gets better still after that, as you add in Folds, Isos, Prisms, etc., but we'll leave those for now)
Once you have a traversal that can pull out immediate children of the same type (e.g., "given a DOM node, traverse all of its children"), you can use a library of transformations like Haskell's Control.Lens.Plated module from package "lens" and write queries and transformations over arbitrary structures in a very compact way.
I have used this a few times: some examples include walking complex documents to extract particular information from tables, or rewriting every "import" node in a syntax tree, but leaving the rest of the program untouched.
To support Traversals in Java under that get/set form, you would probably need get/set members that were functions to/from T and Array<T>, and then you'd have to write separate compose operators for each composition (sketched after this list):
- lens + lens = lens
- lens + traversal = traversal
- traversal + lens = traversal
- traversal + traversal = traversal
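To make those shapes concrete, a rough Java sketch of the get/set formulation (Lens, Traversal and andThen are hypothetical names, and this is not how Haskell's lens actually encodes things):

import java.util.function.Function;

interface Lens<S, A> {
    A get(S s);
    S set(S s, A a);

    // lens + lens = lens
    default <B> Lens<S, B> andThen(Lens<A, B> next) {
        Lens<S, A> outer = this;
        return new Lens<S, B>() {
            public B get(S s) { return next.get(outer.get(s)); }
            public S set(S s, B b) { return outer.set(s, next.set(outer.get(s), b)); }
        };
    }

    // lens + traversal = traversal
    default <B> Traversal<S, B> andThen(Traversal<A, B> next) {
        return (s, f) -> set(s, next.modify(get(s), f));
    }
}

interface Traversal<S, A> {
    S modify(S s, Function<A, A> f); // applies f to the 0..n focused elements

    // traversal + traversal = traversal (traversal + lens is analogous)
    default <B> Traversal<S, B> andThen(Traversal<A, B> next) {
        return (s, f) -> modify(s, a -> next.modify(a, f));
    }
}

Note how every pairing needs its own operator.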
This is one reason that Haskell's "lens" package has that funky type alias for Lens instead of a record-of-functions, and a mature lens library is one of the main reasons that Haskell is my favourite general-purpose programming language.
You _can_ force such a take on optics into Java. Someone found an implementation of profunctor optics in Minecraft's DataFixerUpper: https://www.reddit.com/r/programming/comments/9lyplq/microso... That subthread has some really good meaty comments in it, if you're interested in this sort of thing.
> (e.g., "given a DOM node, traverse all of its children") you can use a library of transformations ... and write queries and transformations over arbitrary structures in a very compact way.
But in Java you'd do that with reflection. So the question remains: how many such different queries/transformations do you actually have (in the source code, where type checking is available, rather than in user input) to make the effort-saving worthwhile?
Maybe you would rather do that with reflection, but I'd rather use something that isn't fundamentally the same tool that creates security vulnerabilities. Every time you can avoid reflection is a win.
I think there's some misunderstanding between the two of you. Reflection is indeed the sane and convenient solution for working with structures whose type is not known at compile time (e.g. a general-purpose library for DI, ORM, security or serialization). It's definitely not the way to manipulate object trees whose type is known at compile time.
However, the point about complexity is valid: is it really worth patching a strongly coupled solution that violates SRP and breaks encapsulation with a design pattern built around an anti-pattern (setters)?
Records were specifically designed for safe reflection (and safe serialization). What causes security vulnerabilities is so-called deep reflection, i.e. the use of setAccessible, and can then violate various invariants. That's not what I'm referring to here. Reflection ≠ deep reflection.
Remember not to think about it "statically". If you're comfortable with lenses, then the cost of introducing more precise types and modelling all your states goes down, so you do more of it. E.g. maybe it becomes worth using a separate type for the pre- and post-validation versions of some data structure where before it wasn't, because you need to be able to traverse both versions with the same code, but now you can.
True, but that only pushes the question of value down the line.
I'm curious about lenses because Java did have a serious problem that required a solution: working with "simple" data correctly was difficult. The chosen solution was ADTs, so we did buy into that. But the approach being explored for transforming records (https://github.com/openjdk/amber-docs/blob/master/eg-drafts/...) only works one level at a time rather than for an entire path. So I wonder how valuable it would be to have a solution for paths. If the answer is that it's mostly valuable for an approach we haven't bought into yet, then we might not need to consider it just yet.
I've been playing with this idea since around the time generics arrived in Java and never found a good use case for it. The number of transitions an object may have can usually be counted on the fingers of one hand, meaning that you can have a method for each one and retain encapsulation, which lenses break by design.
> Just because you can reify some concept in an elegant composable construct doesn't mean that you should, in the sense that it saves you a significant amount of effort.
The problem this is trying to solve is creating copies of a complex immutable class with some deep fields changed. I do have to wonder, for a case like this, whether using a mutable object with a deep-copy function wouldn't be a better solution, rather than adding extra magic to make it as easy as possible...
Edit: The code in the article would become something like
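(assuming a hypothetical mutable Order with a deepCopy() helper):

Order approved = order.deepCopy();
approved.getApproval().getConfirmation().setUpdatedOn(Instant.now());
approved.getApproval().setStatus(ApprovalStatus.APPROVED);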
The issue is that this potentially copies a lot of unnecessary information. Also, if you depend on object identities (reference equality), deep copying will recreate objects even if they aren't supposed to change.
Furthermore, if you want to chain two update functions, you'll deep-copy parts of the object twice.
Lenses are "smart" in the sense that they only copy what's necessary.
This sounds eerily similar to persistent data structures (e.g. in FP contexts like Clojure): updated values share as much[1] as possible with their inputs, and create new values only[1] when they differ. Granted, in FP contexts this is primarily an optimization, transparent[2] to the programmer actually using those values in a program.
From the perspective of such a programmer, I can’t think of a scenario where I’d want both value and reference equality semantics for the same objects. Is it reasonable to assume that the “smartness” here is likewise focused on making immutability perform well, rather than on use cases where value and reference equality are simultaneous considerations?
1: Handwaves away implementation details. General cautions about abstractions leaking apply.
Yeah, it also originally comes from the FP side. I believe it started because people were looking at how to easily make immutable updates to deep structures, since the normal way is pretty boilerplate-heavy compared to the mutation way of just chaining accessors for an update.
And you're close: it's about using immutability to make value equality cheap by making reference equality a proxy for it. If you don't use mutations, reference equality (which already implies value equality) becomes a reliable fast path, because shared substructure stays shared. That means you can often get away with a single pointer comparison instead of completely traversing both structures. This is e.g. what React does to determine whether the arguments of a component have changed.
(Well, at least I also struggle to think of a scenario where both types of equality are semantically important).
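In Java terms, it's the familiar identity fast path at the top of equals, which immutability plus structural sharing turns from a micro-optimization into the common case (Node here is hypothetical):

final class Node {
    final String value;
    final java.util.List<Node> children;
    Node(String value, java.util.List<Node> children) { this.value = value; this.children = children; }

    @Override public boolean equals(Object o) {
        if (this == o) return true; // one pointer comparison; hits often when structure is shared
        if (!(o instanceof Node n)) return false;
        return value.equals(n.value) && children.equals(n.children); // full traversal only as a fallback
    }

    @Override public int hashCode() { return 31 * value.hashCode() + children.hashCode(); }
}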
A business method on Order would do the same without any unnecessary overhead, while ensuring proper encapsulation and a consistency of state that cannot be achieved with getters and setters.
I believe the intention is to allow operations on multilayered business objects to be composable. So your approvalConfirmationUpdated() method may actually be made up of a sequence of smaller operations, each of which I might want to use elsewhere in the application. Lenses allow this to happen succinctly, and work better with e.g. the Java streams API.
I got that; my point is that such composability is rarely better than good old OOP with encapsulation. Enumerating all possible state transitions on an object's interface leads to better design than procedural programming (and composable operations are procedural programming).
I don't think I understand. We're talking about how to implement this. That method will still have to use deep copying + mutation, lenses or something else internally.
Lenses are an abstraction over what you mentioned. For a single example it's of course over-engineering. The benefit is that when you have a bunch of methods like that, you can avoid duplicating the code responsible for copying the inner layers. Otherwise, if you e.g. add a layer, you have to touch all those methods.
If you look at my example you will notice that it has zero lines that would be duplicated in real life scenarios, because it does not perform a deep copy (a benefit of using immutable objects).
Imagine you changed it from an array of approvals to a single one, as in the original example: you'd need to make a change in every method of that type (replacing the map). That's code duplication; it's just not really obvious yet because there are only two layers to pass through. Per layer you need a constructor call (or a map to copy and modify the array).
As a more obvious example, if you want to modify a.b.c.d.e (which isn't unrealistic), you'll need to call the constructors of A, B, C and D. If you don't use lenses, this is the code that will be duplicated. You can spread it between the classes or do it all in a method of A, but if you also want to modify a.b.c.d.f, you'll need to duplicate all that code (add another method to A, B, C and D, each calling the constructor).
With lenses, you define once how to access d from a, and then any modification of d can happen through that. If the structure changes, you only need to make the change once, by modifying the lens.
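To make it concrete (hypothetical records A through D):

record D(int e, int f) {}
record C(D d) {}
record B(C c) {}
record A(B b) {}

// hand-written: every leaf update repeats the whole constructor chain
static A withE(A a, int e) {
    return new A(new B(new C(new D(e, a.b().c().d().f()))));
}
static A withF(A a, int f) {
    return new A(new B(new C(new D(a.b().c().d().e(), f)))); // same chain, duplicated
}

// with a lens from A to D (see the Lens sketch upthread), the chain exists once:
// Lens<A, D> aToD = aToB.andThen(bToC).andThen(cToD);
// A updated = aToD.set(a, new D(42, aToD.get(a).f()));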
1) A change in cardinality is such a change in the domain that lenses won't solve it. There will be much bigger changes in the business logic, probably making the original code obsolete regardless of the pattern used.
2) Deep tree modifications of the kind you mention indicate problems with the architecture. Why would you need the whole typed tree (not a DOM or something, but an object tree) to modify a tiny leaf of it? If you touch multiple leaves with the root as the closest integration point, why is your data model designed like that? It points to strong coupling of different contexts.
3) Most importantly, the code that can be reused can be extracted to a private method of the entity where the change occurs. Calling nested constructors on the root object breaks encapsulation -- it should pass the message to nested objects instead.
If you have an immutable structure, you can't just modify the leaf of it. Unless you're arguing that deep, immutable structures are an issue in themselves?
Oh, we cannot modify it, of course, but we are talking about the ways of producing a modified copy.
The part of the argument about deep immutable structures is just a side remark: building them for modification likely points to a problem with the design, but that's not the most important part.
The important part is that it is better to call business methods than to externalize the whole deconstruction-modification-construction dance to a lens or some other code, when you have the full power of OOP to do it right.
Kotlin alone won't implement these features for you. You'd end up with something like this:
val order = loadOrder()
val approved =
    order.copy(
        approval = order.approval.copy(
            confirmation = order.approval.confirmation.copy(updatedOn = Instant.now()),
            status = ApprovalStatus.Approved
        )
    )
I think I prefer this explicit deep cloning over the automagically generated lenses in standard code, but I can see the appeal. The problem is that mutated copies of immutable objects with deeply nested properties that need to be updated just require a lot of code.
On the other hand, if I were to design the application in Kotlin, I wouldn't use such a deeply nested object graph if I was going to constantly duplicate these objects. I also think constantly creating and destroying immutable objects is a sign of immutability being used in the wrong place; I don't know why you wouldn't simply mutate state if the goal is to mutate state. With Kotlin, but also Java, you could then very easily achieve these changes.
Lenses are available as a package for both Java and Kotlin so the concept is hardly language specific. People find a use for them in both languages, even if I don't see the point, and that means just switching to a better language isn't the solution here.
I like to call this "direct code". It does exactly what it looks like it does.
Lenses do not; they introduce a whole abstraction around something as simple as get/set. For this reason, the Kotlin code above would be much preferred IMO.
I agree, but the Kotlin code isn't that different from the Java code in the blog. The biggest difference is that Java lacks the necessary built-in support and relies on Lombok to provide the necessary copy helpers, but I can't think of a reason for a Java project not to use Lombok.
I see Kotlin as a better Java, but with the right compiler extensions (Lombok, Manifold) you can turn Java into a pretty modern language as well, if you're not stuck writing legacy code on Java 8/9/11. The specific approach this article is about doesn't really put Kotlin at an advantage, nor does it place it at a disadvantage. You might as well write the lens pattern in Python or Rust; the idea is still the same.
For the example provided, I would encapsulate that logic in the object itself, returning a confirmed copy.
And as someone else pointed out, I also much prefer explicit code where you can easily understand what is going on…
I agree that I'd approach this in the form of (`val completed = order.approveAs(currentUser, Instant.now())` or maybe `val completed = orderService.approve(currentUser, Instant.now())`) myself. That doesn't require any specific programming language either. I was just using the example in the blog.
However, I don't see Kotlin as a language that sticks to explicit code where you can understand what is going on. Between extension functions, companion objects, infix methods and other syntactic sugar, I would say Java is much better in terms of code explicitness.