Dart 3.1 and a retrospective on functional style programming in Dart (medium.com/dartlang)
146 points by mdhb on Aug 17, 2023 | 194 comments



> It’s typical to listen to this stream of events and use chained if-else statements to determine an action based on the type of the events that occur.

You'd think something like directory watching would have a clear set of events that would make nice objects with consistent meanings, but in my experience file watching gets crazy complicated, and can have all sorts of edge cases.

Just take a look here for all the various edge cases that crop up: https://github.com/paulmillr/chokidar/issues

Then you have Linux, Windows, and macOS, and maybe you want to abstract over some underlying implementation like chokidar vs fb/watchman vs webpack/watchpack. Every new OS release could also cause things to change. A big leaky abstraction.

So usually it's going to be a bunch of if-else statements hacked together to get around edge cases, which have to be revisited later on.

Any attempt to abstract this into objects just obfuscates things. And OO forces you to name things, when in fact they might be un-nameable. `FileSystemModifyEventExceptWhenXAndYAndSometimesZ`.

The behavior might rely on a series of events together, so the object hierarchy must be re-worked.

OO has this rosy idea that we just have to come up with the perfect hierarchy, but things change in unexpected ways, and everything must have a descriptive noun.


For what it's worth, I've written plenty of code using file watching. I did a build system that needed to watch the file system to pick up file changes.

You're absolutely right that file watching gets really hairy really fast. Sometimes you won't get events for files. Renaming directories can make things get all kinds of weird. Sometimes you'll get a flurry of updates for the same file and you need to debounce. It's a mess.

But the actual atomic primitive events themselves I've found to be fairly simple. It's basically just write, delete, and rename. I think modeling that as a sum type of a sealed class hierarchy works fine.
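
Something like this minimal sketch (Dart 3; the event set and names are illustrative, not an actual API):

    // The atomic events as a sum type: a sealed hierarchy that a
    // switch can match exhaustively.
    sealed class FsEvent {}

    class Write extends FsEvent {
      final String path;
      Write(this.path);
    }

    class Delete extends FsEvent {
      final String path;
      Delete(this.path);
    }

    class Rename extends FsEvent {
      final String from, to;
      Rename(this.from, this.to);
    }

    String describe(FsEvent e) => switch (e) {
      Write(:var path) => 'wrote $path',
      Delete(:var path) => 'deleted $path',
      Rename(:var from, :var to) => 'renamed $from -> $to',
    };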

Most of the problems are a level above, where the set of events is hard to parse meaningfully.


I've long thought it was less OO that was problematic, and more the idea that you can make a taxonomy that covers your entire problem domain.

Worse, people seem to get caught up in the taxonomy for the sake of the taxonomy.


Well, OO (as practiced, not what Alan Kay had in mind) is all about making a taxonomy.

And the real problem, I think, is that not all domains map to hierarchical relationships. Often it's forced, like trying to force a square peg into a round hole.


I mean, yes, but this is like complaining that so much of functional programming as pushed by strangers on the internet is about building your own algebra over the data in your system.

Both ideas can be great, but I push more for both of them to be used in modeling the system. If there are different sections with "agency" in your system model, OO can make a ton of sense to work with those parts. If you have data getting passed around and worked with a lot, FP makes a ton of sense.


>I mean, yes, but this is like complaining that so much of functional programming as pushed by strangers on the internet is about building your own algebra over the data in your system.

I think it's not the same, because a taxonomy is a thing with limited descriptive power, ultimately leading to a more or less arbitrary mapping forced upon one's program design.

Whereas "building your own algebra over the data in your system" has unlimited descriptive power, and is an abstraction over anything we do when we process data in a program.

So, if we take "going to places" as a metaphor for programming, the first is like giving you the laws of motion and the ability to move in 4 dimensions as a tool (algebra), and the other is like telling you "you need to use a truck to travel, regardless of whether you're going from LA to San Diego, from the bedroom to the WC, or from one floor to another" (OO).


Apologies, you seem to think I'm throwing them both under a lot of the same bus. That was not my full intent, and I will gladly cede that academia pushed OO far harder and to worse results than many functional things were pushed.

That said, I think you are giving "algebras of data types" a bit too much credit here. The vast majority of the coding that is done will not be made better by elaborate ways of coding things above and beyond a relational language. Having an algebra is a happy convenience when it works. But I have been bitten by far too many failed attempts at making them work to think it is "unlimited descriptive power."


Yeah, I'm talking more about algebra vs taxonomy in general than about the algebraic data type facilities in various (functional or not) languages (which could be quite limited for non-algebraic reasons). I mean any transformations we do on the data still constitute an algebra, even if it's not encoded/enforced in the type system.


The thing is that OO forces taxonomy on you, especially if you have to deal with lots of inheritance.


It doesn't have to, though, if you are willing to not try and encode the world. Type systems, in general, seem to hit the same problem. If not, why?

Keep the hierarchies shallow and be willing to go with descriptive definitions over prescriptive ones, and I think things can work out some. You can have prescriptive interfaces that say what has to happen for something to work, but those should be much tighter in how they are put together.


>Keep the hierarchies shallow and be willing to go with descriptive definitions over prescriptive ones, and I think things can work out some.

In that case you're just doing encapsulation with extra steps.


I'm not sure I follow? Any chance I can get you to expand on that?

By descriptive, I'm less worried about hiding things. Though, yes, that is certainly a hallmark of how people talk about OO. I mainly don't like getting into debates about where in an inheritance tree something would go, even if I do think there is value in small hierarchies. The most bang for the buck is when they help someone implement a new part of the system. The least bang for the buck is when you are doing a lot of coding just to keep an inheritance tree looking a certain way.


>I'm not sure I follow? Any chance I can get you to expand on that?

I mean, if you're trying to avoid hierarchies or keep them small (like, say, 2 levels at most), then what are you using from the OO paradigm, except classes as some kind of bundle for related methods and for encapsulation?

I'm not saying hierarchies are good or OO is good. Just that OO-while-avoiding-hierarchies doesn't seem worthy of being a paradigm. Might as well ditch OO altogether; there are other ways to get the things you're keeping from OO (e.g. Rust or Go style).


OO with minimal hierarchies is basically leaning into metaphor for modeling techniques. It is akin to how a large number of informal models have worked for a long time.

If you are focused on just the code organization side of the implementation, I can mostly agree with that. Even if I have grown fond of Java's much more predictable nature on where to find things. Modules or any other organization would have probably been fine.


The hierarchy always looks nice on the whiteboard but always breaks down as soon as you start to write it out in code.


Directory watching would be more of a "finite state machine" kind of handling than "reacting to a set of possible fs events" handling.


From the article:

Functional languages generally implement algebraic data types by pattern matching over the cases in a sum type to assign behavior to each variant. Dart 3 achieves the same with pattern matching in switch cases, and takes advantage of the fact that OO subtyping already naturally models sum types.

I like that they're saying this part out loud, while languages like C# have been able to do this for years but they never came out and spoke about it in this way. It might have made folks get excited about C# when everyone was really excited about how nice Rust's enums are (they're still exciting to me though).


Does it (Dart, C#) support exhaustiveness checking for such matching over subclasses?


I don't know about C#, but yes, Dart has quite sophisticated exhaustiveness checking over sealed class hierarchies, including hierarchies that are DAGs and not just trees. It also handles record/tuple types and generics, nested arbitrarily deeply, and patterns that call arbitrary getters on objects.

For example, if you have this class hierarchy:

    sealed class A {}
    sealed class B1 implements A {}
    sealed class B2 implements A {}
    class C1 implements B1 {}
    class C2 implements B1, B2 {}
    class C3 implements B2 {}

    //       A
    //      / \
    //    B1   B2
    //   / \  / \
    // C1   C2   C3
Then it understands that this switch statement is exhaustive:

    void test(A a1, A a2) {
      switch ((a1, a2)) {
        case (A(), C1()): print(1);
        case (C2(), C2()): print(2);
        case (C3(), B2()): print(3);
        case (B1(), B2()): print(4);
      }
    }
Because they cover the Cartesian product of all possible combinations of subtypes like:

            a1
                   A
                  / \
                B1   B2
               / \  / \
             C1   C2   C3
    a2     +----+----+----+
         C1|      1       |
      B1<  +----+----+----+
    A<   C2|    |  2 |    |
      B2<  +    +----+ 3  +
         C3| 4       |    |
           +----+----+----+
This can extend to arbitrary dimensions based on how many record fields there are.

Figuring that all out was quite the challenge! The full design is here:

https://github.com/dart-lang/language/blob/main/accepted/3.0...


thanks tons for that amazing comment. Helped me tremendously. Illustrations are incredibly helpful in cases like this.


You're welcome.

The real challenge is that pictures like this only work for simple two-element tuples. Once you have more fields in the tuple, or nested destructuring, the manifold of the value space the patterns have to cover quickly gets into higher dimensions that are basically impossible to visualize.


Yeah but you helped me over the initial hump. Much easier to get to the more involved cases now.


Bravo! This is awesome.


https://dart.dev/language/class-modifiers-for-apis#the-seale...

That first paragraph, I believe, addresses your question, with an excellent example from munificent in a sibling comment.


Not sure about Dart, but for C# I don't think it can (due to classes being open by default).

If anyone is interested in a deeper dive on C# and adding Discriminated Unions, this interview by Nick Chapsas with Mads Torgersen is a great discussion: https://youtu.be/Nuw3afaXLUc?t=3939 (note, I'm pretty sure this time stamp is the right part, but the discussion might start before this. I'm at work and can't listen too closely to the video atm).


After re-watching, exhaustiveness is not a feature yet. If you look above to munificent's comment, you'll see that Dart has it! very cool!


yes, you declare your class 'sealed' (introduced in Dart 3.0)


I'm all for ADTs but the reasons the article gives are all wrong, and in fact I think it makes the code a lot worse as a result.

Object programming doesn't mean "inheritance trees", that's just the (IMHO overly simplified) flavor of object programming that we call "OOP". (I'm omitting a full rant from here, which I ought to put on an article at some point.) Object programming is all about state and behavior, just like functional programming; it only differs in how it lays out the behavior namespace (object programming uses inheritance of behavior through lookup chains, which puts the object in focus; functional programming inverts that and uses the structure of the object to dispatch, putting the function in focus).

Grouping behavior together in a single function like the article suggests is, IMO, an anti-pattern and one that I consider to be one of the great disadvantages of a functional programming style. Bundling behavior together means the behavior is out of the control of the object; the object's identity now becomes a critical part of the operation being performed. IMO, putting the focus on behavior is the wrong approach here.

Don't get me wrong, I'm all for some features in functional programming; immutability is great for being able to make assumptions about program state and re-entrancy, for example. But I think the tendency to "lock" behavior into functions and not letting objects define their own behavior is a major shortcoming.


Making everything an object is a cute idea but the fact of the matter is, most of the time data is just data, there's no associated behavior.

Let's say you have a Player object that has an Inventory (which can also be an object), and the inventory contains items like Swords and Bows. When a player wielding a sword takes a swing at an enemy, a particular animation has to play, and the sword-swing and sword-hit sound effects have to play.

Which object contains the code that performs the rendering of the animation? Is there an "Animation" object that contains it? Does it call some setPosition/setRotation object in the sword and the player? Does the Animation object know how to call the appropriate Metal/Direct3D/Vulkan APIs or is that another object?

Or is the right solution to have an animation subsystem that doesn't care at all about "players" or "swords" and is only concerned with mesh data and how to transform it? Answer: the latter, which is what every game engine that wants to be even remotely efficient does.

That's not to say the Smalltalk idea of building larger systems out of smaller systems communicating via message passing is bad or anything, the problem is the OO idea of tying data to behavior. There isn't a single OO codebase out there without POD objects.


> Making everything an object is a cute idea but the fact of the matter is, most of the time data is just data, there's no associated behavior.

Data is just objects that don't happen to have any behavior.

You can look at it as there being a continuum between data and behavior:

* Purely just storing raw uninterpreted bits in memory and letting anything read and poke at them.

* Storing raw data in memory but having an API that ensures that the data is in a valid state and maintains invariants.

* Encapsulating data behind an API that gives you flexibility to change the representation without breaking users.

* Attaching additional behavior that happens to use that data directly to the type that stores the data.

Object-oriented languages support the full continuum, but the language itself certainly doesn't require you to go all the way for every type. Different kinds of data and different domains will naturally fit certain points on this continuum.

When you say, "There isn't a single OO codebase out there without POD objects", I consider that a feature of object-oriented languages: They are flexible enough to model a range of state/behavior policies.
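
To make the continuum concrete, a minimal Dart sketch (types hypothetical):

    // Pure data: a record type, no behavior at all.
    typedef Vec2 = ({double x, double y});

    // Data plus an invariant: the constructor validates, the field
    // stays public.
    class Probability {
      final double value;
      Probability(this.value) {
        if (value < 0 || value > 1) {
          throw ArgumentError('expected a value in 0..1');
        }
      }
    }

    // Encapsulated: representation hidden behind an API.
    class Counter {
      int _count = 0;
      int get count => _count;
      void increment() => _count++;
    }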


I really like this perspective, but I think it's more multidimensional.

Often data only needs to be read so new data can be produced. There's no need for encapsulation here. If you want to ensure that you're dealing with data that has certain properties, then you can do that with types, schemas, or by simply reading it.

Encapsulation at a lower level is needed when you mutate state or otherwise own resources, and possibly multiple things need to be updated in tandem. In this case you want operational semantics and not to look at the innards of the data. So the interface you want describes behavior.

And then there is another level where you only care about coordination of messages. The interface is no longer the data/state behind whoever produces or reads the messages. You don't even care about behavior anymore.


An object without behavior is not an object; it's a self-contradictory statement.

Objects are a pattern that appears in all languages (if you are implementing a plugin API of any sort you need hidden data and a shared interface); a language having OO features only makes this easier. Same with FP: you can do OO and FP in C; all FP means is that your functions don't have hidden side effects.

If you make a Point class with getters and setters for its internal data, you've just emulated a record type in a really dumb and inefficient way. The whole "you could replace the internal representation with polar coordinates" example never ever happens.


An object without behavior is simply an object with zero methods. You don't have to treat that as a special case in any way.

Historically, Simula-67 replaced records (in competing derivatives of Algol 60) with classes, precisely because it was clear to the designers that an object is a fundamentally different thing.

And yes, of course nobody actually does this kind of thing with Point. The whole Java-style getX/setY business was never what OO is about. A well-designed OO language shouldn't make you encapsulate data until the moment you actually need non-record-like behavior.


Have you ever heard the expression "a rose by any other name would smell just as sweet"? Contorted to programming it means syntax and semantics are different things and changing the syntax does not change the semantics.

In the case of objects, let's say I agree with you that an object with no behavior or an object with no data are still (semantically) objects. What, then, isn't an object? Am I doing object-oriented programming if my program is a set of objects with no data and a single non-virtual "run" method (semantically functions) being applied to objects with no behavior (semantically records)?

You see my point? Yes, you can model functions and records (syntactically) with objects, but are they actually (semantically) objects? If I can do "object-oriented programming" in C, which has no class keyword to speak of, what am I creating? I'm building (semantic) objects out of (syntactic) combinations of structs of function pointers (semantically interfaces) that accept a void* "this" parameter (needed to encapsulate the object's internal data).

In Haskell (a purely functional programming language) the main function returns IO (), a monad representing an imperative computation. The language itself, conceptually, never executes this monad, the environment does, so it's "pure". But that's just a cute trick; main is an imperative procedure no matter the tricks the language uses to hide this fact.

A (semantic) object is a representation of some concept (abstraction) with its own internal data (encapsulation) that can respond as it wishes to a set of messages (polymorphism). You are only programming in an object-oriented style if the majority of your code is built out of such objects.

Inheritance used to be part of the family and is often still taught as being a core component of object oriented programming but it's widely regarded as a terrible idea these days and you can certainly program in an object-oriented style without ever modelling an inheritance relationship.


Whether to treat records as objects with null behavior, or to treat objects as records with behavior, is a matter of perspective. Both definitions are equivalent for all practical purposes.

To me, what makes an object fundamentally distinct from a record is the notion of object identity + dynamic dispatch based on the same (encapsulation is an optional feature). Now you might say that an object with only data fields doesn't do dynamic dispatch, but that really depends on the language - e.g. in Eiffel, fields can be replaced with methods without breaking clients precisely because the dispatch is always dynamic. Thus every data object is polymorphic, even if that polymorphism is mostly unused in practice. C# still has the distinction, but it's so blurred with auto-properties that it can be treated largely the same. The Java dance with manual get/set to enable polymorphism is IMO rather an indicator that it is not a well-designed OO language - and it is very regrettable that it popularized this hack as "what OO is about".

I also don't think that sensible OOD requires the "everything is an object" mentality in terms that you describe. Indeed, even in the purest OO languages like Smalltalk, e.g. arguments are passed effectively as a tuple, so it still has the concept of plain data that methods pass around and return - it can't be methods all the way down. Making those tuples explicit doesn't fundamentally change anything.


There is in fact associated behavior; it is just moved off the object in an effort to avoid the after-effects of many languages which implement "OOP". Behavior doesn't have to be part of the object directly, in fact I support your idea that objects can be inert, but behavior inheritance is still an extremely useful concept even within the scenario you propose.

As for your specific example, you don't have to add the behavior of actually playing the sound or animating to your `sword` or `player`; I never suggested that the objects themselves having to perform effects is an inherent part of object programming. Maybe you can send `player wieldedWeapon attackSprite` (or `attackMesh`; I like 2D platformers better :) and then the scene can ask the animation to render itself, same with `player target hurtSound` and `player wieldedWeapon attackSound`, and then the sound object can be asked to play itself with a given `audioContext`. Here, we're being as declarative as can be and still giving the objects the freedom to behave however they want within the bounds of the contract they adhere to.

So an object programming style does not, in fact, preclude you from making "POD objects"; it only says that the behavior of an object should be exposed through the object itself, rather than reflectively deciding on what the object should do from the outside.


No, I disagree, because the right behavior is not determined by a single object but by a combination of factors. For example when the sword strikes an enemy all of the following need to happen:

- The check that the "enemy" is even a valid target (who decides that? The player? What if certain enemies are immune to slashing attacks? The sword then? What about the enemy making the decision? What if it depends on the armor the enemy is wearing, does the armor decide?)

- The calculation of the position of where the sword impacted the enemy (who calculates that? it depends on the animation calculation results, the size and shape of the sword and of the enemy, etc.)

- The sound that plays depends on the type of equipment the enemy has (leather armor vs steel armor) and on the weapon of the player (a sword or a hammer would have different sounds).

- The actual playback of the sound depends on all of the decisions above (the animation, the collision, the materials involved in the collision).

At no point, in any of this, are there messages between these objects. The entities that have to make these calculations depend on multiple data sources, none of which belong to them, that is, the behavior is not associated with the data.

An animation doesn't render itself (rendering requires a rendering context, mesh data, texture data, shaders, etc., who owns them?). A sound doesn't play itself. Collisions don't calculate themselves.

Inert objects don't inherit behavior because they have no behavior, systems do. And systems are effectively "manager" objects (AnimationManager, AudioManager, CollisionManager, etc.), widely considered a bad practice in OO circles.

Systems should indeed support behavior inheritance (I might want multiple different implementations of an AudioManager for different targets or with/without surround sound support, etc.) and since their internal data is hidden (i.e. operating system resources), they are very much objects and working with them is OO programming.

But the rest of the system is not OO, it's very much plain old data being computed upon by plain old functions, and that has nothing to do with OO nor does it need any kind of inheritance.

Even in Smalltalk you see this issue happening. Ints and Doubles respond to messages to implement arithmetic, but for example reading an integer from a stream is a method of the Number class that takes an integer radix and a Stream as input. Why is it not a method of the integer radix or of the stream? Why can't different streams decide how to parse themselves into integers? Because the whole concept of associating behavior here is silly. readFrom is a function.
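
Dart, incidentally, makes the same call: parsing lives in static methods, which are namespaced functions in all but name:

    // int.parse is a static method, i.e. a function in a namespace,
    // not behavior attached to a stream or an instance.
    final n = int.parse('ff', radix: 16); // 255
    final m = int.tryParse('not a number'); // null on failure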


Thanks this is a great example of the kind of problem you run into with OO designs. State is a graph and you very often need to transform a whole subgraph of it in one operation. Traditional OO makes this very difficult.



I cannot upvote that series of blog posts enough. It is exactly what I'm talking about.


Perfect!

What you’re describing are great examples of data oriented or relational problems.

In these cases you really don’t think (or can’t) in terms of objects and interfaces, but in terms of relational (or equivalent) data that you see, combine and manipulate from a bird’s eye perspective.

You don’t just care about the individual pieces and their integrity. Your focus are their relationships, which you build on demand.


Excellently said. You've put into words things I've felt for a long time but never got round to articulating


Let's go point by point.

> No, I disagree, because the right behavior is not determined by a single object but by a combination of factors

Sure, and you can ask one or more of the objects involved about all of those factors!

> The check that the "enemy" is even a valid target (who decides that? The player? What if certain enemies are immune to slashing attacks? The sword then? What about the enemy making the decision? What if it depends on the armor the enemy is wearing, does the armor decide?)

Answer: All of them! Let's walk through it step-by-step:

- The player wants to attack the entity it's pointing at: `aPlayer attackEntityCurrentlyPointedAtIn: world.`

- In order to do this, the player asks the world what it's pointing at first: `entity: world entityPointedAtBy: self.` If nothing is returned, then the player would simply play the shoot animation of the wielded weapon.

- If an entity is returned, the player asks the enemy to be attacked by itself: `entity getAttackedBy: self`.

- The enemy checks whether it's the right kind of entity first (attacking a prop might do nothing, for instance). Afterwards, it asks the source entity about the various factors you talked about (what kind of attack the source's currently wielded weapon would produce, what the armor is, etc.) and responds whether it could be attacked and how much damage was produced. Maybe the enemy could even deal damage back! `source dealDamage: 123 Source: damageSources reflection.`

> The calculation of the position of where the sword impacted the enemy (who calculates that? it depends on the animation calculation results, the size and shape of the sword and of the enemy, etc.)

We already would need the animation context in order to make sense of the animation that would be produced by the attack at this point, so we can just thread it through our messages and ask where on the target entity the strike would happen. Because we would be asking the wielded weapon about the details of what kind of animation it would produce, it can add in any details and modifiers you'd like.

> The sound that plays depends on the type of equipment the enemy has (leather armor vs steel armor) and on the weapon of the player (a sword or a hammer would have different sounds).

> The actual playback of the sound depends on all of the decisions above (the animation, the collision, the materials involved in the collision).

Let's just ask the attack source about what kind of material it is, and then what it would produce if it hit us, and then ask the sound to play:

    weaponMaterial: source wieldedWeapon material.
    "Assuming you need this level of detail"
    ownMaterialAtStrikePoint: strikeSurface material.
    weaponMaterial soundForStrikingAgainst: ownMaterialAtStrikePoint
      ; play: audioContext.
> At no point, in any of this, are there messages between these objects.

I think I was able to prove this wrong. ;)

> The entities that have to make these calculations depend on multiple data sources, none of which belong to them, that is, the behavior is not associated with the data.

This isn't how the real world works though. Every action has a fundamental source and a target. Being able to model how different things interact in our system in such a natural manner like message passing can make everything so much clearer.

> An animation doesn't render itself (rendering requires a rendering context, mesh data, texture data, shaders, etc., who owns them?). A sound doesn't play itself. Collisions don't calculate themselves.

But they do! They are the best at rendering, playing and calculating themselves, because they're the information expert: they hold all the details. The code that sends the `play:` message need not know anything about whether the sound is a waveform or Opus; it need not know whether the sound object is actually a `pitchChange` object that modifies the original sound effect. All it needs to know is that the object can be `play:`ed.

> Inert objects don't inherit behavior because they have no behavior, systems do.

A system is simply a hierarchy of objects, as I detailed above, so they absolutely can have behavior.

> Even in Smalltalk you see this issue happening. Ints and Doubles respond to messages to implement arithmetic, but for example reading an integer from a stream is a method of the Number class that takes an integer radix and a Stream as input. Why is it not a method of the integer radix or of the stream? Why can't different streams decide how to parse themselves into integers? Because the whole concept of associating behavior here is silly.

Actually, it makes much more sense than a `stream readInt` ever could. Why does a stream know how to read an integer? Why does it need to know how an integer looks, and how to create one? It is much more natural to ask an integer to take in a stream and create its own representation, IMO.


> Why does a stream know how to read an integer? Why does it need to know how an integer looks, and how to create one?

Because sometimes you have streams of binary-encoded integers, and sometimes you have streams of text (e.g. JSON), and sometimes it's something else. Why should the integer know how to encode itself in all the possible ways?

A more sensible OO design is to have a separate hierarchy of readers that parse integers on top of abstract streams of raw bytes and text codepoints - this is what e.g. .NET does. But then when you add a new type like BigInteger, how do you expand existing readers?

The real problem is that, fundamentally, certain things cannot be OO-modelled properly without dynamically dispatching over several types. The proper solution to this is multimethods like in CLOS - but multiple dispatch is slow to implement, has complicated corner cases due to dispatch ambiguity, and can be tough to fit into a statically typed system well.
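
For a closed world, Dart 3's record patterns do approximate dispatching on several types at once (that's what the exhaustiveness example elsewhere in this thread is doing). A sketch, with hypothetical types and placeholder results:

    sealed class Shape {}

    class Circle extends Shape {}

    class Square extends Shape {}

    // Dispatch on both arguments at once via a record pattern; the
    // compiler checks that the Cartesian product is covered.
    bool overlaps(Shape a, Shape b) => switch ((a, b)) {
      (Circle(), Circle()) => true, // placeholder logic
      (Circle(), Square()) || (Square(), Circle()) => false,
      (Square(), Square()) => true,
    };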


You've created an extremely inefficient architecture to do what some simple functions and some bytes can accomplish. What was the benefit of the exercise?

About the stream, first there's the fact that it's the Integer class that implements readFrom (and that class has no internal data beyond its type, so it's just a namespace) and the fact that in order to create the integer, the class has to see the data that comes out of the stream (it's just calling getters).

If you think this is object-oriented I have a bridge to sell you.


I'm not sure how this is any more inefficient than what a functional design would come up with. Is asking the objects about their behavior any more inefficient than dispatching yourself based on the characteristics of that object?

> What was the benefit of the exercise?

I don't know, you were the one that asked about it. What was your point?

> first there's the fact that it's the Integer class that implements readFrom (and that class has no internal data beyond its type, so it's just a namespace)

Ah, but it's the _trait_ that would implement that behavior (my original post said that classes are just an over-simplified way to look at object programming, and I stand by that. I'm working within a prototype-based framework). So the integer's behavior would be provided by the trait, as well as the behavior to create it.

> and the fact that in order to create the integer, the class has to see the data that comes out of the stream

I'm not sure how that goes against what I said? Indeed it would.

> If you think this is object-oriented I have a bridge to sell you.

You can have non-object programming style bits in an object programming language. I'm not even sure what your original point was if this is what you're saying: that an object programming environment contains procedural or functional styled bits? I don't disagree with that, but a functional-style programming language will contain object programming-style bits just the same.


My apologies if I wasn't clear; my point is only that the full-blown object-oriented style of programming (meaning one where everything in the program is modeled as its own entity with its own state that responds as it wishes to a set of messages) is silly. The "everything is an object" idea.

Objects with no associated behavior or state aren't objects, *by definition*, and there's plenty of code out there that benefits from data with no associated behavior and behavior with no associated data. This happens even in true OO languages like Smalltalk.

Attempting to model everything as "objects" (read: own private state + behavior) not only may have performance implications due to poor cache usage and multiple indirections, but it also makes modelling certain concepts harder (see: object-relational mismatch). The example I gave is a clear example of the latter, since it is very much a relational problem. (Eric Lippert has some very good articles on the matter.)

That's not to say objects aren't valuable (they appear as a pattern even in functional languages) nor that "live" software isn't a great idea. Personally I think a lot of software would benefit from extensibility, which means plugins at least, which in turn means objects (private state + responding to a set of messages as they wish).


I read this thread through, and I think you both make some good points.

I think (i.e. just my opinion) you are talking past each other. There are situations where I'd prefer @trashburger's approach (instances that react to stimuli) and situations where I'd prefer your approach (functions that capture the effect of applying stimuli on simple data structures).

I don't think it is as clear or as cut-and-dried as either of you are making it out to be.

A certain amount of discretion and restraint is required to prevent an OO architecture that is unreadable and tangled spaghetti, as well as a certain amount of discretion and restraint being required to prevent a functional architecture devolving into an equally unreadable mess.

If the primary goal is neither readability nor maintainability, then all of this arguing is moot.


I feel like this is mostly a straw-man argument, or maybe it comes from a lack of experience in functional programming. Functional programming to me is more of an approach to problem solving and less an organizational one (though it can be part of that). Indeed, you can combine OOP and functional styles very easily; Scala is a great language that did just that, to great success.


It depends if the hierarchy is open or closed.

If the hierarchy is open, the behavior is defined in each subtype and it's easy to add a new subtype, but you cannot add a new operation (a new method). That's the OOP way.

If the hierarchy is closed, the behavior is defined in each function and it's easy to add a new operation (a new function), but you cannot add a new subtype. That's the ADT way.

This is known as the https://en.wikipedia.org/wiki/Expression_problem
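
Both columns of the trade-off, sketched in Dart (hypothetical types):

    // Open: adding a subtype is local; adding an operation means
    // editing every class.
    abstract class Expr {
      int eval();
    }

    class Lit implements Expr {
      final int n;
      Lit(this.n);
      @override
      int eval() => n;
    }

    // Closed: adding an operation is local; adding a subtype means
    // editing every switch.
    sealed class SExpr {}

    class SLit extends SExpr {
      final int n;
      SLit(this.n);
    }

    int eval(SExpr e) => switch (e) { SLit(:var n) => n };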


Well, I am a very strong proponent of keeping the hierarchy open in all possible cases; the object should be the one that defines the behavior in all cases, and an object's behavior shouldn't be restricted just because it comes from outside of the object or within. Therefore I agree with your point here.


There are cases when the OOP way clearly has the disadvantage. That's why you need visitor pattern (https://en.wikipedia.org/wiki/Visitor_pattern?useskin=vector). The ADT approach is superior in those cases, as stated in the link above.
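
For reference, the shape of that pattern in Dart (a minimal sketch; names hypothetical):

    // Visitor: double dispatch that recovers "add an operation
    // without editing the visited classes" inside OO.
    abstract class Event {
      R accept<R>(EventVisitor<R> v);
    }

    class ClickEvent extends Event {
      @override
      R accept<R>(EventVisitor<R> v) => v.visitClick(this);
    }

    class KeyEvent extends Event {
      @override
      R accept<R>(EventVisitor<R> v) => v.visitKey(this);
    }

    abstract class EventVisitor<R> {
      R visitClick(ClickEvent e);
      R visitKey(KeyEvent e);
    }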


There are cases when the FP way clearly has the disadvantage. That's why you need the "record-of-function-pointers" pattern. The interface-and-implementations approach is superior in those cases.


Haha, lovely, clearly we are in agreement here. You even put that in your book, didn't you?

> The Visitor pattern is really about approximating the functional style within an OOP language (source: https://craftinginterpreters.com/representing-code.html)

The expression problem is one of those "mathematical duality" or yin-yang thing in software design that are less well-known for some reason. People keep pointlessly arguing in favor of one or another without knowing this duality.

Another favorite of mine is SQL vs NoSQL which Erik Meijer mathematically proved to be dual of each other using category theory (https://queue.acm.org/detail.cfm?id=1961297).


Are you talking about the specific case in the expression problem article? It seems to mainly be a problem when an object's behavior can't be modified, either through the language faculties or through other means like a module system. I don't disagree with its conclusions but I believe it to be a problem of language capabilities.


Maybe we'd all be happier with multi-methods

https://nice.sourceforge.net/visitor.html


Closing the hierarchy or not depends on the use case: for an application most hierarchies are closed; for a library, most are open.

For example, if your application receives JSON objects, you know all of them, so it's a closed hierarchy.


I think you have it backward:

> But I think the tendency to "lock" behavior into functions and not letting objects define their own behavior is a major shortcoming.

"Locking behaviour" happens when you bundle objects & functions together.

For instance, an OOP programmer might create a binary tree, mark the fields as private, and expose a public visit() method.

If you take away the "objects defining their own behaviour", then you're left with just the binary tree. Then different callers can write their own visit() functions.


Visibility as a concept is one I also disagree with :) I lump it together with the "OOP" classification in my original comment. I agree with you in this regard. The messages an object is able to respond to should not be restricted because of where it comes from.


I feel visibility should be there, but it should also be possible to bypass.

Essentially visibility is a contract, and sometimes you want or need to go beyond. The language should allow this IMO but make it clear that you are bypassing the contract of the original author.

Making things unreachable is just a PITA for no good reason.


If you want to override behaviour, why not just define a sum type wrapping other objects, a master function that matches on different cases, and then smaller functions that decide on what to do for each case? That seems fine to me.

Like this:

    type obj1 = { ... }

    type sum =
      | One of obj1
      | Two of obj2
      ...

    match param with
    | One obj -> f(obj)
    | Two obj -> f2(obj)

----

There is also the handle pattern which I've used with some success. You basically place some functions in a record/tuple and then call those functions when you need to.

https://jaspervdj.be/posts/2018-03-08-handle-pattern.html

Edit: I mean to say that I've used something similar to the handle pattern in an immutable context (having a central function that takes objects that can be arbitrarily nested which carry their own function and data, and the parent object can call its function on a child object, and the child can call its function on its child, and so on...). Hopefully I am making sense.
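
In Dart 3 terms, the handle pattern could be sketched with a record of functions (names hypothetical):

    typedef Logger = ({
      void Function(String) info,
      void Function(String) error,
    });

    Logger consoleLogger() => (
          info: (msg) => print('[info] $msg'),
          error: (msg) => print('[error] $msg'),
        );

    // Callers depend only on the handle, not on a concrete class.
    void run(Logger log) {
      log.info('starting up');
    }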


That means having to code the dispatch manually that OO languages do automatically. It’s kind of the point of OO languages to not have to do that manually, and to have the subtype-specific functions be explicitly associated with each other, and not just implicitly by how they are used.


That's true. I didn't think of it that way.


Well, as my sibling comment says, you have just re-implemented dynamic dispatch in object programming ;). As a response to your edit, note that dynamic dispatch and immutability are not mutually exclusive; an object can have immutable state and can still have its behavior extended by creating an adapter object, for example.


> Bundling behavior together means the behavior is out of the control of the object; the object's identity now becomes a critical part of the operation being performed. IMO, putting the focus on behavior is the wrong approach here.

Why? Do you have an example to better explain?

I find people rarely talk about the evaluation criteria for which approach to programming is better. That is, what are you optimizing for?


I'll respond in reverse order ;)

> That is, what are you optimizing for?

I'm optimizing for other programmers to easily be able to see how objects fit together and behave, and for them to be able to easily extend the system without touching the behavior of other components (which you cannot do if you bundle behavior together). I've slowly shifted towards this approach throughout the last few years of my programming journey.

> Do you have an example to better explain?

Let me make an attempt to do so. Let's say that you have an HTML form builder in code (this is something I'm currently working on, if you can't tell ;). Your form builder lets you design the form such that you can split the form into sections, rows and columns. You might design a registration form like this, for example (I'm using Python-like pseudocode here):

    layout = FormLayout(
        Section(
            title=_("Account information"),
            contents=Column(
                Field("username"),
                Row(
                    Column(Field("password")),
                    Column(Field("password_repeat")),
                ),
            ),
        ),
        # etc. you get the idea
    )
Now, how would I go about rendering this tree into HTML? If I were to take the approach the article suggests, I would have to do something like this:

    def render_element(element: Element) -> SafeString:
        # Let's use Python 3.10's match as an analogue for the Dart switch
        match element:
            case FormLayout(children):
                return "".join(render_element(child) for child in children)
            case Column(css_class, children):
                return format_html(
                    '<div class="{css_class}">{children}</div>',
                    css_class=css_class or "col-md",
                    children=mark_safe(
                        "".join(render_element(child) for child in children),
                    ),
                )
            # case Row, Field, ...
As you can probably tell, this is bound to get super unwieldy over time. And this is just for one property of this HTML form builder; now imagine if we had to do some operation to the tree (like checking all fields in a form are actually rendered)!

You might, at this point, suggest that we could just split the behavior into different functions... and that would just make us re-implement the dynamic dispatch system that's found in object programming already ;) You can see how we tend to move the behavior to the object in this case, because it lets each object only worry about its own behavior.
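
For illustration, the object-style version might look roughly like this in Dart (a sketch; names and markup are hypothetical):

    abstract class Element {
      String render();
    }

    class Field implements Element {
      final String name;
      Field(this.name);

      @override
      String render() => '<input name="$name">';
    }

    class Column implements Element {
      final List<Element> children;
      final String? cssClass;
      Column(this.children, {this.cssClass});

      @override
      String render() =>
          '<div class="${cssClass ?? "col-md"}">'
          '${children.map((c) => c.render()).join()}'
          '</div>';
    }
Each element only worries about rendering itself, and the dispatch comes for free.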

I hope this helped with giving a bit of an idea of why I'm for a more object programming-based approach over the pattern matching described in the article.


It's interesting. I've gone the other direction. Everything as functions.

I've realized that things always change in unexpected ways. Objects are sticky; people just cram stuff into them instead of re-thinking the hierarchy properly.

There probably exists a really nice and cleanly separated object-hierarchy for a codebase at a given point in time, but requirements always evolve too rapidly to enjoy this for too long.


I believe this to be an artifact of our current paradigm of software development, which is based purely around static "batch processing" where the software can't be molded easily. (I didn't want it to come off as a plug, so I didn't add it to my original message, but I'm working on something to solve that.)

To get a better idea of what I think is the better approach, check out these talks:

Tudor Gîrba - Moldable development - https://www.youtube.com/watch?v=Pot9GnHFOVU

Jack Rusher - Stop Writing Dead Programs - https://www.youtube.com/watch?v=8Ab3ArE8W3s


Oh yes, I'm super interested in this space too.

Really waiting for one of these ideas to breakthrough to mainstream.


I think a lot about how these things might look in an ML-family language:

    layout : Element = FormLayout
      [ Section
          { title : _ "Account information"
          ; contents : Column
              [ Field "username"
              ; Row
                  [ Column [ Field "password" ]
                  ; Column [ Field "password_repeat" ]
                  ]
              ]
          }
        (* etc. you get the idea *)
      ]


In non-trivial code you often need to make coherent, atomic updates across a whole section of your state graph. This is very difficult when the code to update state is fragmented across many classes. For me this is where OO really falls down and more functional approaches shine.
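
As a sketch of what I mean: with a sealed hierarchy, a whole-subgraph transform is one recursive function instead of logic scattered across many classes (hypothetical types):

    sealed class Node {}

    class Leaf extends Node {
      final int value;
      Leaf(this.value);
    }

    class Branch extends Node {
      final Node left, right;
      Branch(this.left, this.right);
    }

    // One coherent pass over the whole subtree.
    Node doubleLeaves(Node n) => switch (n) {
      Leaf(:var value) => Leaf(value * 2),
      Branch(:var left, :var right) =>
        Branch(doubleLeaves(left), doubleLeaves(right)),
    };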


Dart is a seriously good language.

Shares a lot in common with C#, and is just incredibly productive.


I also loved it, coming from mainly Ruby. It's just reasonable; the step here toward functional programming gets it even closer to the way I like to write Ruby.


Also coming from Ruby (still using it occasionally). Dart extensions are what I enjoy the most atm!


The one fork of history I wish I could have seen come to fruition was Dart in the browser (natively) as was originally proposed. I'm very curious to see how web development would have evolved if that had come to pass.


It's hard to see how that would have worked.

Google can put anything it wants in Chrome, but why would Safari or Firefox have done this? I don't see how they even could from a practical perspective, since they'd each either have to maintain a separate implementation (you really find out how good your specification is only once it needs to support multiple independent implementations) or import the Google one wholesale. Rinse and repeat every time Google changes Dart.

If Google puts it in Chrome and pushes it, most likely it simply dies there. But perhaps it catches on, furthering the reality that Chrome (and therefore Google) is the web. That's the worst case scenario for everyone (except Google, of course).

Anyway, WASM is here, and doesn't have these drawbacks, so we're all free to enjoy using Dart in the browser, if we want to (to the extent they support it). Not to mention Javascript is in quite a different place than it was when the idea of native Dart in the browser came up.


I realize why it failed (everything you just listed, plus WASM was around the corner in earnest).

Simply as a thought exercise though, it would have been interesting to see a world where Dart did produce a specification and other browser vendors saw value in implementing and maintaining it, is all.

It's a great language with a lot of good features.


WASM isn't the same as Dart in the browser because "Dart in the browser" means the Dart VM and standard library are already compiled and installed on your browser while WASM means the VM and standard library must be shipped from a server to your browser and compiled before the program can run.


Well, you're trading one problem -- apps need to include their own dependencies -- for a thornier problem: everyone has to somehow have already installed your app's dependencies. Of course, they haven't, so you bundle your dependencies anyway, as polyfills or otherwise, and transpile, etc.


Dart can compile to JS, so it would work similarly to how multiple image formats are handled. I think Dart 1 was too basic to replace JS.


I work on Dart and I'm honestly glad that didn't happen. It would have frozen the language in time and features like the better type system in Dart 2.0 and sound null safety in 2.12 would have been unlikely to happen.


I work with JS every day and am somewhat ambivalent about it but I absolutely do not want to see a language controlled by one vendor (in this case Google) to become a browser standard. It would be like the JS equivalent of Google trying to get everyone to make AMP pages instead of regular HTML.

As it is there are two simple answers for Dart anyway: it transpiles to JS and WebAssembly offers the possibility to use it 'natively' too.


JavaScript was controlled by Mozilla. Standardization into ECMAScript only came later after pressure from Microsoft and a mostly compatible reimplementation called JScript. Dart is also an ECMA standard, though only Google contributes because nobody else cares about it. AMP was far less controlled by Google in practice when it was relevant. https://github.com/ampproject/meta-tsc/commit/4f18f83794ed25...

JavaScript is an awful language, with many warts that are impossible to fix while maintaining backwards compatibility, but as one of your sibling commenters mentioned, the original Dart had problems too.


As someone inexperienced with Dart, but ever curious about it - is there anything more you can share about that, or what might have been? I end up writing almost all my work in browser-based JavaScript so I'm curious how different my work might have been.


Had it actually succeeded and become the de facto UI language, that would have paid off: Dart has a lot of nice properties that make it well suited for optimizations.

JavaScript is a hard language to optimize, and the current JavaScript engines have gone to heroic efforts to get there. Dart took what made JavaScript hard to optimize and removed it (mostly dynamism around types: you can't just add or remove random fields and functions on an object).

Dart was (at least initially) designed by some of the engineers behind V8 specifically for the purpose of "let's make something that can be fast without a 10 MLoC super JIT".

The other nice thing about Dart is that it was entirely boring. If you know Java/C# you know Dart. There weren't a whole lot of weird surprises around what "this" is or how `var` scoping works, and no wrapping your mind around prototype inheritance. It was just pretty plain OO programming.

That "boring" nature of dart makes it easy to maintain and to write nice IDEs around.

Even today, TypeScript has some neat things, but by god do I hate it when devs write uber-complex macros that are impossible for my IDE to grok (littering the code with red squiggles). All because they've done a bit too much Haskell. The type signature spans multiple lines because "any" was too loose, but they did want to support anything which could be turned into a string, or was undefined, could be null, could be a number, and also stuff that has a method `foo` that can be invoked on Easter.


I had the privilege of maintaining a 6 language cross-platform library for a while and man does the Typescript stuff ring true.

Dart was the best. Java without the Enterprise JavaBeans Injection Factory Factory. Swift without the ObjC compat baggage and fundamental design flaws like the typechecker that can infinite loop and regularly takes 200 ms for methods. Kotlin without the slapdash IDE developer side-project language stuff, "here's 20 kinds of syntactic sugar and 10 methods that do the same thing. Use your favorite sugar/naming convention for convenience"

The only ones I genuinely feared were TS and C++; C++ because of a deep respect for and fear of manual memory management. TS was just... weird, a lot of half-baked stuff.

I was having a true TS engineer do code reviews, and they had what should have been minor bike-shedding about constructor formatting. That change broke parameter doc comments displaying on hover. All I could find was people pointing this out without answers, and the TS reviewer was kinda busy, gave me stock answers, and didn't grok that there was a genuine issue.

A week later, I finally found an issue explaining some long convoluted thing that made it impossible.


Yeah... TS is just a bit weird. Seems like the language design decisions have generally been "why not?" rather than "how does this improve things?" That's led to a really complex language ripe for abuse.

C++ sort of suffers a similar problem, though thankfully it's gotten way better (probably because it's old and not super appealing to less experienced devs). There's also a healthy fear in C++ because, as you point out, misuse can lead to hard-to-find security holes.

And I'm someone that prefers a more feature rich language in general. But I want that to mostly write boring code with the "neat" escape hatches to help when it makes things clearer.


Thanks, that makes a ton of sense, smaller/better by design but equipped to handle the same kinds of problems that JavaScript does.

Is there any advantage to writing in Dart and transpiling that to JavaScript?


They ended up pivoting to focusing mainly on mobile. At this point Flutter is the reason people use Dart, more or less.

Their JS pipeline is really lacking as a result


It's damn good* as is, but there's one major problem, and a fork in the road where it looks like they'll focus on WASM moving forward.

The major problem is... you can't use an Isolate on the web. Put more simply, everything** is on the main thread unless you go to the extreme of segmenting out the code you want on another thread, compiling that to JS separately, then calling into it from a web worker in JS that you invoke from Dart. I'm told web devs comfy with JS don't see this as a huge deal, but for me, with a mobile-only background, it's convoluted and fragile.

They have a preview of compiling to WASM working; web apps already feel _slick_, and apparently they're seeing a 3x speedup. I don't know enough to know if this fixes the problem above, but I would put money on JS staying as-is and WASM being the web story moving forward.

* if I read the grandparent as "what's the quality of a Dart web app?"

** there are caveats here, e.g. I'm 90% sure things like network loads and image loads aren't literally on the main thread, i.e. the browser implementation of the APIs Flutter calls into is non-blocking. It doesn't really matter though, unless your app is so simple that you don't have custom operations that run long enough to cause significant frame drops.


> I would put money JS staying as is and WASM will be the web story moving forward

Ditto.

WASM needs just one piece to be pretty much widely generalizable: GC. Once that happens, things will get pretty exciting. Access to a platform garbage collector will allow pretty much any language imaginable to ship to WASM with some pretty minimal runtime overhead.

And GC for WASM is due to hit pretty soon. I'm looking forward to the revival of Applets :D

[1] https://github.com/WebAssembly/gc/blob/main/proposals/gc/MVP...


Brian Goetz published a similar article about Java a year ago: https://www.infoq.com/articles/data-oriented-programming-jav... (taking care not to call it "functional style programming" but rather "data oriented programming")


Thank you for sharing this one!

Also there are interesting talks published on YT where he discusses data/functional programming, OOP etc.

Multiparadigm (including plain old procedural) seems to be the most pragmatic approach to me, because I don’t need to bend over backwards to express something in a particular way.

Aside: It’s really fascinating how Java and the JVM manage to evolve while retaining reliability.

He talks about that here: https://youtu.be/2y5Pv4yN0b0


Poor Martin Odersky was demonstrating this fusion with Scala like 15+ years ago. But he's a nice guy so I'm sure he thinks the more the merrier!


Case classes in Scala were a direct inspiration for how we designed this feature in Dart. We stand on the shoulders of giants.


Seems like all the languages are following in the path of Scala, Go being the big exception.


Shouldn't Dart be merged with Flutter and be thought of as a single animal? There doesn't seem to be any point in thinking up scenarios for Dart that don't relate to Flutter.


Some people are using it for server-side work, e.g. with the Dart Frog framework. It's a decent enough Algol-family language for backends; better than Go, I'd say, due to having ADTs and exhaustiveness matching.


I loved Dart, but these days, it's not very usable for me.

The backend development capabilities have been left to rot. It's all Flutter.

When it came out, it had so much potential to displace NodeJS. But they threw that away when they couldn't get into the browser and pivoted to Flutter.

I still maintain some of my packages like uuid, otp, etc. But my API handlers and other backend stuff is kinda dead as there is 0 interest in it nowadays.

I would use Deno before Dart for backend nowadays, even though I would prefer Dart.


Yea, I see Dart as mostly useful for developing frontends.


Is there a functional language that compiles to Dart? There’s a difference between FP features & FP ergonomics. This seems like a very verbose way to go about things I’m used to being simple/easy.


I've been working with Elixir and Elm the last few years but recently have been working on a mobile app in Flutter. While the recent changes are a good step in the right direction (being able to express ADTs and records) it does feel like an iteration away from being ergonomic.

Regarding sealed types, I'm not really sure why you'd choose these over type unions. In practice they're often used to describe things like events for reducers, and sealed classes are incredibly verbose compared to the equivalent in, for instance, TypeScript:

    type Message = 
        | { type: 'IncrementBy', value: number }
        | { type: 'DecrementBy', value: number }
        | { type: 'Set', value: number };
vs

    sealed class Message {}

    class IncrementBy extends Message {
      final int value;
      IncrementBy(this.value);
    }

    class DecrementBy extends Message {
      final int value;
      DecrementBy(this.value);
    }

    class Set extends Message {
      final int value;
      Set(this.value);
    }

In a real app this noise really adds up, making it difficult to get an overview of the types.

Record types are nearly there, but currently there isn't a way to update records, which really hobbles them in day-to-day use (it looks like record spreading is coming, though, so this is hopefully temporary).
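To illustrate (a sketch of the current state; the record and its fields are made up):

    void main() {
      final point = (x: 1, y: 2);
      // No spread / copy-with yet, so "updating" x means manually
      // re-listing every other field:
      final moved = (x: point.x + 1, y: point.y);
      print(moved); // (x: 2, y: 2)
    }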


[disclaimer: I work on Dart, though not on the language team]

With primary constructors this will become something along the lines of:

    sealed class Message();

    class IncrementBy(final int value) extends Message;

    class DecrementBy(final int value) extends Message;

    class Set(final int value) extends Message;
Which is considerably less repetitive, though `extends Message` is still there. I am fairly optimistic that the next step would be to eliminate that[1], though I think we would need to gather a bit more feedback from users as people are getting more and more reps in with Dart 3.0 features. I personally would prefer something along the lines of:

    sealed class Message() {
      case IncrementBy(final int value);
      case DecrementBy(final int value);
      case Set(final int value);
    }
Current syntax is not all that bad if you are going to do OO and add various helper methods on `Message` and its subclasses, but if you just want to define your data and no behavior / helpers - then it is exceedingly verbose.

[1]: https://github.com/dart-lang/language/issues/3021


That syntax does look a lot nicer and makes sense if you're going to do OO anyway. I do wonder if it's trying to shoehorn OO into functional clothing, but it seems practical nonetheless. Primary constructors would definitely be a welcome change.


It is way easier for me to read than what you proposed in TypeScript. More characters to write? Sure, but how many times do I have to write a piece of code versus how many times do I have to read it?

I guess ultimately it depends on which language you are more familiar with...


I'm more than happy with either of those variations. My favourite though would be Elm's syntax ;)

    type Message 
        = IncrementBy Int
        | DecrementBy Int
        | Set Int


One big difference between what Dart does (along with other OO languages that take a similar approach) and what Elm and other functional languages do is that in Dart, each of the cases is also its own fully usable type.

In Dart, if you do:

    sealed class Message {}

    class IncrementBy extends Message {
      final int amount;
      IncrementBy(this.amount);
    }

    class DecrementBy extends Message {
      final int amount;
      DecrementBy(this.amount);
    }

    class Set extends Message {
      final int amount;
      Set(this.amount);
    }
You can then write functions that accept specific cases, like:

    onlyOnIncrement(IncrementBy increment) {
      ...
    }
In Elm and friends, IncrementBy is just a data constructor, not a type. Further, we can use the class hierarchy to reuse shared state and behavior in a way that sum types don't easily let you do. In your example, each case has an int, and it so happens that they reasonably represent roughly the same thing, so you could do:

    sealed class Message {
      final int amount;
      Message(this.amount);    
    }

    class IncrementBy extends Message {
      IncrementBy(super.amount);
    }

    class DecrementBy extends Message {
      DecrementBy(super.amount);
    }

    class Set extends Message {
      Set(super.amount);
    }
And now you can write code that works with the amount of any Message without having to pattern match on all of the cases to extract it:

    showAmount(Message message) {
      print(message.amount);
    }
So, yes, it's more verbose than a sum type (and we do have ideas to trim it down some), but you also get a lot more flexibility in return.
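And the consuming side still gets exhaustiveness checking. A sketch against the classes above, using a Dart 3 switch expression with object patterns:

    int apply(int state, Message message) => switch (message) {
          IncrementBy(:final amount) => state + amount,
          DecrementBy(:final amount) => state - amount,
          Set(:final amount) => amount,
        };
Add a new subclass to Message and the compiler will flag every switch like this as non-exhaustive.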


In an FP code base, it's common for ADTs to be the backbone of all data structures, from Maybe to Either to anything you use to model your data types. That being the case, changing 4 lines into 13, scaled across an entire code base, is a massive amount of sludge to wade through, and one of the best ways to raise code quality is to increase readability for maintainers. More LoC is more LoC to maintain, even if it seems like it's just a few more lines.


> changing 4 lines into 13 scaled into a entire code base is a massive amount of sludge to wade thru

It's only more verbose for the code that is defining new types. Code that is simply defining behavior (either in functions or methods) is unaffected and my experience is that that's the majority of code.


But reading those definitions at the top of the file is usually one of the first things I do to understand what's going on. The 4-line option is much easier for me to grok, not only because it's more terse but because the stacked pipes visually indicate that the cases belong to the same underlying structure. The `extends Message` bit is a level of indirection that requires the reader to juggle a lot more in their head (as is immutable `final` not being the default). The `class` keyword also carries a lot of baggage, so it's not clear what to expect (will this have methods or not? will we be seeing `this`? etc.).


I think you're evaluating this as a notation for defining sum types. But that's not what it is. It's a notation for defining a class hierarchy, with all of the additional affordances that gives you.

In practice, most real-world Dart code that uses sealed types also defines methods in those types, getters, and all sorts of useful stuff. Once you factor that in, the data definitions themselves are a relatively small part.

(Of course, you could argue that defining class hierarchies is already intrinsically bad. But Dart is an object-oriented language targeting people who like organizing their code using classes.)


This is definitely an FP vs OO thing. If you wanted to refer to the value in an ADT you'd introduce a new type to refer to it, which in Elm would be:

    type Message 
        = IncrementBy Amount
        | DecrementBy Amount
        | Set Amount

    type Amount = Amount Int
Obviously this is a contrived example - you wouldn't bother if you were dealing with an Int but once the message gets more complicated it can make sense.


That's a different thing.

Given the above ADT, how would you write a function that prints the amount of a Message, regardless of which case it is?


You'd use a switch over the ADT and extract the values as appropriate. This is method dispatch vs function dispatch. In practice you can do the same things with either, but they reflect the focus on behaviour (OO) vs data (FP).


That's the point of my last example. By building this on subtyping, you can hoist that shared property out of the cases and up to the supertype. That lets you define state and behavior that is case independent without having to write a bunch of brain-dead matches that go over every case to extract conceptually the same field from each one.

Of course, you can also reorganize your datatype so that the shared state is hoisted out into a record that also contains a sum-typed field for the non-shared stuff. But that reorganization is a breaking change to any consumers of the type.

Modeling this on top of classes and subtyping lets you make those changes without touching the user-visible API.
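For concreteness, the record-based reorganization would look something like this (a sketch; the names are invented):

    // Shared state hoisted into a record; only the varying part
    // stays a sum type. Any consumer that was matching on Message
    // subclasses breaks, which is the point above.
    sealed class MessageKind {}

    class Increment extends MessageKind {}

    class Decrement extends MessageKind {}

    typedef Message = ({int amount, MessageKind kind});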


I appreciate you taking the time to answer, but honestly the boilerplate is just a killer. I don't care about the additional flexibility when the code is so much harder to make sense of, and this is just a toy example.


fp-ts has this same unfortunate issue (https://gcanti.github.io/fp-ts/guides/purescript.html#data) where a PureScript one-liner becomes 11 lines of boilerplate TypeScript. Or (https://gcanti.github.io/fp-ts/guides/purescript.html#patter...) 3 lines of pattern matching becomes 10 where the matching is done on strings (ad-hoc _tag property).


The type-union syntax gets messy once the subclasses contain nontrivial amounts of code (like method definitions). The class syntax keeps the lexical context more local.
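For example (a sketch reusing the Message hierarchy from upthread; the `scaled` helper is made up), behavior stays lexically next to the data of the case it belongs to:

    class IncrementBy extends Message {
      final int value;
      IncrementBy(this.value);

      // Case-specific behavior lives right here, not in a free
      // function switching on a tag elsewhere in the file.
      IncrementBy scaled(int factor) => IncrementBy(value * factor);

      @override
      String toString() => 'IncrementBy($value)';
    }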


F# is meant to, via Fable. I don't know what the progress is there. https://fable.io/blog/2022/2022-06-06-Snake_Island_alpha.htm...


Wow this looks great. One to watch for the future.


Fable is cool, but I worry about its longevity.


Why do you say that?


Stuff like this: https://github.com/fable-compiler/Fable/issues/1822

And this Flutter binding launched a year ago by one person and then never touched again: https://github.com/fable-compiler/Fable.Flutter

It just seems like an incredibly ambitious project that has very few equals, but it's mainly worked on by a handful of people with no corporate backing. I get the feeling that if you want to use it, you'll either be the only one doing what you're doing or among just a few people. I already use F# and feel this way about the core language itself.


> you'll either be the only one doing what you're doing or among just a few people

Agreed... I've definitely Googled an issue I got stuck on only to land back on my own post (lol). You need fewer people for the same amount of complexity with F#, but I don't know if that somehow just translates to fewer people, period.


There's a Clojure for Dart I believe.



yes, though it doesn't yet have a REPL; the hot reloading works great


F# has something called Fable. It originally just compiled down to JavaScript. Recently they've added Python and Dart.


> Object-oriented (OO) and functional languages differ in many ways, but it’s arguable that how each paradigm models data is the defining characteristic that separates them. Specifically, the question of modeling different variations of related data and the operations on those variants.

Then why is the first example they present about organizing behavior? lol


Since learning Go so many years ago, it has influenced how I build things in every language, much in the way this article is espousing.

I MUCH prefer functions that take some kind of type/interface/protocol over methods on objects.

I committed many sins in my past in Python with mixins that added functionality before I understood how to use Zope Interface, and with Go it all finally clicked for me.

I don't think any pattern is Right all the time but inversion of control and interfaces rarely bite me to the same degree other patterns have.


I have also moved towards "dumb data objects", immutable if possible, and a library of functions that operate on them. Exactly what this looks like will obviously depend on the language, though.

In Java, I might have a Foo object and a FooController.

In Kotlin, I'd probably have a Foo object and a FooController file, but that file would contain functions, not a controller object. Of course, in the bytecode this would map to a FooControllerKt object.

Kotlin also offers extension functions, which are similar to how Go declares object methods. I use these for utility functions, as opposed to business logic. If I wanted a function to camel-case a string, for example, I'd use an extension function. If I wanted a function to determine whether a string is a valid zip code, I'd use a plain function that takes in a string.


In Kotlin and Dart, the better way to do it is extension functions.

Just having free functions that take your data object as arguments is almost the same, but has the huge disadvantage that looking up which functions you can call on the object is much harder.
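In Dart, that looks something like this (a sketch; the zip-code helper echoes the example above, and the regex is made up):

    // As an extension, the helper shows up in IDE completion after
    // the dot on any String:
    extension ZipCodeChecks on String {
      bool get isValidUsZipCode =>
          RegExp(r'^\d{5}(-\d{4})?$').hasMatch(this);
    }

    void main() {
      print('90210'.isValidUsZipCode); // true
      print('9021'.isValidUsZipCode);  // false
    }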


The biggest issue with that is autocompletion in IDEs. I agree that it's nicer "ideologically", but being able to just go "I got this thing, what can I do with it?" with a simple dot is really nice. Maybe LSPs will at some point be easy to query for that.


At least a portion of that is covered with well structured modules.


So sad there is still no static metaprogramming. I've switched my career from Flutter mobile dev over to backend, and there are still no updates on this utterly important feature for Dart. It could've eliminated so much sickening codegen.


We're actively working on static metaprogramming. It got slowed down during the lead-up to 3.0 because it was sort of all-hands-on-deck to get that set of features out and fully migrate stuff to null safety. Now that that's done, we're picking it back up.

It's a particularly hard feature to work on because it touches everything. Some of the kinds of stuff you run into are like:

* How do IDEs handle go-to-definition when the thing you're navigating to is generated through metaprogramming?

* What is the API we expose to macros and how does that API evolve when the language changes?

* If a macro adds a method to a class that shadows a top-level identifier, how does that interact with other method bodies in that class that might be referring to that identifier?

* What happens to the user experience when a macro takes a really long time to run? Can users tell which macros are running slowly? How?

* What happens when a macro throws an exception or otherwise fails?

* How does hot reload work with macros? How does macro performance affect hot reload time?

* What is the syntax for invoking a macro? What kinds of arguments can you pass to macros? Are they passed as values, raw syntax, meta-values? If they are passed as values, when are those values evaluated and in what context?

* Are macros expressive enough for the use cases we have? Are they too powerful? What kinds of things should you be able to do with macros? What kinds of things should you not be able to do?

* How does a user understand code that may be relying heavily on macros?

* What is the user experience for authoring a macro? What developer tools do we give you? What is it like maintaining a package that exposes publicly usable macros? When the language changes, what do macro package authors have to do to keep their package working?

Getting any of these answers wrong can really harm the whole user experience of the language, so it's just a big challenging ball of work with lots of stakeholders. We're making good progress and I'm really excited by everything Jake, Johnni, and Konstantin are doing, but it's definitely not something we want to rush.


This doc was updated 3 weeks ago?

https://github.com/dart-lang/language/blob/main/working/macr...

> This could've eliminated so much sickening codegen.

Also this is funny, because what do you think static metaprogramming is if not codegen? Templates, macros, etc. are all codegen. They're just codegen you can't see (or easily steer, either).


[ I work on Dart ]

We're still working on this, and you can find updates in the language repo. One reason we're taking extra time here is that macros / static metaprogramming can blow up compile times and hot reload.


that's fine - i'm not complaining, i'm waiting patiently. keep going!


It was really stuck at "in progress" for a looong time, and it's still in the working set, which means it's going to take yet another 2-3 Dart releases, plus a few more to stabilize for production.

> They're just codegen you can't see

Yes, and that's exactly how it should have been in the first place. No need to run buggy services, no need to constantly juggle newly generated files.


do you understand that things you can't see still need to be juggled? the problems don't magically go away just because you can't see the artifacts. what happens is you lose complete control over how they're juggled and retain only as much control as the compiler owner deigns to give you (see generics in Go). please see all the pain of working with templates in C++.


I do; I work with Rust macros and have worked with C++ templates. Obviously, people aren't asking for such a granular level of metaprogramming.


> Obviously, people aren't asking for such a granular level of metaprogramming.

that `Obviously` is doing a lot of work there. more to the point, i'm saying that going from codegen facilities to static metaprogramming is a step in the wrong direction if the static metaprogramming comes at the cost of the codegen.


I'm seeing Flutter gaining more momentum, at least on social media. Last I checked, I concluded the framework inherently depends on layers of abstraction that make it slower/less responsive than, say, Qt or the native toolkits of the OS.

Since Dart's raison d'être is pretty much Flutter dev, I was wondering if it is worth learning if one's intention is knowing a handy cross-platform GUI dev toolkit that just works.

That said, the competition in this scene doesn't have anything better to offer anyway, namely React Native (web tech) and Qt (LGPL).


Qt also depends on layers of abstraction, is slower than native GUI (on mobile), and integrates badly with mobile IMO. On Android you have basic issues: scrolling has no inertia, text rendering can't handle emojis well, and Text QML components don't play well with autocorrect and other mobile features. (This is all QML experience, btw; widgets aren't supported too well on mobile anymore.)

I'm going to take a look at Flutter to see how it handles these issues soon, but I can't make a real comparison now.


I didn't know the state of Qt on mobile was so abysmal. Good to know.


All frameworks besides the native ones on mobile are abysmal; people delude themselves thinking that they don't need to master the underlying stack as well.

On the Android side it is even more difficult, because the NDK as a matter of principle only exposes APIs for games and for implementing native methods, so anything else has to be full of JNI calls into Java, even if those Java classes are mostly composed of native methods.


Are you saying that all the non-native mobile frameworks are abysmal because they have to map to JNI calls on Android and Objective-C/Swift on iOS?


They are abysmal because they are leaky, yes.

The dream of one code base to rule them all can easily turn into a nightmare when going outside the golden path.

And really, a large majority of apps are doing basic CRUD stuff, easily doable as mobile Web.


Agreed.

Yet, I think the one-code-base approach gives better results on desktop (Windows/macOS/Linux) than on mobile. VS Code, for example, is quite nice. Do you think this is because there are more OS and UI differences between Android and iOS than between Windows/macOS/Linux? Is desktop more "standardized"?


Desktop is just as different; most people sidestep that by only caring about POSIX, or by shipping a Web browser with the application.


Compose Multiplatform is a newcomer in this space but worth mentioning. It's built on Google's Jetpack Compose framework and Kotlin, with experimental build targets for iOS and Wasm. You can play with it now, but this is pre-release quality, so expect issues. But once the feature flags for GC and threads are removed in browsers and Kotlin 2.0 ships (early next year, probably), this could change rapidly.

I've not done much with compose yet but I'm keeping an eye on the kotlin slack channels related to this. There is a lot of stuff happening there.


I have played with Kotlin Multiplatform. It's all very alpha quality in my opinion, and even more so if you use Jetpack Compose.

Lots of basic things are still in "incubator" mode, which means it can and will change, so you need to keep changing stuff on upgrades.

The technology is very cool and Kotlin is a great language. However, if you need to do something right now, there's no comparison with Flutter, which has been production ready for years.


You're not wrong.

I've been developing with Kotlin/JS and Kotlin Multiplatform for three years now. Kotlin/JS and multiplatform are pretty usable, but the upgrades can be challenging. Parts of the ecosystem have been pretty stable for years at this point, and there are lots of multiplatform libraries that are very usable and stable. I use quite a few of those on the JVM and JS.

Even though it has been in beta since October, KMM (Kotlin Multiplatform for mobile) is already used by a lot of apps, including some Flutter apps. It doesn't provide a UI layer; you use it to share business logic across Android and iOS. JetBrains has done a lot of work on the native compiler to get iOS more stable. It's quite usable at this point, but there is still some instability indeed. I expect they will announce a 1.0 release some time after Kotlin 2.0 is released, which includes the revamped compiler. Kotlin 2.0 should happen by early/mid 2024.

Compose Multiplatform is yet another thing that relates to this. Some of its compilation targets are still experimental (iOS, WASM) and indeed not ready for production usage. That too is going to benefit a lot from Kotlin 2.0; the whole compiler chain for WASM is basically built on it.


Not a fan of XAML after trying to get into it, but there is Uno Platform. It wraps native widgets on mobile, just like React Native (which is good for accessibility), and uses C#. https://platform.uno/

My guess is that it's mainly focused on mobile. On Windows it has no overhead (behaving like a normal WinUI 3 app); on macOS I think it uses Catalyst by default (which Apple developed to make more iOS apps available on Mac desktops); and on Linux it draws its own widgets, which the devs try to make imitate the GTK style.

On Android and iOS it just uses the native widgets, which I think is a better experience, so you can see my reasons for guessing it's mobile-first. That may or may not be what you want.


I actually use wxWidgets these days, after trying Flutter and Qt. Flutter is cool for mobile but so-so on desktop for me. Qt's license is complex for me as well.

So far wxWidgets has been a great fit.


> That said, the competition in this scene doesn't have anything better to offer anyway, namely React Native (web tech) and Qt (LGPL).

What's the downside, in your view, of Qt being LGPL? I'm more familiar with the GPL and AGPL, but with the LGPL, as long as the user can swap out the linked LGPL libraries you should be good. What is the downside in that? I can only think of it being an issue if the application you build statically links all the libraries it depends upon.


It's not clear whether you can use LGPL code when you release an app to the iOS App Store, from both a technical and a legal perspective. Technical because there is no easy way for the user to swap dynamically linked libraries.

Another issue is that the most recent Qt modules are released under the GPL.


The LGPL is just a nuisance that startups always have to be wary of. Personally, yeah, I think Qt is leagues ahead of the alternatives.


You can just buy[1] Qt, which is of course why they chose the dual-licensing model in the first place.

In my opinion, though, the amount of extremely high-quality free and open source software that is available to include in one's programs is so great that it really is worth spending the time to create a proper FOSS licence compliance plan and form an OSPO (Open Source Programme Office) in your organisation to execute it. If you actively pursue the benefits that can come from directly contributing to existing FOSS projects, the rewards are even greater.

[1]: https://www.qt.io/pricing


I was pessimistic about Flutter 4 years ago but rechecked it recently (I'm learning it right now), and IMO it's a really good user/developer proposition these days. They've resolved most of the issues on mobile devices and desktops; only the web version is still off, but once WasmGC is ready (hopefully this year) things will probably improve.

Best developer elevator pitch: just download a few Flutter apps and see how you like the experience:

1. wonderous - https://flutter.gskinner.com/wonderous/

2. flutterflow (low code + gui editor for flutter) - https://flutterflow.io/

3. appflowy (notion alternative) - https://appflowy.io/

4. flutter gallery (official flutter kitchen sink) -

Android (Google Play Store) - https://play.google.com/store/apps/details?id=io.flutter.dem...

web (gallery.flutter.dev) - https://gallery.flutter.dev/

macOS (.zip) - https://github.com/flutter/gallery/releases/latest

5. official material 3.0 demo - https://flutter.github.io/samples/web/material_3_demo/#/


Wonderous:

A showcase app for the Flutter SDK.

Built by gskinner in partnership with the Flutter team, Wonderous deliberately pushes visual fidelity, effects and transitions to showcase what Flutter is truly capable of on modern mobile hardware.

https://github.com/gskinnerTeam/flutter-wonderous-app


I would bet on the web. Applications like VS Code or MS Teams are built with web technology and just work fine...


VS Code, okay, I agree, but Teams? The performance is atrocious.

VS Code is pretty much the only fast web app, and they managed it because they have done a massive amount of work to compensate. I'm not sure other apps have the resources to go as far as they did.


They work "fine", granted, but they're a compromise; their performance is much lower, user experience is unique compared to native applications, etc.


The last time I've checked, OOP and functional programming are orthogonal concepts.

EDIT: It means that there is no problem for the language to have both OOP & Functional. They don't have to choose one over another.


> there is no problem for the language to have both OOP & Functional. They don't have to choose one over another.

Sure, a language can have both, but we programmers still have to choose one or the other when modelling some domain. The article is basically describing the "expression problem": how to implement a solution in a way that allows new cases to be added https://en.wikipedia.org/wiki/Expression_problem

The OOP approach makes it easy to add new sorts of data: we just write a new subclass, with implementations for all the interface's methods. However, it's hard to add new sorts of operations: we would have to add new methods to the interface, and update every subclass to implement those methods (very hard if we're writing a library that third parties are subclassing!)

The functional/ADT approach makes it easy to add new sorts of operations: we just write a new function, which pattern-matches on all the constructors. However, it's hard to add new sorts of data: we would have to add new constructors to the ADT, and update every function to pattern-match on those constructors (very hard if we're writing a library that third parties are pattern-matching on!)

When we're choosing how to represent something, it's important to consider whether future extensions are more likely to be adding new sorts of data (in which case the OOP representation may be best) or more likely to be adding new sorts of operations (in which case the ADT representation may be best). Having to extend such representations in their intended way can feel very natural and satisfying; but having to extend them the other way can be frustrating and laborious.
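To make the trade-off concrete, here is a minimal Dart sketch (Shape, area, etc. are illustrative names, not from the article):

    import 'dart:math' show pi;

    // ADT-style: the set of cases is closed, so adding an operation
    // is just one new function...
    sealed class Shape {}

    class Circle extends Shape {
      final double r;
      Circle(this.r);
    }

    class Square extends Shape {
      final double s;
      Square(this.s);
    }

    double area(Shape shape) => switch (shape) {
          Circle(:final r) => pi * r * r,
          Square(:final s) => s * s,
        };
    // ...but adding a Triangle case forces every such function to change.

    // OO-style: adding a new kind of data is just one new class...
    abstract class OoShape {
      double area();
    }

    class OoCircle implements OoShape {
      final double r;
      OoCircle(this.r);

      @override
      double area() => pi * r * r;
    }
    // ...but adding perimeter() to OoShape forces every class to change.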


If you are doing subclasses you are doing OOP wrong.


Do you consider implementing an interface to be subclassing? And if so, what's the OO alternative? Prototypes?


Interfaces are fine. Anything that works like traits :)


TIL that I've been doing OOP wrong for the past twenty years.



Yes, I read it the year it came out. :)


Why would they be orthogonal? There are entire languages built on top of both, like F# and Scala.


That's what orthogonal means. They're separate things, and they don't interfere with each other.


Orthogonal means they are independent principles that can be combined freely and don’t contradict each other.


>Scala

Exactly


That must've been a while ago; I find features of both work pretty well together. Most languages are a mix nowadays anyway.


I have really liked Dart and started playing with it when it came out. I just wish I could use it more. Unfortunately, it and its tools are so opinionated that it's kind of hard to use outside of the specific cases they have dreamed up. I really wish there were opt-outs; even if they broke things, that would be worth it.


Which of its tools are opinionated and how? Do you mean the formatter, perhaps?


The one I hate the most is file structure. In order for something to be a "package" it has to be in a `/lib` dir or some other magic name. I want to nest modules and flatten their structure. If there were a flag to disable this, even if it meant I couldn't use any published third-party lib or publish any packages, I would have many, many new Dart projects. I could probably extend my dislike of this to the entire import system. There are so many aspects of it that I dislike.


This is one of the weirdest complaints I've ever heard.

Almost every language has some sort of requirement for where you put files to properly compose a library, and Dart's is one of the sanest I know of: just throw your stuff in `lib/mylib.dart`. This file is normally only used to `export` what your lib's API should look like, and it's a convention that the implementations go in `lib/src/`. That's all there is! You can break files apart any way you want from there. To import a file from another file you do `import 'relative/path/to/file.dart'`. Dart doesn't care where your imported files are as long as it can find them. There are no "packages" like in Java, but that's never a problem because imports can be named.
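For instance, the top-level library file might look like this (file and symbol names made up):

    // lib/mylib.dart -- the package's public API, just re-exports:
    export 'src/foo_impl.dart' show foo;
    export 'src/helpers.dart' hide internalHelper;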

So you do stuff like:

    import 'mylib.dart' as mylib;
    // call mylib function
    main() => mylib.foo();
So simple, so great... I really can't understand why someone would complain about such an elegant system.


That is fair; there are more ways to import than you mention. Still, Dart has a lot more constraints than a language like Go or Python, and I use both in places where I would like to use Dart.

I have looked at forking the language to explore an alternate system, but the path of least resistance has been using other languages.


Any updates on type unions? Using dynamic removes all the type hints.

I’d like to statically type which two types a function might return.


Rob Pike was right: "features were being added by taking them from one language and adding it to another" "...and I realised all of these languages are turning into the same language." "we don't want one tool, we want a set of tools, each best at one task"

https://www.youtube.com/watch?v=rFejpH_tAHM


Dart's reason for existing is as a cross-platform GUI language via Flutter. It's not obvious that adding on some language ergonomics detracts or adds to that goal in a fundamental way.

Sometimes languages converge because of informal consensus that something is a good idea. Maybe Go adding generics is a good idea, even if it's a step in the direction of following trends.

Either way, most languages don't have syntax that is significantly challenging or easy (with exceptions like Rust); it's the ecosystem that makes a language productive. So whether Go added generics today or a decade ago, Go was still great anyway thanks to its standard library and ecosystem.


I don't know who it was, but somebody once said code is typically written once and read many times.

i.e. what matters is not the number of characters you have to type, but how easy the resulting code is to understand.

And here there are two factors: verbosity and conceptual load.

I think the mistake some language geeks make is forgetting the importance of conceptual load (the language's surface area/complexity); they tend to focus on brevity at the expense of conceptual complexity.


This is one reason why I love Go, in that it doesn't follow.

I've been primarily doing JavaScript and the like for the past decade, and while I appreciate new features like the functional operations added to the standard library (although that was a long process) and arrow functions, I lost it when they added object-oriented structuring like classes, but with only half an implementation because there weren't any good access modifiers. And they added OOP before they added modules and imports. I don't understand.


JS has been OOP since the beginning; one might even argue that prototype-based inheritance is the real deal compared to classes.


JavaScript, like Self and NewtonScript, is based on prototypal inheritance as its OOP model; even classes in JavaScript are mostly syntactic sugar for how that is done at a low level.


Deliberately being different from other languages… that actually explains a lot about why Go is the way it is.

And I don’t mean it in a good sense.


I'd agree - and I'm not sure it's even possible to have a language with every feature.

Languages are about telling computers[1] what to do - at the right level of abstraction for the task in hand.

As far as I can see (as an outsider), the trick in language design is not programmer ergonomics per se, but creating a set of abstractions (expressed as language features) that can then be converted into machine code in a way that's some best combination of reliable, safe and fast.

[1] Not directly, but indirectly via compilers or interpreters.


Thankfully not everyone happens to agree with him.

Had it not been for the success of Docker and Kubernetes, Go would have been about as successful in the market as Plan 9, Inferno and Limbo.


this view does not allow for situations where one language design is strictly better than another, which has happened a bunch of times in practice.



