> obfuscating what happens when you actually access them

This is the point, AFAICT. In languages that have primitive "read/write field" operations which can't be intercepted, but which also have interfaces/protocols, "obfuscating" (encapsulating) field access behind getter/setter methods is how you allow for alternative implementations of the interface/protocol.

Specifying your object's interface in terms of getters/setters allows for objects that satisfy your interface but which might:

• read from another object (flyweight pattern)

• read from another object and return an altered value (decorator pattern)[1]

• read from an object in a remote VM over a distribution protocol (proxy pattern)

Etc.

Then a client can build these things and feed them to your library, and you won't know that they're any different than your "ordinary" data objects.
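
A rough sketch of the idea in Swift (the names here are made up, and the same shape applies to a Java interface full of getters): the interface only promises a getter, so a plain data object and a decorator that alters a wrapped object's value both satisfy it, and library code can't tell them apart.

    // Hypothetical interface expressed purely in terms of a getter.
    protocol Temperature {
        var celsius: Double { get }
    }

    // "Ordinary" data object: a stored property satisfies the requirement.
    struct Reading: Temperature {
        var celsius: Double
    }

    // Decorator: reads from another object and returns an altered value.
    struct Calibrated: Temperature {
        let base: Temperature
        let offset: Double
        var celsius: Double { base.celsius + offset }
    }

    // Library code written against Temperature sees no difference.
    func describe(_ t: Temperature) -> String {
        "\(t.celsius) °C"
    }

    print(describe(Reading(celsius: 20.0)))                               // 20.0 °C
    print(describe(Calibrated(base: Reading(celsius: 20.0), offset: 0.5)))  // 20.5 °C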

You don't see this in languages with "pure" message-passing OOP, like Ruby or Smalltalk, because there's no (exposed) primitive for field access. All field access—at the AST level—goes through an implicit getter/setter, and so all field access can be redefined. But in a language like Java, where field access has its own semantics divorced from function calls, you need getters/setters to achieve a similar effect.

And yes, this can cause problems—e.g. making expected O(1) accesses into O(N) accesses, or causing synchronous caller methods to block/yield. This is why languages like Elixir, despite having dynamic-dispatch features, have chosen to discourage dynamic dispatch: it allows someone reading the code to be sure, from the particular function being called, what its time-and-space complexity is. You know when you call MapSet.fetch that you're getting O(1) behaviour, and when you call Enum.fetch that you're not—rather than just accessing MapSets through the Enum API and having them "magically" be O(1) instead of O(N).

---

[1] Which is a failure of the Open-Closed Principle. That doesn't stop people.




I like Swift's approach to this. (I'm sure other languages do it too, that's just the one I know that does this.)

If you write a property (what other languages might call a "field") then by default it's a stored property, i.e. it's backed by a bit of memory in the object. If you need to take actions on changes, you can implement a willSet or didSet handler to do so. If you need to change how the value is stored and retrieved altogether, you can change it to a computed property, which invokes code to get and set rather than reading and writing to a chunk of memory. All of this is completely transparent to the caller.

It's particularly interesting because it still acts like a mutable value. You can still pass a reference to such a field into an inout parameter using &, or call mutating methods on it. Behind the scenes, the compiler does a read/modify/write dance rather than mutating the value in place.
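
A minimal sketch of what this looks like (the type and property names are invented): a stored property with willSet/didSet observers, a computed property whose get/set run code, and a computed property passed to an inout parameter with &, where the compiler does the read/modify/write dance.

    struct Thermostat {
        // Stored property with observers: still backed by memory,
        // but we get a hook before and after every write.
        var celsius: Double = 20.0 {
            willSet { print("will set celsius to \(newValue)") }
            didSet  { print("did set celsius, was \(oldValue)") }
        }

        // Computed property: no storage of its own; get/set run code,
        // yet callers use it exactly like a stored field.
        var fahrenheit: Double {
            get { celsius * 9 / 5 + 32 }
            set { celsius = (newValue - 32) * 5 / 9 }
        }
    }

    func doubleInPlace(_ value: inout Double) {
        value *= 2
    }

    var t = Thermostat()
    t.fahrenheit = 98.6        // goes through the setter; observers fire
    // Even a computed property can be passed inout: the compiler reads it,
    // lets the function mutate the temporary, then writes it back via the setter.
    doubleInPlace(&t.fahrenheit)
    print(t.celsius)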


I really, really dislike Swift's approach. get/set/willSet/didSet -- so much complication to preserve the illusion that you are operating on a field when you are actually doing no such thing. Why is this desirable? It reminds me of a class of C++ footguns where some innocuous code is actually invoking member functions due to operator overloading.

I think that Java got this one right. (I do not at all like the cargo cult custom of getter/setter methods for fields; I'm referring to the language only.)


GP was referring to the feature that getters/setters are invoked with property access. Swift's didSet/willSet distinction is unrelated to this feature. ES6 JavaScript, for instance, has this feature with just get/set methods.

In Java (and JS) there's a separate, all-knowing garbage collector, while Swift is reference counted. In an ARC runtime, the didSet/willSet distinction avoids explicit calls to release the object, which is pretty clearly a good thing on the programmer's end. You can debate whether the benefits of full garbage collectors outweigh their performance characteristics, but given ARC, the didSet/willSet distinction definitely makes sense.


Today's fields are tomorrow's computed properties. Fields are rigid and cannot be changed without recompiling dependencies. Notice how few fields there are in Java's standard library. Why have fields at all?


How many times have you run into this in your career (the need to convert "today's fields" to "tomorrow's computed properties")?


Plenty of times! The situation where a library released to third parties requires internal structural changes is not an uncommon one. What do you do? Break every piece of third party code or satisfy the new structure and the old interface simultaneously with a computed property? "Move fast and break stuff" doesn't always have to include breaking stuff.


Honestly, many times. A common occurrence is that singular evolves into multiple: "the touch position" becomes "the touch positions," "selected item" becomes "selected items," etc. The backing storage switches from T to [T].

In this scenario it's easy to make the old method do something sensible like "return the first item." But maintaining an old field is more difficult: you can't make it lazy, you can't eliminate its backing storage, etc.
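
For example, a hypothetical Swift sketch of that evolution: the backing storage becomes an array, and the old singular accessor survives as a computed property with no storage of its own.

    struct Selection {
        // New storage: multiple selected items.
        var selectedItems: [String] = []

        // Old interface kept alive as a computed property: no extra backing
        // storage, and it does something sensible ("return the first item").
        var selectedItem: String? {
            get { selectedItems.first }
            set { selectedItems = newValue.map { [$0] } ?? [] }
        }
    }

    var s = Selection()
    s.selectedItem = "a"           // old callers still work
    s.selectedItems.append("b")    // new callers see everything
    print(s.selectedItem ?? "none", s.selectedItems)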


Quite often, especially if you make a variation of an existing class (or an extra type in an ADT).

Scala does this quite transparently. Something defined as a `var` (variable), `val` (immutable variable), `lazy val` (lazy immutable variable) or `def` (method) is called the same way, so switching between them is source- and binary-compatible from the caller's side.

I've never seen this trick anyone, except for maybe unexpected bad performance.


Aside from what the other poster mentioned, one major advantage is that I don't want to write getXXX and setXXX everywhere. And where I do need computed properties, does it matter if the value is precomputed or lazily computed (for example, a float derived from another float)?


Enough times that I remember it. Luckily, with modern IDEs, going from fields to accessors is just a "refactoring" menu away.


YAGNI. Seriously. That pretty much describes the main problem with Java - "maybe we'll need this coffee maker to also do julienne fries in the future!"


> But in a language like Java, where field access has its own semantics divorced from function calls, you need getters/setters to achieve a similar effect.

I think the problem with Java's approach wasn't just in allowing primitive slot access; it was in making that access simpler and less boilerplate-heavy than accessor access, so people are tempted to use it where it isn't really what they specifically want, and accessors feel like extra work. I think that most of the time when people complain about getters and setters, their main issue is just the extra syntax they require in languages like Java.

In contrast, Lisp makes using accessors the path of least resistance, so if someone does use primitive slot access, they probably consciously really wanted that. In a class definition like:

    (defclass my-class ()
      ((x :accessor x)))
The :ACCESSOR option lets you write (x instance) and (setf (x instance) new-value); with or without it you can write (slot-value instance 'x) and (setf (slot-value instance 'x) new-value) for primitive slot access. It's really just a shorthand for defining methods like:

    (defmethod x ((my-class my-class))
      (slot-value my-class 'x))

    (defmethod (setf x) (new-value (my-class my-class))
      (setf (slot-value my-class 'x) new-value))
But just providing that little shortcut (and not providing anything shorter for primitive slot access) is enough to make getters and setters the default, in a way that people rarely complain about.


The thing about CLOS accessors is that they define methods; they are not just for the sake of having the (setf (foo obj) val) shorthand.

By saying that you have an accessor x, you're writing a method called x. Actually two: x and (setf x).

You can write auxiliary methods for these: with :around, :before and :after methods you can intercept the accessor calls to do "reactive" kinds of things.


Design patterns are bug reports against your programming language. -- Peter Norvig

Java sure has a lot of patterns.


Appeal to authority. Show me a language with little to no patterns.


I don't think there is a language without patterns. But in other programming languages (Lisp, Haskell), we just call them functions.

Edit: Actually, I think your request is even more ridiculous. If what Norvig is saying is true, then every language has patterns, because every language has bugs. So you are asking me to substantiate his statement by producing what would be a counterexample to it.


About your edit,

It's Pattern => Language bug, not Language bug => Pattern. You are assuming the opposite meaning.

In other words, not all bugs are patterns, but all patterns are necessarily bugs (not IMO, it's just the interpretation).


I agree that you're right in a strict logic sense, but the statement that everything has bugs is already tongue-in-cheek.

Anyway, what I really meant was that every language has semantic limitations that can manifest as patterns which the users have to apply. Even Lisp (lacking static type checking) or Haskell (lacking dependent types).


Bingo!


Then what are the design patterns for Scala?


1. Cake pattern (now considered more of an antipattern I think).

2. Extending existing class with implicit conversion.


I would be interested in that elusive language in which nothing is repeated twice.


Parentheses Hell.



