Hacker News

Clojure is continuing the LISP subculture that has existed for a long time, nothing new here.

OO will continue to be successful for accessibility reasons, and its demise will continue to be predicted to have arrived.




I think you're right about OO's continued survival. I also think mikerichards is right that OO is just technicolor procedural programming. Moreover, I think these two things are related: OO is immortal in part because it's straightforward and obvious, just like procedural programming.

But I don't think Clojure is just continuing the (equally immortal!) LISP subculture. I think LISP is growing by recruiting people from OO-ish languages like Java, Python, and Ruby, rather than from existing LISPs. This is the first time in a long time that a LISP has done that. It's also the first time in a long time that a LISP has gotten even this glimmer of mainstream use. pg wrote 'Beating the Averages' in 2001 [1], and even after that, everyone still thought LISPers were freaks. But announce you're using Clojure in production today, and nobody bats an eye.

[1] http://www.paulgraham.com/avg.html


The advantage of OO has always been its accessible constructs and abstractions that mirror how we talk about things linguistically. Nouns are just sticky that way. It isn't a big delta over procedural programming, and you can find object systems in many C programs written before OOP was even a word. Heck, even Clojure has its own object systems; they just often call them entities rather than objects (what is an entity? well, it's an object, but not the object you are used to). I think our current OO abstractions suck, but the paradigm is OK, especially when considered in the abstract (go back to the Treaty of Orlando to see how diverse objects can be!). And the biggest pushers of objects in the early days were CLOS people like Richard P. Gabriel, and CLOS (edit: Flavors actually, as lispm points out) gave us mixins, which are a pretty advanced OO construct (though RPG argues the mixins that got adopted in OO languages were not the ones CLOS intended).

People were using LISP in production in the 90s as an advantage, but then they were also using Smalltalk or Objective-C (ironically, often in the same places... on Wall Street). A new generation is discovering LISP while learning other languages first, but the phenomenon is the same: OO was just less popular in the past, so they would come from C or Perl instead of Java.

[edit: not sure why you were downvoted, I thought your post was completely reasonable]


> The advantage of OO has always been its accessible constructs and abstractions that mirror how we talk about things linguistically. Nouns are just sticky that way.

But I only see that argument giving OO an advantage over some weird language that doesn't have structures. In C, Pascal, and COBOL you have "objects"... you just don't have any methods attached to them.

> And the biggest pushers of objects in the early days were CLOS people like Richard P. Gabriel, and CLOS (edit: Flavors actually, as lispm points out) gave us mixins, which are a pretty advanced OO construct (though RPG argues the mixins that got adopted in OO languages were not the ones CLOS intended).

I'd argue that methods and data don't belong together (like in CLOS), but don't have the inclination to argue it right now. I'll just say it's never felt natural to me, and actually hurts reusability.


> In C, Pascal, COBOL, you have "objects"....you just don't have any methods attached to them.

When I encoded my own objects in C, I definitely had v-tables (though I didn't know they were called that at the time).
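The same encoding works in any language with first-class functions; here's a sketch in Python rather than C, with a shared dispatch table standing in for the v-table (all names are made up for illustration):

```python
# A hand-rolled "object": plain data plus a shared v-table of functions.
# This mimics the C trick of a struct holding a pointer to a table of
# function pointers.

def circle_area(self):
    return 3.14159 * self["r"] ** 2

def square_area(self):
    return self["side"] ** 2

# One table per "class", shared by all its instances.
CIRCLE_VTABLE = {"area": circle_area}
SQUARE_VTABLE = {"area": square_area}

def make_circle(r):
    return {"vtable": CIRCLE_VTABLE, "r": r}

def make_square(side):
    return {"vtable": SQUARE_VTABLE, "side": side}

def send(obj, method, *args):
    # Dynamic dispatch: look the method up in the object's v-table.
    return obj["vtable"][method](obj, *args)

shapes = [make_circle(1.0), make_square(2.0)]
print([send(s, "area") for s in shapes])
```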

> I'd argue that methods and data don't belong together (like in CLOS), but don't have the inclination to argue it right now.

It is difficult not to put behavior and data together while still having encapsulation (though it is still possible). It makes much more sense for functional programming, since values lack identities (they are completely defined by their structure), but objects have state that needs to be protected for sanity reasons.


Yeah, but your argument is kind of circular: we put behavior and data together because we need encapsulation, and we need encapsulation because objects have an identity that needs protection; yet those objects only need an identity because we are putting behavior and data together, in combination with mutability of course.

Also, encapsulation happens quite well in languages that do not have the "private" or "protected" keywords. In JavaScript encapsulation happens through closures and local scoping. In Python it happens by convention. Also, take most software ever developed in an OOP style and you'll find plenty of examples of leaky encapsulation (i.e. APIs that leak implementation details), starting with the popular Iterator. IMHO, the best abstractions I know come from languages that are not OOP at all.
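The closure style works in Python too, not just by convention; a minimal sketch (function names are my own, for illustration):

```python
def make_counter():
    # 'count' lives only in this closure's scope; there is no attribute
    # or keyword like "private" involved, yet the state is unreachable
    # except through the two functions returned below.
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    def current():
        return count

    return increment, current

inc, cur = make_counter()
inc()
inc()
print(cur())  # state is observable only through these two functions
```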

I'd also argue that the encapsulation you're talking about doesn't have much to do with OOP in general, only with a particular flavor of it. But that's the problem in all conversations about OOP - take any 2 random people and they'll have different opinions about what OOP is.

For me OOP is about subtyping, or more specifically a form of ad-hoc polymorphism that's resolved at runtime. It's very convenient for modeling certain kinds of problems for sure: graphical UIs are the best example, or a browser's DOM, and even die-hard detractors would find it hard to argue that a Monoid isn't a Semigroup, or that a Set shouldn't be a Function. In fact, objects in OOP do not necessarily need an identity, in which case they are just values, yet you can still put that polymorphism to good use.

But then OOP as implemented and used in mainstream languages is truly awful; it's no wonder that people still consider C with half-baked object systems to be something acceptable.


Objects require encapsulation because they have something (mutable state and sensitive data) to protect. Also, I hope it's obvious that with identity you get mutability automatically (just use an immutable map, as in Clojure!), and you can't really have meaningful mutability beyond global variables without identity.
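The "identity gives you mutability over immutable values" point can be sketched outside Clojure; here's a toy ref in Python holding read-only mappings, so change exists only at the identity. This is an analogy with made-up names, not Clojure's actual reference types:

```python
from types import MappingProxyType

class Ref:
    """A toy identity: a stable name for a succession of immutable values."""
    def __init__(self, value):
        self._value = value

    def deref(self):
        return self._value

    def swap(self, f, *args):
        # Replace the current immutable value with a new one; only the
        # identity changes over time, never a value already observed.
        self._value = f(self._value, *args)
        return self._value

def assoc(m, k, v):
    # "Update" an immutable mapping by building a new one.
    d = dict(m)
    d[k] = v
    return MappingProxyType(d)

fred = Ref(MappingProxyType({"name": "Fred", "age": 41}))
old = fred.deref()            # observing a stable value, no locking needed
fred.swap(assoc, "age", 42)   # the identity now points at a new value
print(old["age"], fred.deref()["age"])
```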

Of course, encapsulation can occur at module boundaries, and even when programming with objects, a notion of "internal, package private, or friendship" is often useful. But it is pretty well acknowledged in the PL community that Javascript does encapsulation in the worst way possible (something OOP and FP people can actually agree on).

> But that's the problem in all conversations about OOP - take any 2 random people and they'll have different opinions about what OOP is.

Most people focus on a laundry list of pet or hated features when defining OOP, but to me, it's all about the objects and whether you are thinking in terms of objects in your design. So...

> For me OOP is about subtyping or more specifically a form of ad-hoc polymorphism that's solved at runtime.

I think these are very useful when thinking in terms of objects, but features like this do not "define objects"; rather, "working with objects makes these features desirable." Since you mention it...

> In fact, objects in OOP do not necessarily need an identity, in which case they are just values, yet you can still put that polymorphism to good use.

I would say subtyping is useful for values, but not that anonymous values have somehow become objects because you are manipulating them with subtyping! If I have two values, say 42 and 42, they are the same value no matter how they were computed, stored, retrieved, and so on. They cannot have state (since you can't have state without identity); the different 42s are indistinguishable. In fact, since values are defined solely by their structure, other forms of subtyping might be considered over nominal, like structural, and you might want to abstract over them using type classes. But the reasoning is math-like and equational; you aren't really doing OOP at that point, even from a design perspective (this is my position; of course it is wide open to debate in the community).

I like C# since it provides both values and objects, and they are adequately separated even if some subtyping still applies to values. But when I am manipulating structs, my design perspective has shifted away from objects; I don't think of them as objects.

> being no wonder that people still consider C with half-baked object systems as being something acceptable.

Back in the 90s, we didn't have much else, C++ still wasn't very trusted; Java was very new. Or if you are referring to C++ and Objective C today, I really couldn't disagree with that.


> I like C# since it provides both values and objects

I don't think the distinction is so clear cut. An immutable List is a value, because it is clearly defined by its structure, yet you need heap-allocated values in C# (so objects) because you need pointers. An immutable List would also implement various interfaces, like Iterable, so polymorphism is useful as well.

I also forgot to say that the polymorphism that we get with OOP is NOT enough. Static languages also need generics and all languages also need type-classes or something similar. Type-classes are different and useful for different use-cases than OOP, because the dictionary is global per class and not per instance. And if it is solved at compile time, like in Haskell or Scala, it also has some performance benefits.

Clojure's protocols for example are pretty neat and get pretty close to type-classes.
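Python offers a rough flavor of the same idea with functools.singledispatch: the dispatch table is attached to the operation rather than to a class, so cases can be registered for existing types after the fact, protocol-style. A loose analogy only, not type classes proper:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Open-ended operation: new cases can be registered for existing
    # types without touching their definitions.
    return f"something: {x!r}"

@describe.register
def _(x: int):
    return f"an int: {x}"

@describe.register
def _(x: list):
    return f"a list of {len(x)} items"

print(describe(42), describe([1, 2, 3]), describe("hi"))
```

Like a protocol (and unlike per-instance virtual dispatch), there is one global implementation per type for this operation.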


Mixins are from Flavors, not CLOS. CLOS has no special support for mixins - they are just a sub-functionality of CLOS.


Thanks, my understanding of history is probably a bit confused:

http://en.wikipedia.org/wiki/Mixin#In_Common_Lisp

In RPG's incommensurability essay [1], he describes it like this:

> In that paper they described looking at Beta, Smalltalk, Flavors, and CLOS, and discovering a mechanism that could account for the three different sorts of inheritance found in these languages—including mixins from Flavors and CLOS.

They did come from Flavors first regardless (mixins being named after the mix-ins at an ice cream shop near MIT, so I heard). I think it was also the first system to allow for aspect advice via a first-class construct.

[1] http://www.dreamsongs.com/Files/Incommensurability.pdf


Mixins were introduced with Flavors by Howard Cannon in 1979. Flavors/Mixins were used to implement some of the early MIT Lisp Machine user interface, IO and networking code.

http://lispm.de/docs/Publications/Flavors/Flavors,%20a%20non...

'aspect advice' was mostly introduced in Lisp in 1966 by Warren Teitelman: http://dspace.mit.edu/handle/1721.1/6905 See the discussion of ADVISE and a BEFORE example in the paper... From there it came to Interlisp, Maclisp, Flavors and CLOS...


Thanks. The Teitelman work is what I cite as the introduction of aspects, but I didn't realize it was more than just a pattern back then.


> The advantage of OO has always been its accessible constructs and abstractions that mirror how we talk about things linguistically.

I don't think there's much of a connection to natural language. OO is about commanding objects through messages; if we must make a connection to language, it would be equivalent to the imperative mood.

> Heck, even Clojure has its own object systems; they just often call them entities rather than objects

Which object systems would these be?


> I don't think there's much of a connection to natural language. OO is about commanding objects through messages; if we must make a connection to language, it would be equivalent to the imperative mood.

I've designed non-imperative OO languages before; e.g. http://research.microsoft.com/apps/pubs/default.aspx?id=1793...

The community didn't scream out and say "but that language doesn't have messages, it can't be object oriented!" Of course, Alan Kay wasn't at ECOOP that year, but I did get an argument from Ralph Johnson that what I did was just Smalltalk :p.

> Which object systems would these be?

Clojure's (Rich Hickey's?) ideas about OO are surprisingly close to my own:

http://clojure.org/state

> OO is, among other things, an attempt to provide tools for modeling identity and state in programs (as well as associating behavior with state, and hierarchical classification, both ignored here). OO typically unifies identity and state, i.e. an object (identity) is a pointer to the memory that contains the value of its state.

The important thing about objects is their identity; they have names like Fred or Bob; they aren't anonymous values that can only be identified by their structure, like 42 or (2, 3).

> There is no way to observe a stable state (even to copy it) without blocking others from changing it.

He is not against objects, just how they are realized in imperative languages.

> OO doesn't have to be this way, but, usually, it is (Java/C++/Python/Ruby etc).

Yep. So he solves the problem in a smart way:

> In coming to Clojure from an OO language, you can use one of its persistent collections, e.g. maps, instead of objects. Use values as much as possible. And for those cases where your objects are truly modeling identities (far fewer cases than you might realize until you start thinking about it this way), you can use a Ref or Agent...

I will disagree: as soon as you have a collection with a key that is a GUID (or a name like Fred or Bob), you've just invented an object, whether its properties are embedded in the object or not.
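That claim can be made concrete. In the sketch below nothing in the language calls anything an "object", yet the uuid key gives the entry a stable identity across changing values (field names are invented for illustration):

```python
import uuid

# A "database" of records keyed by identity.
people = {}

fred_id = uuid.uuid4()            # the identity: stable across updates
people[fred_id] = {"name": "Fred", "age": 41}

# Later: same key, new value. The key, not the value, is what persists,
# which is exactly what makes this entry behave like an object.
people[fred_id] = {**people[fred_id], "age": 42}

print(people[fred_id])
```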

Clojure just has its own ways of doing OO programming. If you hate OO, then you might simply claim "it is not OO", but this definitely is not pure functional programming that lacks identity altogether (and pure Haskell solutions really do avoid all of this; Haskell is very non-OO in ways that pragmatic LISPs are not).

Note that there are other ways of fixing this problem; one can manage time so that object properties are always observed safely. This is the approach I'm currently taking with my own research:

http://research.microsoft.com/pubs/211297/onward14.pdf


> as soon as you have a collection with a key that is a GUID (or a name like Fred or Bob), you've just invented an object

That's an unusually broad definition of an object. Is that the only criterion you have? Does the key need to be unique?


If you are contrasting objects to pure values, that is really the main distinction: objects have names at design time, compile time, and run time; values do not. I don't think it is unusually broad (at least it is not considered a radical position in the academic OOP community). Everything else (encapsulation, methods attached to data, subsumption, nominal subtyping) requires that an object have its own identity, and further supports its "objectness."

Even if you didn't have these features in your language, they are fairly straightforward to construct in an ad hoc way as long as you have identity (that includes aliasing, obviously). Some kind of object system is often invented while building C programs of non-trivial size.

If the key isn't unique, then multiple objects could share the same state...they would be the same object!


I'm not sure what you mean by a "name". Do you mean the same thing as Hickey's definition of "identity", i.e. "a stable logical entity associated with a series of different values over time"?


Yes. To identify something whose fields change over time, it needs a name (even if that name is just a GUID or an address). Without fields you don't need names, but that isn't a very interesting case (it is hard to scale up programs with just what are essentially global variables!).

Values are anonymous in contrast: you can't really tell the difference between this 42 and that 42.
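Python happens to make this contrast directly observable: `==` compares by structure (value), while `is` compares identities. Lists make a cleaner demo than 42, since CPython caches small integers:

```python
a = [2, 3]
b = [2, 3]

# As values they are indistinguishable: same structure, so equal.
print(a == b)   # True

# As identities they are distinct: two names for two separate things,
# which is why mutating one does not affect the other.
print(a is b)   # False
a.append(4)
print(b)        # still [2, 3]
```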


So would you consider a pointer to a memory location to be an object?


Use the wiki [1]:

> In computer science, an object is a location in memory having a value and possibly referenced by an identifier.

Not useful huh?

I would say, if the memory location is heap allocated and has mutable fields, then it is probably acting like an object (it could also be a value, it depends if its identity is important or not).

[1] http://en.wikipedia.org/wiki/Object_(computer_science)


Wikipedia appears to distinguish between "object-based" languages and "object-oriented" languages. On the subject of object-oriented languages, it says:

> An object has state (data) and behavior (code).

Which is the definition I'm most familiar with. I wouldn't class a Clojure reference as an "object", as it has state but no inherent behavior.

> I would say, if the memory location is heap allocated and has mutable fields, then it is probably acting like an object

So a reference to an immutable map wouldn't count as an object for you, because there are no mutable fields?


The reference to the immutable map doesn't count (an immutable map is just a value), but keys into the map can form objects (in that they represent "object" properties). Actually, that is how you get objects without first-class mutability (identities must still be generated, of course, which is the same thing).

Yeah, the wiki article makes a distinction between programming and designing with objects, and object-oriented programming. All my arguments are about designing and programming with objects vs. designing and programming with values; call it "programming with objects" if you must.


Because people come to the HN comments for stimulating, challenging, passionate expressions of thought, not reasonable ones!


Not sure what you are talking about - Clojure has a great many OO-ish constructs - even local state! The only thing that's really missing is deep support for static forms of inheritance.

As a programmer who feels quite at home with OO idioms, I find Clojure distills it down to the parts I actually like.


I totally agree with this, and LISPs have always been pragmatic about state, although I get the feeling that Clojure emphasizes purity a bit more than other LISPs. I was replying to two separate points from the parent; the points weren't meant to go together :)


Well, not all LISPs are considered friendly from our modern context. Just consider my favorite 'love to hate' Emacs Lisp example: http://www.emacswiki.org/emacs/DestructiveOperations Check out the bit where using 'sort' incorrectly can self-modify the function. Anyhow, maybe Clojure is "just" another LISP, but it is really nice: many small improvements make a big difference.


Well, it isn't very weird that today's LISP is better than yesterday's. Clojure also takes purity a bit more seriously, though nowhere near as seriously as something like Haskell.


"Well, it isn't very weird that today's LISP is better than yesterday's."

I wouldn't be so sure of that.


I wasn't predicting a demise, just like I wouldn't predict the demise of C, COBOL, or other procedural languages.

What I meant was: is there a more significant portion of corporate decision makers that don't look at OO as the default programming methodology than, say, 10 years ago?

I'm actually pretty surprised at the uptake of Clojure, especially compared to Scala, which would intuitively be the next language for corporate adoption on the JVM.

I think there are many developers and decision makers that are looking for simplicity now.


Do you have any numbers on Clojure vs. Scala adoption? The Clojure community is definitely vocal, but Scala has had a lot of successes too, and at this level of marketshare it is difficult to say whether they are equal or adoption is off by an order of magnitude. Scala in particular has done well in the systems community, oddly enough.


Anecdotally, I've been applying for jobs using Scala and Clojure in Australasia and Europe, and the Scala jobs seem more numerous and far better paid.


I don't have any numbers, but given the relative youth of Clojure vs Scala, and the adoption that Clojure is getting, I'd say that the trajectory is pretty much on Clojure's side.

And this is coming from someone that isn't particularly keen on s-expressions or dynamic typing. Like I said in my first comment, I'm pretty much amazed at the adoption of Clojure by some mainstream companies.


Sean, I don't think that's really fair to either Scala or Clojure, though. Yeah, both of their numbers are small potatoes next to Java or C#, but not insignificant... as in, last week there were 4 Clojure users and now there are 16... OMG, there's been a 400% jump!


We are talking about adoption rates between Scala and Clojure. When Scala was a few years old, I am arguing that its adoption rate was similar to what Clojure is experiencing today. So looking at the "current trajectory" doesn't tell the whole story. It is adoption over the long term that will tell us who has won (or is winning) the language wars.

The xkcd cartoon is obviously exaggerated satire.


Well, there is always http://xkcd.com/605/

When your base is high, adoption trajectories are pretty high, but level off fairly quickly. We still need a few years to tell what is actually going on.


"base is high" should be "base is low". Trajectories level off over time.


In order for that to be a testable prediction, the terms need to be made precise. OO is a famously vague concept with many different concrete incarnations. There's single dispatch vs. multiple dispatch. Within single dispatch you have prototype-based vs. class-based. You have objects as a record of closures. With multiple dispatch you have dispatch on class, or predicate dispatch, or full pattern matching. There's the question of whether some form of identity or state is part of OO. The concept is broad enough that you can easily stretch it so that almost any modern language is OO in some form. A functional or logic programming language where functions can be split into multiple cases with pattern matching is arguably just a generalization of predicate dispatch.

If on the other hand we restrict the term to languages where values are essentially identity + a record of methods (whether through the indirection of a class v-table or directly stored in the value, as in prototype-based languages), I do think that model has largely run its course. It will certainly remain popular in the near future, but people are increasingly realizing that modeling everything as a record of methods + identity (+ state?) isn't a good way to do it. There are certain things that lend themselves to that way of modeling, but most things are best modeled in some other way. Records of functions + identity is just one particular data type among many, and it's not any more special than any other data type. Modeling everything as an object is like Tcl, where you model everything as a string.

OOP is nice on the surface because you have some vague analogy to real-world objects, but that does more harm than good. It is similar to agile or TDD, where people try to sit down at the computer and immediately start typing code without any careful thinking beforehand. Like TDD and agile, OO also facilitates that kind of programming because people can take the concepts in their head and immediately start typing class definitions. It works for easy problems, but even there it does not lead to a pretty end result (one of the symptoms that sticks out most is the question 'on which class does this method belong?').

Rather than designing your program to model the real world, it usually works much better to design your program around the problem you're trying to solve. Rather than starting with the real world and making an analogue to it in your program, you start with the problem you're trying to solve and come up with a data model that facilitates solving that problem. A famous example of the former style gone wrong is Ron Jeffries, who tried to write a sudoku solver with an OO + TDD approach in a series of blog posts. He modeled his program with classes that seem natural for the problem, and he wrote tests for those. Unfortunately, writing a sudoku solver is not one of those problems where you can just start banging out code without any thought and get it to work, so he got stuck: he did not think about how to actually solve the problem, he only thought about how to model the real world with OO.


I honestly think people are becoming smarter and less ideological about this: values where they make sense, objects where they make sense. Values definitely are good because they don't lie, they aren't biased; you are basically doing math when manipulating them.

But how often do you know the right solution to a problem? Not very often, and many problems are ill-defined from the start. Objects are just units of natural language: we can lie about their relationships, include bias and stereotypes, and be wrong, but they don't come with the often unrealistic requirement that we be right. OO is agile; it allows us to start working on the solution right away, and that gives us experience with the problem.

But it won't always work! Sometimes you really do need to sit down and work out the math, turning off your natural thought biases by not using objects. And there are definitely aspects of a problem that are more mathy and not very suitable for object-based design and programming.

So if objects are the only tool in your bag, you are gonna get hurt, but if values are the only tool in your bag, well, the same.



