Soup – Alan Kay on Objects (fogus.me)
212 points by grzm on Oct 29, 2018 | 78 comments



Allow me one suggestion about Kay: it seems the man knows way too much to be understood by the average one of us. Not pompously; it's just that when he means A we read B. That led to the drift between Smalltalk OO and C++ OO. I thought it was exaggerated, but Smalltalk really is entirely different from C++/Java. It has the same vocabulary, but the reality is quite surprising.

Kay had an education in abstract algebra and (apparently) cell biology, and he was fond of old, varied computing architectures (Burroughs). So my point is: before reading his texts, do yourself a favor and dig into the literature deeply and broadly.


Seconded.

Took me about 5 long videos and some Google searches to get into his frame of reference, but it was well worth it. The number of useful ideas I got out of his talks is absolutely astounding. He really changed the way I think about programming, computing, user interfaces and complex system design.

I am slowly watching through all of the videos from Kay I can find.

A good compilation to start with: http://vpri.org/talks.htm

You can also search Vimeo for more.


Yoshiki Ohshima regularly posts interesting "new" videos on his YouTube channel [1]: not only Alan Kay's talks, but also content produced at PARC back in the day, or by relevant people from that environment.

If you need a less abstract teaser before getting into the talks and "philosophy" part of the game, here's a demo of something called "Alternate Reality Kit"[2] back in 1986. Brace yourselves.

1. https://www.youtube.com/user/yoshikiohshima

2. https://www.youtube.com/watch?v=I9LZ6TnSP40


This is the most complete list of Alan Kay's talks I've found: https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay


This is one of my favorite Alan Kay talks of all time (I think this is the right version, but at the time it wasn't on YouTube)...

Programming and Scaling - Alan Kay https://www.youtube.com/watch?v=YyIQKBzIuBY

There's also one where he discusses anthropology and the human universals, but the title of that talk has escaped me for years. Anyone remember what that one's called?


Learning about all the cool things that happened at Xerox, not only Smalltalk but Lisp and Mesa/Cedar as well, completely changed my point of view on UNIX and the shortcuts it took.


It might also help to know Kay has said the Internet is a good example of "Real OOP": A system made up of other systems sending messages that are "desires", not commands. (paraphrasing)

(He has also been quite humble. He claims his own original ideas aren't very good and all the best stuff came from others. On Smalltalk, he also mentions Dan Ingalls did most of the work while Kay only did the mathematics.)


> the Internet is a good example of "Real OOP": A system made up of other systems sending messages that are "desires", not commands.

Or: OOP is what you get when you take the programming model required, at the software level, to get two hardware devices that don't trust each other's engineering to successfully exchange data (like, say, a computer in a public library and the smartphone you've just plugged into it over USB), and then try to shrink it down as far as it'll go while still retaining the ability to represent and handle every potential failure mode of the original exchange.


It's funny because he keeps ranting about how things are so subpar. It can appear un-humble at times.


What if things really are subpar and you simply don't have the frame of reference to see it?


There is no silver bullet.

Actors are overrated. Either you assume that any message may get lost, and you have to deal with failure, which is always painful. Or you assume the sender and receiver share a failure domain, and you're introducing asynchrony for no clear reason.


Erlang/OTP. Failure can be orthogonal to stability, and it doesn't have to be painful.


This is the draft of the original 1972 unpublished memo where Alan Kay introduces the idea of object programming:

Alan Kay: A Personal Computer for Children of All Ages (1972) [pdf] http://www.vpri.org/pdf/hc_pers_comp_for_children.pdf

NB: Double check me on that -- is this the original unpublished one, or the revised one?


No, it's due to performance. C++ OOP is somewhat close to the idea (purists, don't say that it's not) without sacrificing too much perf. Objective-C spends something like half of its CPU time in objc_msgSend. And that's ignoring the cost of "far" jumps.


And Smalltalk - for much the same reasons - has some pretty quirky code in the VM to be able to make the concept of message passing between objects work fast enough to do real work. There is a pretty sharp dividing line in the VM between 'the elegant stuff' and 'the hack to make it all work', where the primitives are activated.

Smalltalk primitives and their classes are of necessity implemented outside of the language; they 'just work', and you cannot inspect their implementation within the VM because that would require the VM to reflect on its own implementation, which it cannot do for obvious reasons.

So even though Smalltalk is 'self-aware' to an extreme degree, that self-awareness does not extend all the way to the bottom, and I don't see how that would be possible anyway. At some level you have to make the jump from the CPU you'd like to have (the one executing Smalltalk directly) to the one that you actually have.

There is this guy: http://www.merlintec.com/lsi/jecel.html who has been working on a ST hardware implementation for a long time; I am not aware of its current status.

But note that even in that situation you will still be limited, just not at the level of the VM builtins but at the level of the CPU instruction set implementing the primitives.

Ideally Smalltalk would be able to transcend the boundaries of the computer it runs on, sending messages to other computers with proper error handling; that would be some kind of hybrid between the BEAM and the Smalltalk VM.


I guess it's not the same kind of introspection because it adds a level of indirection, but I've heard stories of the VM itself being implemented in something called "Slang" [1], a kind of lower-level Smalltalk that easily compiles to C but can also be interpreted in Smalltalk itself (via an Interpreter class that is perfectly normal Smalltalk).

Information is quite sparse on that topic, and my knowledge is quite fuzzy there but I'd love to find more info on that (in case anyone can provide links.)

1. http://wiki.squeak.org/squeak/slang


When Pharo and Squeak people do VM development, they do it inside a special image equipped with the VMMaker package [1]. What this package has is both "slang" -- which can easily compile to C -- and an emulator running the development VM inside the smalltalk image.

So with some exceptions, VM development occurs almost exclusively inside Smalltalk itself. In fact, this is how Alan Kay, Dan Ingalls, and others brought Squeak to life from an old Apple Smalltalk implementation. They touched C directly as little as possible and took the Smalltalk concept very seriously.

[1] http://pharo.gemtalksystems.com/book/Virtual-Machine/Buildin...


When the original Smalltalk-80 "blue book" came out in 1983, it included a complete listing of a reference implementation of the virtual machine.

http://sdmeta.gforge.inria.fr/FreeBooks/BlueBook/

Instead of using Pascal or C, the code was written in a subset of Smalltalk-80 itself. This subset (later named Slang in the Squeak project in 1996) doesn't use objects or polymorphism. It only deals with an Array of integers and only uses a few control structures, so every expression has a C equivalent.

The first VM implementers at Apple, DEC, Tektronix and HP manually translated this code into Pascal, assembly or C as their starting point and then evolved their code from there.

Much later it was shown that this code could actually run inside Smalltalk-80.

http://ftp.squeak.org/docs/OOPSLA.Squeak.html


Yeah, anyone reading this who is curious should really check out the Blue Book; it is fantastic.

There is a follow-up book called "Smalltalk-80: Bits of History, Words of Advice" that discusses some of those early VM implementations.


...which is not all that strange. If you were using a Lisp machine (or, say, IBM z/OS), and you wanted to do POSIX software development, you'd probably

1. use your own POSIX machine to cross-compile a POSIX ABI simulator for the target OS; and then

2. deploy a POSIX sandbox to the target, with POSIX tools like gcc inside, and continue work on things like the POSIX ABI simulator in there.


Thanks for mentioning my project! I am still working on the design of Smalltalk hardware, but this year I have expanded that a bit to include efficient emulation of other machines (such as x86 or ARM) and not just the Smalltalk VM.

Note that in Self many of the implementation tricks needed to make Smalltalk run fast were eliminated. And I see no reason why the number of primitives can't be as few as 12 or so. The rest are needed either for speed (but a good compiler can let normal Smalltalk code do the same job) or to talk to the operating system (but a Smalltalk on "bare metal" doesn't need that).


Well even the standard C library is impossible to implement 100% in plain ISO C without resorting to language extensions or Assembly.


Nope. Even with badly written Objective-C that tends to max out at around 16% visible in the profiler. And with "badly written" in this context, I mean code that treats the hybrid language Objective-C as a pure OO language.

objc_msgSend() is generally not the performance problem in Objective-C code, and the amount of stupid stuff done based on the misconception that it is is truly astounding. That includes CoreFoundation ("it's C, so it must be fast, oh but we need flexibility, so let's use a CFDictionary...") and Swift, which manages to be slower than Objective-C at just about everything.

And yes, I've done the measuring: http://www.mypearsonstore.com/bookstore/ios-and-macos-perfor...


When Objective-C was originally written, there were no L2 caches etc. Would you say the performance penalty objc_msgSend caused then, compared to now, was greater? I remember looking into it back then and it was regarded as too slow compared to C or C++, but hardware has changed. Is the hardware now more optimised to handle Objective-C than 20 or 30 years ago, would you say? (Looks like an interesting book by the way - now on my reading list :-) )


"As expected, the object-oriented version is slower; about 43% slower, primarily due to the extra overhead of message passing. [..] This overhead is approximately 2.5x that of calling a function and far slower than accessing memory directly with a C expression" -- OOP, and Evolutionary Approach, Brad Cox 1986

Note the comparison to memory access. Nowadays, an actual memory access (one that doesn't hit the caches) is more than an order of magnitude slower than a message-send. So if you can save a single DRAM access by performing 10 message-sends, you will come out ahead. So yes, the changes in hardware, particularly the relative changes, have made a huge difference. For a lot of programs today, the stuff that happens between stalls waiting for DRAM is essentially free.

Also, Apple has continuously optimised objc_msgSend(), it is quite a beast these days.

Things that make Objective-C/Cocoa code slow are many, certainly the prevalence of NSDictionary (and CFDictionary) and other forms of keyed access, particularly because NSString, while a great implementation for user-facing Unicode strings, is extremely heavyweight for use as a symbol. Then there are call-return based APIs where streaming would be more appropriate (NSJSONSerialization/NSPropertyListSerialization), and the fact that APIs that look like they would be streaming (keyed archiving, Swift coding) actually build NSDictionaries underneath. CoreFoundation was also quite the disaster, performance-wise (more than a 2x regression compared to pre-CF Foundation), with Foundation itself also a significant regression, partly because it eliminated convenient collections for C types except bytes (NSData).

Integer or float arrays are trivial to add ( https://github.com/mpw/MPWFoundation/blob/master/Collections... ) and 100 - 1000x faster than NSArrays of NSNumbers. The binary property list format is capable of both random access (lazy loading...) and of directly encoding object graphs, but the APIs don't expose those capabilities.

Stepstone Objective-C had stack allocation for objects; this was eliminated by NeXT and not reinstated by Apple, so objects require heap allocation. Etc.

While I also agree that Objective-C was regarded as too slow compared to C++ in particular, that was never really true. Just as a small example, I did image processing in Objective-C, and yes, if you were to apply OO and message-sending on a per-pixel level, you'd be in a world of pain. But even at scan-line granularity, the overhead was already less than 1%, so in the noise. My Objective-C Postscript interpreter is faster than the Adobe interpreter (C-based last I checked) used by Apple at basic language-oriented tasks such as arithmetic/loops.

And of course, you wouldn't really want to use the OO part of Objective-C for operations on low-level C types such as integers; the syntax is not really inviting compared to C for that sort of stuff. So I always thought that the hybrid nature (coarse-grain objects connecting C-level implementations) worked really well, for both expressiveness and performance. The XML parser example in the book shows how you can combine low-level C character processing with messaging APIs to get both amazing performance (significantly faster than libxml2) and extremely convenient/powerful APIs. IMHO :-)


It's very understandable that developers would want to stay away from C and stick to the nice OO features of Obj-C.

That's probably why the language was unceremoniously dropped by Apple - it's hard and unpleasant to reconcile these two worlds.

Btw, what is the perf problem then? Refcounting, boxing?


> it's hard and unpleasant to reconcile these two worlds

Couldn't disagree more. For the C types that tend to occur in bulk processing (bytes, integers, floating point numbers), C is great and Objective-C is...er...not so great ;-)

On the other hand, Objective-C is fantastic for hooking modules of these C-based data crunching methods together.

> Btw, what is the perf problem then? Refcounting, boxing?

Refcounting can be a problem, yes, but pre-ARC it was both much less so and pretty easy to avoid if you noticed it. There's also the fact that NSObject and subclasses had no inline refcount, so they went to an external hash table for refcounting if it ever exceeded 1 (1 was implied by the object being alive).

More in the other answer: there isn't really "the" perf problem, there are various problematic implementations and habits promulgated by Apple, often without any good reason.


[flagged]


Pick an app and I will profile it.


iTerm2? During a 'cat' of a 1MB text file.

Or Sublime Text, from launch, opening a 1MB text file, modify 10 chars at the beginning, save, and exit. Edit: NVM, it's written in C++


> Or Sublime Text, from launch, opening a 1MB text file, modify 10 chars at the beginning, save, and exit. Edit: NVM, it's written in C++

It doesn't use a "gap"? Or some other appropriate text-editing data structure?


I think the parent commenter was referring to the problem that happens when you go to save.

(Which is, really, just a problem with how filesystems only allow files to be represented as little seek(2)able character devices. If there was an API that would let a userland program treat a [sparse] file's data as a vector of extents—and arbitrarily manipulate that vector, such as by creating a new raw on-disk extent, writing data to it, and then prepending it to the file's extents list—then this performance problem would disappear, without any filesystem even having to change its data architecture.)
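
As a purely hypothetical illustration of what such an API could look like (Python, in-memory only, every name here invented; no real filesystem exposes this), the file becomes a vector of extents, so an insert at the front is a splice of that vector rather than a rewrite of everything behind it:

    class ExtentFile:
        """Hypothetical: a file exposed as a vector of extents, not a flat byte stream."""
        def __init__(self):
            self.extents = []  # each extent is just a bytes blob in this toy model

        def append_extent(self, data):
            self.extents.append(data)

        def prepend_extent(self, data):
            self.extents.insert(0, data)  # cost proportional to the number of extents, not the file size

        def read_all(self):
            # what a traditional flat read() would see
            return b"".join(self.extents)

    f = ExtentFile()
    f.append_extent(b"world")
    f.prepend_extent(b"hello ")  # "edit the start and save" without rewriting b"world"
    print(f.read_all())          # -> b'hello world'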


"The big deal was encapsulation and messaging to provide loose couplings that would work under extreme scaling (in manners reminiscent of Biology and Ecology)."

This describes exactly why Erlang scales so well.


I've always tried to decipher Alan Kay's notion of object-oriented programming.

Erlang with its active objects seems to me to fit precisely his description. Active objects = the objects themselves decide when to run a method = receive a message and react to it.

As opposed to Java-like OOP, where the objects just sit there passively and are woken up in a blocking fashion by callers from outside.
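
To make that contrast concrete, here's a rough sketch (Python, with all names invented for illustration) of an 'active' object that owns a mailbox and reacts on its own thread, as opposed to a passive object whose methods run on the caller's thread:

    import queue
    import threading
    import time

    class ActiveCounter:
        """A toy 'active' object: it owns a mailbox and reacts on its own thread."""
        def __init__(self):
            self._mailbox = queue.Queue()
            self._count = 0
            threading.Thread(target=self._loop, daemon=True).start()

        def send(self, message):
            # the caller never blocks; it only drops a message into the mailbox
            self._mailbox.put(message)

        def _loop(self):
            # the object itself decides when and how to react to each message
            while True:
                message = self._mailbox.get()
                if message == "increment":
                    self._count += 1
                elif message == "print":
                    print(self._count)

    # passive style would be counter.increment(): it runs on the caller's thread and blocks it
    c = ActiveCounter()
    c.send("increment"); c.send("increment"); c.send("print")
    time.sleep(0.1)  # crude: give the mailbox thread a moment to drain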

It appears to me that the C++/Java etc. flavour of OOP uses objects just as a module system, but then has another module/namespace/package mechanism besides that: two concepts for essentially the same thing, just making the language unnecessarily complicated.


Polymorphism and inheritance, which namespaces don't provide, were also a big deal for Java and C++. That, and classes are a way to extend the type system.

Kay may not have had those things as his primary conception of OOP, but Smalltalk fully supports inheritance, polymorphism and type extension.


An object, under OOP, doesn't have to have an associated execution thread, in order to decide when to do what.

Instead, picture an object as a closure over mutable state, where the closure accepts an arbitrary serialized term (the "message") which it can do anything it likes with.

Note how such a "closure object" can call its runtime's timer API to get another message sent to it later. It could even do this in its constructor.

Note also how such a "closure object" can reprogram itself, by holding onto other closures as part of its mutable internal state, and replacing these with other closures [e.g. ones passed in as part of the message] in response to messages.

These two properties together mean that how a "closure object" responds to a message at a given time is entirely up to the closure—it can have changed into something that does something entirely different from what its "bootstrap" source code says, at any point since the start of the process. (Just like Joe Armstrong's Erlang "Universal Server" — https://joearms.github.io/published/2013-11-21-My-favorite-e...)
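
A throwaway sketch of that "closure object" idea (Python standing in for the closure; the message shapes and names are made up here):

    def make_object(initial_handler):
        # a 'closure object': state is captured in the closure, the only way in is send()
        state = {"handler": initial_handler}

        def send(message):
            # a ("become", f) message swaps the behaviour, like the Universal Server
            # becoming a factorial server
            if isinstance(message, tuple) and message[0] == "become":
                state["handler"] = message[1]
            else:
                return state["handler"](message)

        return send

    obj = make_object(lambda msg: "waiting...")  # the 'bootstrap' behaviour
    print(obj("hello"))                          # -> waiting...
    obj(("become", lambda n: n * n))             # reprogram the object via a message
    print(obj(12))                               # -> 144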


You may find the book "Concepts, Techniques, and Models of Computer Programming" helpful.

My understanding of how Erlang is approached in this book is as follows:

- the concurrent declarative/functional computation model has the critical advantage but also limitation of not allowing non-determinism of any sort

- client/server programming without a restricted form of non-determinism is impossible (namely, the server cannot know in advance which client the next message will come from, which is a form of non-determinism)

- this can be solved by adding a new concept: ports. Ports introduce a restricted form of explicit state, while allowing the rest of the program to be purely functional (see the sketch after this list).

- Erlang is classified as belonging to the computation model that results from adding ports to the concurrent declarative/functional computation model: the Message Passing Concurrent model

- the Object Oriented computation model is different: it is the result of adding explicit state to the (non-concurrent) declarative/functional model. Of course, you can then also add concurrency to the mix. The crucial difference with the Message Passing Concurrent model is that non-determinism is now not restricted to ports anymore, making reasoning about program behaviour way more complicated.

- so, if Smalltalk falls into the concurrent OO computation model, it is both more complex and more expressive than Erlang.
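
A toy illustration of the "port" idea under that reading (Python; all names here are made up): several senders share one port, the only non-determinism is the order in which messages arrive at the port, and the handler stays a pure function of (state, message):

    import queue
    import threading

    def pure_handler(state, message):
        # no side effects: old state + message -> new state
        return state + message

    def run_server(port, rounds):
        state = 0
        for _ in range(rounds):
            state = pure_handler(state, port.get())  # the port is the only stateful thing
        return state

    port = queue.Queue()
    senders = [threading.Thread(target=port.put, args=(n,)) for n in (1, 2, 3)]
    for t in senders: t.start()
    for t in senders: t.join()
    print(run_server(port, rounds=3))  # -> 6, whatever order the messages arrived in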


> Erlang with its active objects seems to me to fit precisely his description.

He's said exactly this [0]:

> We didn’t even do all of the idea at PARC. Many of Carl Hewitt’s Actors ideas which got sparked by the original Smalltalk were more in the spirit of OOP than the subsequent Smalltalks. Significant parts of Erlang are more like a real OOP language than the current Smalltalk, and certainly the C based languages that have been painted with “OOP paint”.

[0]: https://computinged.wordpress.com/2010/09/11/moti-asks-objec...


Your point still stands but, since the field has so many terminologies that are conflated, I wanted to let you know that "active objects" is a specific kind of actor model and Erlang is not an instance of that model. Active Objects is essentially Java but with asynchronous calls, much like using gen_server for everything.

A good paper that untangles the terminology is "43 years of actors: a taxonomy of actor models and their key properties".


> Erlang with its active objects seems to me to fit precisely his description.

Simula 67, one of the first (if not the first) OO languages, way back in 1967, was exactly this also.


OCaml has both! You have a (very powerful) module system, serving the purposes of encapsulation and namespacing, and other more powerful abstractions, and you also have dynamic/message-passing style OOP available to you.


...and no access to multicore processing that's been ubiquitous for over a decade?


I was quite amazed that this wasn't the case. OCaml clearly is working on it - http://ocamllabs.io/doc/multicore.html

Plus there are monadic ways of handling concurrency with lwt.


You can use multicores, just make processes.


A question: can OCaml’s object system be used to implement a runtime similar to that of Objective-C (things like objc_msgSend)? Or does OCaml’s type system impose restrictions that prevent this level of dynamic operation?


Does anyone actually use the OO stuff? Last I checked, the consensus was that it was a failure and that it should be avoided in modern OCaml.


Or any actor-based framework. The less state you share, the better off you are scaling-wise.


This is in the comments of the post but the author should really have linked to Alan Kay's comments here:

https://news.ycombinator.com/item?id=11808551


Fixed now. Thanks!


"For a variety of reasons — none of them very good — this way of being safe lost out in the 60s in favor of allowing race conditions in imperative programming and then trying to protect against them using terrible semaphores, etc which can lead to lock ups." - it would be great for you to lay those out! Also please tell us other Krautrock songs you dig!


Sylvan Clebsch's [1] answer to this problem is designed into Pony [2], a high-performance, lock-free and data-race-free actor-model language and runtime [3] based on a novel deny-capabilities model...

Deny capabilities for safe, fast actors [pdf]: https://www.doc.ic.ac.uk/~scd/fast-cheap-AGERE.pdf

Sylvan has since joined MSR Cambridge and is working on the distributed runtime. He's giving a talk at QCon London in March, and there are several talks on YouTube where he discusses the ideas he combined into Pony's design [4].

[1] https://github.com/sylvanc

[2] https://github.com/ponylang/ponyc

[3] https://www.ponylang.io/discover/#what-is-pony

[4] https://www.youtube.com/results?search_query=ponylang+sylvan


I think he's talking about David Reed's work: http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20...


For anyone reading this, Reed also worked with Kay on Croquet (an interesting project in its own right) which implemented this object-based idea of "time" via a framework called TeaTime [1].

You can try out Croquet for yourself in the latest version of Squeak -- see [2]

[1] http://www.wetmachine.com/inventing-the-future/controlling-t...

[2] http://forum.world.st/ANN-Open-Croquet-for-Squeak-5-2-32bit-...


That link is broken, but archive.org has it:

https://web.archive.org/web/20100713132640/http://publicatio...


Odd, it worked for me yesterday. I wonder if someone noticed a traffic spike on that URL and said 'hmm, that/those page(s) shouldn't still be there'


If you dig Can you'll also dig Neu!


This is lovely. There are amazing riches in Alan's comment history on HN, for anyone who wants to dive into it: https://news.ycombinator.com/threads?id=alankay1


Reposting my comment from discussion https://news.ycombinator.com/item?id=14318663 with a fixed YouTube link:

> Now this right here https://www.youtube.com/watch?v=NdSD07U5uBs&t=1362 is the problem: if you follow the idea of computers as virtualisers where it leads, you do not, in the general case, end up with the flat backplane talked about from 23:15 and shown in the accompanying slide. (Actually, this is in turn probably just a symptom of another, more general problem.)


Thanks for putting this together Michael! I always find your blog posts and magazines (Lisp, Forth, Clojure, etc.) very interesting.


Thank you for compiling this! All of the ideas in the text are incredibly powerful and important, but they are buried under megabytes of OOP trivia centering around C++ and Java.

I think at some point I will try to write an article "translating" his OOP ideas to something Java/C#/JS programmers can readily understand.


Haven't seen Michael on here for a while. Good stuff.


Thanks for the sentiment. The post is meant as a "living document" and will be updated with links and footnotes from now until I die (even if it kills me).


This created a simple temporal logic, with a visualization of “collections of facts” as stacked “layers” of world-lines. This can easily be generalized to world-lines of “variables”, “data”, “objects” etc. From the individual point of view “values” are replaced by “histories” of values...

Connect this idea with Conal Elliott's recently posted paper/talk on "The Simple Essence of Automatic Differentiation": https://news.ycombinator.com/item?id=18306860


Does anyone know why Smalltalk never made the leap to actors?


Smalltalk did not implement the message passing at the lowest level as an actual message between processes. It predates the actor model (which was inspired by it), which operates at a somewhat higher level than Smalltalk 'messages' do.

Smalltalk at the lower levels is a lot of very elegant hacks to give an outward appearance of one mechanism while being implemented in a very different manner. That is also one of the main reasons why it works as a VM: the kind of functionality required to detect the primitives at speed, so that the functions that do the real work can be called, is hard to implement natively in a way that is still efficient.

The top levels of Smalltalk are as elegant and simple as LISP, but under the hood Smalltalk is decidedly less elegant to make it work on the hardware available at the time.

Of course none of that stops you from implementing the Actor concepts at a higher level than Smalltalk objects and there were (are?) some efforts to do just that.


I’m no expert, but didn’t Carl Hewitt’s work on Actors (1973) predate Smalltalk by seven years or so? Or was there Smalltalk before Smalltalk-80?


Smalltalk started in 1969 or so, with the first release inside Xerox PARC in 1972 (Smalltalk-72). See the early history of Smalltalk: http://wiki.c2.com/?EarlyHistoryOfSmalltalk

See Carl Hewitt's retrospective on Actors, in which he attributes inspiration to Smalltalk, Simula and Lisp, here: https://arxiv.org/vc/arxiv/papers/1008/1008.1459v8.pdf


Yes, exactly; also, there's an acknowledgement of the influence of "SMALL TALK" at the end of Hewitt, Bishop & Steiger's original 1973 IJCAI paper as it appeared in print (https://eighty-twenty.org/2016/10/12/hewitt-bishop-steiger-i...).

Actors, Smalltalk, Monitors and CSP all co-evolved during the 1970s, with many of the principals visiting each other and exchanging ideas. Retrospectives like Brinch Hansen's (https://dl.acm.org/citation.cfm?id=155361) spell out some of the lines of influence.


Thanks, TIL



There have been various actor-related developments in Smalltalk, and others on YC will have far more knowledge :-)

But Distributed Smalltalk (John Bennet et al.), from the late 1980s, dabbled with actor-like constructs. John's thesis is still worth a read.

Actalk originally by Jean-Pierre Briot has been around since 1989 (and has supported implementations for Squeak and Pharo thanks to Serge Stinckwich).

Tony Garnock-Jones' Squeak Actors (https://tonyg.github.io/squeak-actors/) is (I think) the most recent implementation (and is lots of fun to use).


Doesn't OOP have the ontological problem:

Does the knife cut the cheese, or does the cheese get cut by the knife? Formally: Is it knife.cut(cheese) or cheese.getCutBy(knife)? And why is the first argument (knife in the first instance or cheese in the second) special? Maybe Alan Kay means something different and less overtly flawed when he talks about OOP.


I could be entirely off here, but it seems to me this problem is less present when you send messages instead of calling methods.

From the perspective of sending messages, the solution is that the knife object/actor sends a message to the cheese object/actor that it wants to cut it. The cheese can then respond to that message by changing its state to 'cut', or perhaps by sending a message back to the knife that it's not sharp enough for this particular chunk of cheese, and that the cheese remains in an uncut state. The knife actor/object could then decide to break, or sharpen itself, or whatever.
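
Something like this, perhaps (Python, everything here invented for illustration): the knife only expresses a desire, and the cheese decides what actually happens to it:

    class Cheese:
        def __init__(self, hardness):
            self.hardness = hardness
            self.state = "whole"

        def receive(self, message, **args):
            # the receiver interprets the message; the sender can't force anything
            if message == "please_cut" and args["sharpness"] >= self.hardness:
                self.state = "cut"
                return "ok"
            return "too_hard"  # the cheese declines and stays whole

    class Knife:
        def __init__(self, sharpness):
            self.sharpness = sharpness

        def cut(self, cheese):
            reply = cheese.receive("please_cut", sharpness=self.sharpness)
            if reply == "too_hard":
                self.sharpness += 1  # the knife decides how to react, e.g. sharpen itself
            return reply

    print(Knife(sharpness=1).cut(Cheese(hardness=5)))  # -> too_hard
    print(Knife(sharpness=9).cut(Cheese(hardness=5)))  # -> ok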

I haven't given this much thought yet though, so I'd love to hear how I'm all wrong about this!

EDIT: I guess this comment says something similar: https://news.ycombinator.com/item?id=18333652


Where is the state? What changes in response to the send? Here, it'd be the cheese, so it'd make sense to have the cheese be the receiver; but a real answer comes from moving further toward the Actor perspective. Given a distributed system with message passing, how do you choose to allocate locations to items of mutable state? That choice gives you your answer.

All single-receiver systems have a problem when faced with multiple otherwise-independent pieces of state that have to change in response to a given combination. You end up encoding various kinds of transaction to keep every member of the group on the same page.

Multiple-dispatch systems don't have that problem, but the tradeoff is that they're no longer message passing: all the "recipients" of a "message" live logically in the "same" location for the purposes of handling a given invocation, and if they're actually physically separated, the language implementer has to do the job of providing the illusion (e.g. via transactions) that they're in the same place.


The butler did it :-)

  theButler.cut(cheese using: knife)
Seriously: I think every programming language has that problem. If cheese and knife are arguments to a function, one of them comes before the other. It doesn’t matter much whether there’s a parenthesis, a comma or whitespace between them.

And yes, that includes esoteric programming languages that inflect variable names to indicate what’s the subject and what the object of an action (example: http://users.monash.edu/~damian/papers/HTML/Perligata.html)

They make some words special not by position, but by inflecting them differently.


"Objects" have nothing to do with taxonomies, or where you put your state. Objects are only an answer to the question: where does the computation that updates the state live?

Or, to put that another way: objects are minimal abstract computers.

Picture the same situation, except instead of objects, we've got computers. Three, in fact. One that's running a "knife service", one running a "cheese service", and one that's an RDBMS.

To cut the cheese in this Service-Oriented Architecture, what do you do?

Well, probably, you start by getting a cheese handle from the cheese service, and getting a knife handle from the knife service.

Then, very likely, you're going to tell the knife service—passing the knife handle and the cheese handle—to cut the cheese.

And the knife service is in turn going to talk to the cheese service—passing the cheese handle—telling it that the cheese should get cut, and specifying what the knife wants to do.

If the cheese succeeds in getting cut, then it returns updated cheese to the knife, which returns {success, updated cheese} to you.

Why all that? Because:

• Only the knife service contains the code—the business logic—to know how knives work, so only the knife can construct a description of the precise way that the knife is going to try to cut cheese.

• And only the cheese service contains the code—the business logic—to know what cheese is like as a material, so only the cheese can evaluate the attempt by the knife to cut it.

OOP is just like SOA, except that the fact of what "service" (i.e. class) you should talk to about a given object-handle is encapsulated within the object, and is potentially even a mutable property of that object. (Maybe cutting your cheese with a hot knife results in your cheese-handle mutating into a raclette-handle.)

Also note that at no point did I say where the state for anything is. It's in the RDBMS, maybe. Or maybe the durable state for all knives is inside the knife service, and the durable state for all cheeses is inside the cheese service. It's kind of irrelevant. (That's another thing about objects: they entirely encapsulate their state, so it doesn't matter where it is. This allows for e.g. distributed object frameworks where you have local objects that stand in as proxies for remote objects somewhere else. Just like in an SOA, you can have network proxies that stand in for services somewhere else.)
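
Here's a compressed sketch of that flow (Python; the services, handles and numbers are all made up for illustration): opaque handles, knife logic inside the knife service, cheese logic inside the cheese service.

    class CheeseService:
        def __init__(self):
            self._cheeses = {}  # durable cheese state lives inside this service
        def new_cheese(self, hardness):
            handle = len(self._cheeses)
            self._cheeses[handle] = {"hardness": hardness, "state": "whole"}
            return handle       # callers only ever see an opaque handle
        def attempt_cut(self, handle, force):
            cheese = self._cheeses[handle]
            if force >= cheese["hardness"]:  # only this service knows what cheese is like
                cheese["state"] = "cut"
                return "cut"
            return "intact"

    class KnifeService:
        def __init__(self, cheese_service):
            self._cheese_service = cheese_service
            self._knives = {}
        def new_knife(self, sharpness):
            handle = len(self._knives)
            self._knives[handle] = {"sharpness": sharpness}
            return handle
        def cut(self, knife_handle, cheese_handle):
            force = self._knives[knife_handle]["sharpness"] * 2  # only this service knows how knives work
            return self._cheese_service.attempt_cut(cheese_handle, force)

    cheeses = CheeseService()
    knives = KnifeService(cheeses)
    print(knives.cut(knives.new_knife(sharpness=3), cheeses.new_cheese(hardness=4)))  # -> cut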


Not all OOP, just the normal sort.

With CLOS, you'd just define a generic function like this:

    (defmethod cut ((subject knife) (object cheese)) …)



With the "(C++/Java)-style" OOP on one side and the more pure/old styles of SmallTalk/(and later actors) I do want to mention that I like the Go-Lang model's version of "OOP" its sorta C with "just-enough-OOP-lookalike-stuff"



