New release of Self programming language (selflanguage.org)
97 points by russellallen on May 24, 2017 | 37 comments



Wow. For those who don't know, there was a big 'language bake-off' at Sun between TCL, Java, and Self (all being funded by Sun Labs), and Bert Sutherland (then director of Sun Labs and brother of Ivan) required that language changes stop so we could look at each one and decide which to move forward on. When the world sort of exploded at the WWW conference held in Darmstadt, Germany in 1995, Java officially 'won' and both Self and TCL were de-committed (not canceled per se, but not getting any more funding either).

I like to think that all three languages benefited from the competition.


I believe that much of the work Sun had done on Self found its way to Java in the form of HotSpot.


Not to take anything away from Self, but with regard to HotSpot, a more accurate history attributes the lineage to Smalltalk and the Animorphic team. Some details here: https://en.m.wikipedia.org/wiki/HotSpot


Right. Sun incorporated Animorphic's work into Self, and later purchased Animorphic and the technology became known as HotSpot.


Do you know what caused Java to "win" when the world exploded?


At the time it was the ability to construct executable content inside a web page. JavaScript now dominates that space, but up until WWWC 2 pretty much everyone was stuck with page-layout primitives. It made it possible to see a path to where we are today, so everyone wanted their browser to have it; and if their browser didn't have it, they could run the HotJava browser and get it.

Bottom line, it demonstrated an answer to a problem that a lot of people were having, and it promised to answer that problem in an 'open source' kind of way.


OK: I was hoping it was going to be something to do with the design of the language rather than just the implementation (since any of those languages could have been deployed for the web; especially so given my vague understanding that the web wasn't even Java's original purpose). (Though I guess I can make up some reasons related to security models that would have been weirder to do in Self or TCL.)


A hard, and perhaps bitter, truth in programming languages is that, historically, elegance counts for nothing and applicability against current problems counts for everything.

I have no way of confirming the authenticity of the following; I read the whiteboard, but anyone could have written it. On a whiteboard, in an office associated with Adele Goldberg that ParcPlace Systems[1] had moved out of, was penned the question: "What is the killer app for SmallTalk?" and below that, in a different hand, was written, "and that is how the unholy love child between C++ and UCSD PASCAL wins."

[1] https://en.wikipedia.org/wiki/Adele_Goldberg_(computer_scien...


Can we ever get away from the misnomer of "prototype-based OO"?

In Self we have two ways of specialization:

  1. Prototypes
  2. Parents
Prototypes are objects that are "cloned" to create instances, a very simple and direct notion of inheritance. We create an object (a "prototype") with properties that apply to a larger set of objects, and then for each instance we copy (clone) it and then add or modify properties as necessary for the instance. Self had optimizations to deal with this so instances did not end up being very fat.

Parents are objects that other objects "inherit from". At run-time property lookups are delegated to a parent. The difference between a parent and a prototype is that changes to the prototype do NOT affect the derived instances. Changes to a parent DO affect the derived instances.
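
Roughly, in JavaScript terms (just a sketch; the object names are made up, and Self's actual clone/parent machinery is richer than this):

  // Cloning a prototype: the copy is independent once made.
  var protoPoint = { x: 0, y: 0 };
  var p1 = Object.assign({}, protoPoint);  // shallow copy of the prototype
  protoPoint.x = 99;                       // p1.x is still 0

  // Parent delegation: lookups are forwarded to the parent at run-time.
  var parentPoint = { x: 0, y: 0 };
  var p2 = Object.create(parentPoint);     // p2 delegates missing slots to parentPoint
  parentPoint.x = 99;                      // p2.x now reads 99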

So, when I read about "class-based" versus "prototype-based" languages, I cringe. It is really "class-based" versus "parent-based". How did cloning get confused with run-time delegation?

Self introduced the notion of self-describing instances. That is the essential coolness. The simplifying notion.

http://www.selflanguage.org/_static/published/self-power.pdf


Agree. I'd also like to direct interested viewers to this other neat paper, entitled "Annotating objects for transport to other worlds": https://pdfs.semanticscholar.org/61f1/14dce92b7a49dbb8f981f2...


And to make it even worse, Object.create does the parent relationship, and Object.assign does the cloning (if you squint a little bit). Which seems like it should be flip-flopped, since "create" is closer to "cloning" than "assign" is. Even the ES5 improvements ended up getting it wrong!


Relax already.

We say "prototype" to indicate an important feature of Self which its children (Python, JavaScript, etc.) inherited: Unlike classes, prototypes are open for modification. The precise mechanisms differ between the different languages in Self's family, but they all have that feature in common.

Compare and contrast with other Smalltalk children, like Java or E, where this isn't possible because classes are closed. (E doesn't have classes, but it has a similar property, in that object literal scripts are closed for modification.)
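
A quick JavaScript sketch of that openness (illustrative only):

  // The prototype stays open: a slot added later is visible from
  // objects that already delegate to it.
  var point = { x: 1, y: 2 };
  var p = Object.create(point);
  point.describe = function () { return this.x + "," + this.y; };
  p.describe();   // "1,2", even though p existed before the slot did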


The problem is that there are two groups of languages that use "prototypes" and using the same word for slightly different concepts causes confusion.

In the original Lieberman paper[1] you created new objects that were empty and inherited from the prototype. Then you added local slots (name/value pairs) to the object for anything that was different from the prototype. Languages based on this concept often have two kinds of assignments: one that adds a new local slot and another that just replaces the value of the old slot (either local or inherited).

In Self, on the other hand, a prototype is a template for an object you want, and you clone it to get a new object. Once it has been cloned, there is no longer any relation between the prototype and the new object, and they evolve separately. These kinds of languages tend to have only one way to assign values to slots.

Many languages that claimed to follow the Self model (like NewtonScript and Io) actually use the Lieberman model instead. In either case the prototypes are fully functioning examples of the object you want to create new instances of. So, unfortunately, it is natural that one word is used for both. But this results in very confusing discussions when someone is talking about a language with one model while thinking about the other model instead.

[1] http://web.media.mit.edu/~lieber/Lieberary/OOP/Delegation/De...
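
Roughly, in JavaScript terms, which happens to expose both styles (a loose sketch; the object names are invented):

  // Lieberman-style: the new object stays linked to its prototype and
  // only shadows what differs ("add local slot" vs "replace slot").
  var circleProto = { color: "red", radius: 1 };
  var c1 = Object.create(circleProto);  // starts empty, delegates everything
  c1.radius = 2;                        // local slot shadows the inherited one
  circleProto.color = "blue";           // c1.color now reads "blue"

  // Self-style: clone the prototype; afterwards the two evolve separately.
  var c2 = Object.assign({}, circleProto);
  circleProto.color = "green";          // c2.color is still "blue"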


The paper you linked to does mention prototypes as Self's alternative to classes.

Edit: Oh I see. Cloning is the alternative to instantiation. Parent slots are the alternative to classical inheritance.


It's class-based vs instance-based.


Self (along with Scheme) was supposedly one of the big influences on Brendan Eich when he was creating JavaScript. Neat to know it's still out there.


Wonder how JS would have gone if it had fully embraced its prototypal nature instead of hiding it behind Java-like syntax. For as long as JS has been around, there were people clamoring for classes in the language, or thinking that it had classes, or implementing a class-like structure, until finally ES6.

But it didn't have to be that way. And now the sentiment is that OOP is bad, and inheritance is evil, and classes are the worst, forcing one to predefine a taxonomy that's likely to need refactoring.

But prototypal languages can be easily changed. Just change the parent slot(s), or modify the object itself, etc.


I honestly don't get what's better about prototypes than classes. In particular, I don't see that the prototypal style is supported empirically.

Are there any codebases around 100K to 1 M lines of code written in a prototypal style, which are actually in production use?

You can claim thousands of such systems for classes. In that sense, classes are a success. That people write horrible class-based code isn't a knock against them. People also write horrible procedural code. Most code is bad. But there is some code with classes that is very good.

There was sentiment in the 90's and early 2000's that OOP is bad. I think the world has learned how to use classes since then -- e.g. no more large inheritance chains and fragile base classes. Not everything is an object -- some things are just functions, and some things are just data with no behavior.

As far as I can see, prototypes are worse along all dimensions than classes.


Prototypes and metaclasses are equivalent in power, and both are more powerful than straight classes. You can do things like dynamically change the set of methods that an object responds to with them, something that you have to fake in a class-based system with the State or Strategy pattern. Also, because prototypes are just ordinary objects, you can do even fancier stuff like store the set of possible prototypes in a data structure and dynamically change them based on a lookup. Things like Django's ORM (where all the object has to define are a set of fields as member variables, and then it magically gets a bunch of methods for DB query/insert/update) are trivially easy to define with a prototype-based object system, and you could do even fancier things like add another object to the prototype chain to get JSON serialization, or another one for protobufs, and swap these out at run-time.
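
A hedged JavaScript sketch of that parent-swapping idea (Object.setPrototypeOf is used purely for illustration and has real performance costs in practice; the behaviour objects are made up):

  // Two interchangeable behaviour objects...
  var asJson = {
    serialize: function () { return JSON.stringify(this.fields); }
  };
  var asCsv = {
    serialize: function () { return Object.values(this.fields).join(","); }
  };

  // ...and a record whose parent is swapped at run-time.
  var record = Object.create(asJson);
  record.fields = { id: 1, name: "widget" };
  record.serialize();                      // '{"id":1,"name":"widget"}'

  Object.setPrototypeOf(record, asCsv);
  record.serialize();                      // "1,widget", no class hierarchy touched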

However, the flip side of this is that more freedom is not always good for more readable software. GOTO, for example, can express any control flow that for/while/do-while/if/switch can and a number that it cannot (coroutines/exceptions/etc.), but as an industry we've moved away from GOTO because most programmers can't hold this flow control in their heads. Prototypes are the same way: they grant a lot of freedom to implement fancy abstractions, but many programmers seem to be unable to understand the resulting abstractions, so they don't find widespread use.

Asking for codebases of 100K-1M lines in prototype-based languages is the wrong question. Because prototypes let people define abstractions that other programmers find unreadable, they a.) let you write equivalent software in fewer lines of code and b.) get rewritten in class-based languages as soon as you have more than a handful of programmers working on the codebase. They're much more likely to be used by a small team of hackers who sells their startup for $40M or so and then vests in peace while another team rewrites all their code in Java than by a big company.

If you broaden the question to "has anyone ever made significant money working on or with Self", the answer is yes:

http://merlintec.com/old-self-interest/msg01011.html

(Fun fact: Urs Hoelzle, Animorphic's CTO and sender of the second message in that thread, later went on to become employee #9 and the first executive hire at Google.)


Thanks for the response, but I wouldn't call that empirical support for prototypes. They took JIT technology developed for Self and applied it to a class-based language, Java. That doesn't mean that prototypes are good for writing programs. It actually indicates the opposite, because the technology in Self that ended up actually being deployed turned out to be something else.

(I also worked at Google and am somewhat familiar with the Self to HotSpot to v8 heritage.)

Your second paragraph is exactly what I think is wrong with prototypes. There's not enough structure, and not enough constraint. Constraints are useful for reasoning about programs. You might as well just have a bunch of structs and function pointers (and certainly many successful programs are written that way).

Having every application roll its own class system is a terrible idea, in theory and in practice. In practice JavaScript programs end up with multiple class systems because of this. The Lua ecosystem also has this problem.

It's analogous to every library in C rolling its own event loop or thread pool, leading to a fragmentation of concurrency approaches. Go and Node.js both unify the approach to concurrency so every application doesn't end up with 3 different concurrency abstractions.

I don't buy your 3rd paragraph. Python is class-based; it has the characteristic you're talking about with respect to a small team of hackers; and that has been empirically supported by hundreds or thousands of startups being acquired (Instagram, etc.) and even huge companies created (Dropbox).

I honestly think prototypes have failed in the marketplace of ideas and there's a good reason for that. I use metaclasses all the time but not for dynamically changing sets of methods. That seems like a horrible idea. The way I use them is for generating types from external sources like CSV/SQL/protobuf schemas.


Python has metaclasses, and they're used extensively in many of the frameworks that were critical for those startups (Django etc.). They're coming to JavaScript too, in the form of ES6 proxies, which have been long awaited.

I actually think metaclasses are probably a better solution than prototypes, because they let you write the majority of your code in a class-based style and only incorporate funky abstractions when they're really necessary, which is typically infrequently. My point is that prototypes are strictly more powerful than (non-metaclass) class-based systems, and that this power lets you build powerful abstractions that can dramatically decrease the number of lines of code you need to write for an initial system.
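
For example, a small sketch of the kind of "magic attribute" trick ES6 proxies allow, loosely analogous to what Django-style metaclasses do (illustrative only; real ORMs are far more involved):

  // Any property access of the form findByX becomes a query method.
  function makeModel(table) {
    return new Proxy({}, {
      get: function (target, prop) {
        if (typeof prop === "string" && prop.indexOf("findBy") === 0) {
          var field = prop.slice(6).toLowerCase();
          return function (value) {
            return "SELECT * FROM " + table +
                   " WHERE " + field + " = '" + value + "'";
          };
        }
        return target[prop];
      }
    });
  }

  var users = makeModel("users");
  users.findByName("ada");   // "SELECT * FROM users WHERE name = 'ada'"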


Or perhaps neither? It's conventional wisdom these days that composition often works better than inheritance.


It's even more conventional wisdom that these two things are absolutely not mutually exclusive and that it's actually recommended to use both.

By the way, "composition" is ambiguous:

- Composition of classes is usually a shortcut used to describe inheritance of interfaces and implementation through aggregation (there's only one mainstream language today that supports this natively: Kotlin).

- Composition of functions, which is pretty much mainstream in most popular languages today.


In the right context, prototypes can enable Instance-First Development, which is a very powerful technique that allows you to quickly and iteratively develop working code, while delaying and avoiding abstraction until it's actually needed, when the abstraction requirements are better understood and informed from experience with working code.

That approach results in fewer unnecessary and more useful abstractions, because they follow the contours and requirements of the actual working code, instead of trying to predict and dictate and over-engineer it before it even works.

Instance-First Development works well for user interface programming, because so many buttons and widgets and control panels are one-off specialized objects, each with their own small snippets of special purpose code, methods, constraints, bindings and event handlers, so it's not necessary to make separate (and myriad) trivial classes for each one.
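
A toy JavaScript illustration of the workflow (the widget names are hypothetical; the point is just the order of the steps):

  // Step 1: write the one concrete thing you need, directly.
  var saveButton = {
    label: "Save",
    onClick: function () { /* save the form */ }
  };

  // Step 2: only once a second or third button appears, pull the shared
  // parts out into a prototype and re-express the instances against it.
  var button = { label: "", onClick: function () {} };
  var save   = Object.assign(Object.create(button), { label: "Save" });
  var cancel = Object.assign(Object.create(button), { label: "Cancel" });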

Oliver Steele describes Instance-First Development as supported by OpenLaszlo here:

Instance-First Development

http://blog.osteele.com/2004/03/classes-and-prototypes/

He explains how the "Instance Substitution Principle" (where class definitions look like instance definitions) enables a more seamless, granular transition from rapid prototyping to developing a shipping product.

"Instance substitution principal: An instance of a class can be replaced by the definition of the instance, without changing the program semantics."

"The instance substitution principle can be applied at the level of semantics, or at the level of syntax. At the level of semantics, it means that a member can equivalently be attached either to a class or its instance. At the level of syntax, it means that the means of defining a class member and an instance member are syntactically parallel."

"Many prototype-based languages don’t obey the instance substitution principle, either because they don’t have classes, or because class and instance definitions are not parallel. (Typically there’s not a declarative means for defining an instance member.) JavaScript versions 1.0 through 1.5 (the versions in browsers) is also a prototype-based language, but lacks classes as a first-class syntactic entity, and lacks the hierarchical syntax that Java, C++, and LZX use to define class members. JavaScript 2.0, JScript.NET, and Python have a class definition syntax, but don’t use the same syntax to define instance members. For example, contrast the following two Python programs, which parallel the LZX programs above."

I described OpenLaszlo and compared it to Garnet, another early constraint-based prototypical user interface system written in Common Lisp at CMU, here:

What is OpenLaszlo, and what's it good for?

http://www.donhopkins.com/drupal/node/124

OpenLaszlo and Garnet both integrated prototypes, constraints, data binding, events and delegates, in a way that (some aspects of which) could be described by the buzzword "Reactive Programming" today.

Here's some discussion about prototypes, Instance-First Development and Lua:

Re: Need good examples of when prototype-based objects are better.

http://lua-users.org/lists/lua-l/2007-10/msg00379.html

Benedek and Lajos discussed "Bottom Up Live Micro Ontologies" including "Instance-First Development" in "Conceptualization and Visual Knowledge Organization: A Survey of Ontology-Based Solutions":

http://real.mtak.hu/31984/1/2197.pdf

2.1.3 Bottom Up Live Micro Ontologies

The structures that emerge in the course of knowledge building that includes discovery and conceptualization typically have all the characteristics we have described in the previous section.

To distinguish the emerging Knowledge Architectures from alternative approaches we propose to refer to them as “Bottom Up Live Micro Ontologies”.

'Bottom up', in compliance with the literature, [62], because they are created in the course of elaborating a concrete domain and 'micro', because the meta terms introduced can affect a single node or any that are linked to a node in a piecemeal agile way and in close contact to the context from which they emerge.

These micro ontologies are usually smaller in size than the so called “local ontologies” in domain modeling [16] because they are amenable to reuse from any context, way beyond the one that gave rise to them.

Using Micro Ontologies it becomes possible to define and manipulate domain knowledge with the aid of meta-level structures introduced on the fly, and these meta-structures can also be treated later as domains of their own right.

Elaborating meta level structures as domains of their own right, leads to additional meta-meta level structures, and the same process can be repeated as far as needed.

So knowledge architecture constructions are “turtles all the way up”.

In a bottom up approach domain specific, as well as meta-level concepts and methods can be developed in a form of “instance first”.

“In instance-first development, one implements functionality for a single instance, and then refactors the instance into a class that supports multiple instances” [13], which is to say we are “going meta”.

Only through live exploration and elaboration of descriptions of exemplars, specific instances of objects of interests, it is possible to develop suitable situated elaborations and conceptualizations that can capture ontologically what “there is” across many instances.

This can be stated as the methodological requirement of the “primacy of bottom up live development”: the characteristics of instance descriptions and the relationship with other instances should not be lost as we construct conceptualizations that are applicable to the class of things that are being described.

Hence, instead of “conceptual atomism” [2] and correspondences between descriptions and some aspects of reality, KO seeks to establish correspondence between the structure including the relationships between instances and their class models in a more abstract sense of ‘images’, or using a current term ‘visual models of reality’, in the spirit of Hertz’s Principles of Mechanics.

In the process of KO the formation of these ‘images’ is, however, much closer both historically and methodologically, to Whewell’s “consilience of inductions” through the “colligation of facts”. [41, p.74].

To paraphrase Ward Cunningham’s quoted dictum: the emerging live, visual knowledge architectures should be “the simplest thing that could possibly work” that enable us to achieve our knowledge goals and intentions in a given situation.

With respect to ontology evolution timelines, it is not only the results of conceptualization that matter but the creation of “knowledge model[s] that preserves audit trails of resource manipulation” as the records of “concept growth can increase the transparency of a research enterprise”. [56, p. 672]

The vision that takes us “beyond ontologies” had largely been explored with the Augmentation System that Engelbart created in his NLS half a century ago on a ‘milli iphone’.

With the millionfold increase of computing resource available even to individuals today, we can embark on developing the means to promote the “culture shift” [Ibid.] that could lead to collaborative creation of the ‘great chain of emergent meanings’.

In this quest we need dynamic mechanisms for recognizing and merging alternative conceptualizations.


> That approach results in fewer unnecessary and more useful abstractions, because they follow the contours and requirements of the actual working code, instead of trying to predict and dictate and over-engineer it before it even works.

I totally agree with this bottom-up style of software design. In Python, I start with dictionaries, tuples, and functions. And then later I might turn them into more structured classes and methods.

I'm not sure you need prototypes for this evolution, but I concede that it's plausible that they will help.

Actually I think Python is too impoverished in letting you make things stricter as the design evolves. I suspect the same may be true of prototypal languages. Yes they are good in the initial stages of program design, but perhaps the later stages are just as important.

A successful program spends more time being maintained than being written, and it's maintained by more people than it is authored by. So it makes sense to devote a good chunk of your language design to the later stages, and implement classes + metaclasses rather than just prototypes.

Anyway, thanks for the interesting perspective. Yes I concede classes can lead to early "over-modeling". But there's also a difference between Java classes and classes in languages like Python and Ruby. And classes vs. prototypes is not the only relevant issue when doing bottom-up, iterative design.


I agree: I've always thought that Brendan misunderstood and didn't appreciate some of the most important aspects of Self: 1) that you can inherit from multiple parents, and 2) that you can easily change the parent slot(s) at runtime.

Just as his misunderstanding of the meaning of equality [1] [2] and truthiness [3] manifested itself as quirks in JavaScript's design.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Equa...

[2] https://www.theguardian.com/commentisfree/2014/apr/07/brenda...

[3] https://www.sitepoint.com/javascript-truthy-falsy


I wish JavaScript had been more like NewtonScript with Java-influenced syntax instead of NS's Pascal-ish parts. Plus NewtonScript's frames and two parents were much nicer.


I wish Io or Self had been more successful, just to see how far one could push a pure prototype-based language. Io is more flexible than JS, in that it has (almost) no syntax, just messages to objects, which means you can completely modify the syntax, making DSLs a natural fit for the language.


It's also one of the big influences on modern JIT compilation techniques, pioneering a lot of the tricks that (through Animorphic Smalltalk) ended up in HotSpot first and V8 later.


Interesting. So V8 isn't really innovating implementationally in terms of major new research ideas - it's just scrounging around and borrowing old, awesome research from decades gone past.

Reminds me of the container-based virtualization renaissance that's happening right now.

Hmm, and I've always thought Chrome's extra-thick titlebar was liberated from the Xerox Star UI (http://imgur.com/6h13MlP).


> Self (along with Scheme) was supposedly one of the big influences on Brendan Eich when he was creating JavaScript.

That's really doubtful. Self's delegation-based inheritance was expedient for quickly implementing an object model instead of having to build a class system; the influence never seems to have gone any deeper.

The one big influence on JavaScript was Scheme; that was the original idea. When Netscape's execs asked for a more Java-style language for marketing reasons, special-casing a single parent slot was an easy way to bolt an object system onto the thing.

From what little I looked at it, delegation (through parent slots) is everywhere in Self: it's used for inheritance but also for mixins and scope chaining and more. It's not just an object model, it's a core semantic principle and tool.


Brendan himself says:

I’m not proud, but I’m happy that I chose Scheme-ish first-class functions and Self-ish (albeit singular) prototypes as the main ingredients. The Java influences, especially y2k Date bugs but also the primitive vs. object distinction (e.g., string vs. String), were unfortunate.

https://brendaneich.com/2008/04/popularity/


The global object in JavaScript is also a scope and serves essentially the same purpose as the lobby in Self, and you can feel the same mechanism in play in JavaScript's with() statement.
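
For example (sloppy-mode only, and with() is best avoided in real code), a sketch of how with() turns an ordinary object into a scope:

  var lobbyish = { x: 10, greet: function () { return "hi"; } };
  with (lobbyish) {
    // Bare names resolve against the object first, much like slot
    // lookup through Self's lobby.
    console.log(x);        // 10
    console.log(greet());  // "hi"
  }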


Has anyone here got some examples of things that are more neatly expressed in Self than in JavaScript and the other obvious suspects (e.g. Python, Racket, Common Lisp, Smalltalk)?


I'm not sure about comparing to those languages, but I did a post comparing other prototype-based languages: https://bluishcoder.co.nz/2009/07/16/prototype-based-program...

For me, though, it's not about the Self language but about the combination of the language and the environment. I did a screencast attempting to show some of how Self development is done: https://bluishcoder.co.nz/2015/11/18/demo-of-programming-in-...


No Windows release. Hmm.


It's on the list, but it's a big task because of the Unix assumptions built into the VM and the need to redo the graphics backend, both as primitives and to hook it into the Self-level GUI framework.



