Duckspeak vs. Smalltalk: Decline of the Xerox PARC Philosophy at Apple (2011) (dorophone.blogspot.com)
122 points by panic on Oct 27, 2018 | 77 comments



Here's some stuff I wrote about a HyperCard-inspired system called HyperLook (nee HyperNeWS (nee GoodNeWS)) and some stuff I developed with it:

SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))

HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

Some highlights:

The Three Axis of AJAX, Which NeWS Also Has To Grind!!!

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

…(drum roll)…

1) Used PostScript CODE instead of JavaScript for PROGRAMMING.

2) Used PostScript GRAPHICS instead of DHTML and CSS for RENDERING.

3) Used PostScript DATA instead of XML and JSON for DATA REPRESENTATION.

The Axis of Eval: Code, Graphics and Data

We will return to these three important dimensions of Code, Graphics and Data as a recurring theme throughout this article. But which way to go from here?

Alan Kay on NeWS:

“I thought NeWS was ‘the right way to go’ (except it missed the live system underneath). It was also very early in commercial personal computing to be able to do a UI using Postscript, so it was impressive that the implementation worked at all.” -Alan Kay

What’s the Big Deal About HyperCard?

"I thought HyperCard was quite brilliant in the end-user problems it solved. (It would have been wonderfully better with a deep dynamic language underneath, but I think part of the success of the design is that they didn’t have all the degrees of freedom to worry about, and were just able to concentrate on their end-user’s direct needs."

"HyperCard is an especially good example of a system that was “finished and smoothed and documented” beautifully. It deserved to be successful. And Apple blew it by not making the design framework the basis of a web browser (as old Parc hands advised in the early 90s …)" -Alan Kay


PostScript is an amazing programming language for describing scalable graphics and interfaces! I especially enjoy the ability to very concisely represent information, using dedicated PostScript facilities to construct domain-specific languages, and I have used it to illustrate many situations that would otherwise be quite difficult to show.

I hope that an example can illustrate this point: In Volume 4 of The Art of Computer Programming, Donald Knuth explains how we can find kernels with maximum weight in C_{100}, the cycle graph with 100 nodes. In that example, the weight he assigns to each node is the Thue-Morse code, i.e., (-1)^ν, where ν is the number of occurrences of 1 in the binary encoding of the node number. So for example, the weight of node 2 is -1, and the weight of node 3 is 1.

We can easily determine the Thue-Morse weight of an integer in PostScript:

    /weight { 1 dict begin /nu 0 def
              { dup 0 eq { pop exit } if
                dup 1 and 1 eq { /nu nu 1 add def } if
                -1 bitshift } loop
              -1 nu exp end } bind def
Example: 8 weight ⇒ -1

Now, how might we compactly lay out the cycle graph with 100 nodes? Using PostScript lets us easily play with different layouts. For example, let us use r, u, l and d to move right, up, left and down, respectively:

     /instrs {
       { r r r r r r r r r r r r r r r r
                                       d
           l l l l l l l l l l l l l l
          d
           r r r r r r r r r r r r r r r
                                       d
         l l l l l l l l l l l l l l
        d
          r r r r r r r r r r r r r
                                    d
         l l l l l l l l l l l l l l l
         l u r  u l  u u u } } def
We can see what the described path looks like by interpreting this mini-language:

    30 dup scale
    1 15 translate

    /r { 1 0 translate } bind def
    /d { 0 -1 translate } bind def
    /l { -1 0 translate } bind def
    /u { 0 1 translate } bind def

    0.06 setlinewidth
    0 0 moveto

    gsave instrs { cvx exec 0 0 lineto } forall
    closepath stroke
    grestore
Now, we only have to draw the nodes themselves on top of this path. First, let us define which nodes are actually part of a kernel with maximum weight, found with the methods outlined by Knuth (operations on reduced and ordered Binary Decision Diagrams):

    /inkernel [101 { false } repeat] def

    [1 3 6 9 12 15 18 20 23 25 27 30 33 36 39 41 43 46 48 51 54 57 60 63
     66 68 71 73 75 78 80 83 86 89 92 95 97 99] {
        inkernel exch true put
    } forall
And now we can simply draw the nodes by interpreting the instructions again, and indicating whether a node is part of the kernel, and whether its weight is positive:

    /Palatino-Roman 0.4 selectfont

    0.04 setlinewidth
    1 1 instrs length {
        /num exch def
        newpath
        num weight -1 eq
          { -0.4 -0.4 0.8 0.8 4 copy
            inkernel num get { 0.8 } { 1 } ifelse gsave setgray rectfill grestore
            rectstroke
          }
          { 0 0 0.4 0 360 arc
            inkernel num get  { 0.8 } { 1 } ifelse gsave setgray fill grestore
            stroke
          }
        ifelse
        0 0 moveto num 5 string cvs dup stringwidth pop -2 div -0.12 moveto show
        instrs num 1 sub get cvx exec } for
In this case, I am using circles for nodes with positive weight.


The beauty of your functional approach is that you're using PostScript code as PostScript data, thanks to the fact that PostScript is fully homoiconic, just like Lisp! So it's excellent for defining and processing domain specific languages, and it's effectively like a stack based, point free or "tacit," dynamically bound, object oriented Lisp!

https://en.wikipedia.org/wiki/Homoiconicity

https://en.wikipedia.org/wiki/Tacit_programming

The PostScript dictionary stack enables OOP with multiple inheritance and customizable instances (like prototypes, in that you can add methods and instance variables to individual objects).

https://donhopkins.com/home/monterey86.pdf

>Object Oriented Programming in NeWS. Owen M. Densmore, Sun Microsystems. Abstract

>The NeWS window system provides the primitives needed to create window managers and user-interface toolkits, but does not, itself, supply either. This is done to achieve a layering strategy for building several higher level systems that can share NeWS as their low level window system. None of the traditional ‘‘tool kit’’ solutions currently span the diverse set of clients NeWS needed to serve; they simply lack sufficient flexibility. We are exploring an object oriented approach which uses a flexible inheritance scheme. This paper presents our initial attempt at introducing a Smalltalk style class mechanism to PostScript, and our first use of it.
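To give a feel for how little machinery that takes, here's a minimal sketch of the idea (the names send, Point, and report are made up for illustration; the paper's actual class mechanism adds real classes, multiple superclasses, and more): an "object" is just a dictionary, and sending it a message means pushing it on the dictionary stack and executing the message name, so methods and instance variables are looked up dynamically:

    % /message object send => (whatever the method leaves on the stack)
    /send {
      begin            % push the receiver onto the dictionary stack
      cvx exec         % look the message name up there and run the method
      end              % pop the receiver
    } def

    % A prototype with two instance variables and one method.
    /Point 4 dict dup begin
      /x 0 def
      /y 0 def
      /report {
        x 10 string cvs print ( @ ) print
        y 10 string cvs print (\n) print
      } def
    end def

    % Copy the prototype to make an instance, then customize it.
    /p Point dup length 1 add dict copy def
    p /x 3 put
    p /y 4 put

    /report p send     % prints: 3 @ 4

Because method lookup is just dictionary lookup on the dict stack, adding a method or an instance variable to a single instance is just a put, which is where the prototype-like flexibility mentioned above comes from.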

Apps like (early versions of) Adobe Illustrator, and tools like Glenn Reid's PostScript Distillery (and later Acrobat Distiller), used a domain specific subset of PostScript as their save file format: A save file was a domain specific language that just happened to be PostScript (with no loops or other programming constructs), so you could prepend some standard PostScript function definitions to the front of the save file and send it to a PostScript printer to print. Or you could redefine those names to do something else, like extracting the text, or cataloging the colors, or creating interactive editable objects that you could manipulate and save out again.
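As a toy illustration of that idea (the one-letter operators T and B and the format are made up for this example, not Illustrator's actual save format), imagine a program that saves each document as nothing but calls to a couple of short operators; different prologs then give those names different meanings:

    % A document saved in a tiny hypothetical DSL that happens to be PostScript:
    %
    %   (Hello, world) 72 720 T
    %   100 100 200 50 B
    %
    % Prolog A: prepend this to render the document on a PostScript printer.
    /Palatino-Roman 12 selectfont
    /T { moveto show } bind def                % string x y T  => draw the text
    /B { rectfill } bind def                   % x y w h B     => fill the box

    % Prolog B: prepend this instead to extract just the text.
    /T { pop pop print (\n) print } bind def   % drop x y, keep and print the string
    /B { pop pop pop pop } bind def            % ignore boxes entirely

Since the save file itself contains no loops or conditionals, any prolog can process it safely and predictably.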

Distillery (and the later Acrobat Distiller) went the other way, by partially interpreting any arbitrary PostScript drawing program against the stencil/paint imaging model (capturing a flat list of calls to fill and stroke, transforming the paths to a standard coordinate system, optimizing graphics state changes between rendering calls, unrolling all loops, and executing any conditionals, loops, function calls or other programming constructs). That's exactly what happens when you convert a PostScript file to PDF.

https://en.wikipedia.org/wiki/Partial_evaluation

https://en.wikipedia.org/wiki/Meta-circular_evaluator

Acrobat is the (purposeful) de-evolution of PostScript from a Turing complete programming language to a safe static file format: PDF is essentially just PostScript without any of the programming constructs. (But then they added a lot more crap to it, to the point that it was so buggy they needed to push out patches several times a month over many years.)

Here's some more stuff about PostScript distilleries and metacircular PostScript interpreters (but with old links, so I've included the new one below):

https://news.ycombinator.com/item?id=13705664

Thanks to PostScript's homoiconicity, a PostScript data visualizer/browser/editor just happens to also be a PostScript code visualizer/browser/editor!

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

>The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989 Written by Don Hopkins, October 1989. University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.

>Left shows objects on the process’s stack displayed in windows with their tabs pinned on the spike. Right shows a Pseudo Scientific Visualization of the NeWS rootmenu instance dictionary.

>Abstract: The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.

>[...] Printing Distilled PostScript: The data structure displays (including those of the Pseudo Scientific Visualizer, described below) can be printed on a PostScript printer by capturing the drawing commands in a file.

>Glenn Reid's "Distillery" program is a PostScript optimizer, that executes a page description, and (in most cases) produces another smaller, more efficient PostScript program, that prints the same image. [Reid, The Distillery] The trick is to redefine the path consuming operators, like fill, stroke, and show, so they write out the path in device space, and incremental changes to the graphics state. Even though the program that computes the display may be quite complicated, the distilled graphical output is very simple and low level, with all the loops unrolled.

>The NeWS distillery uses the same basic technique as Glenn Reid's Distillery, but it is much simpler, does not optimize as much, and is not as complete.
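Here's a minimal sketch of that redefinition trick (just the skeleton, not Glenn Reid's actual Distillery code, which also transforms coordinates into a standard space, optimizes graphics state changes, and handles stroke, show, images, and so on): wrap fill so that it first writes the current path out, flattened and in device space, and then paints as usual:

    /realfill /fill load def                   % keep the original operator
    /num { 32 string cvs print ( ) print } def % print one number and a space

    /fill {
        flattenpath                            % reduce curves to line segments
        { transform exch num num (moveto\n) print }   % each point in device space
        { transform exch num num (lineto\n) print }
        { }                                    % no curvetos remain after flattenpath
        { (closepath\n) print }
        pathforall
        (fill\n) print
        realfill                               % still render the page as usual
    } bind def

Run any PostScript drawing program with a prolog like this prepended, and the captured output is itself a flat PostScript program of moveto/lineto/closepath/fill calls in device space, with all the loops unrolled.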


Yikes, am I reading this right? That PostScript here is implied as a good thing? Here is xeyes implemented in PostScript. It was probably better than the X Windows implementation of the era, but compared to JavaScript... I shudder.

https://groups.google.com/forum/#!original/comp.windows.news...


Here's the PostScript source code to PizzaTool, which shipped with OpenWindows 3.0 and Solaris 2.0 (SVR4), which is well commented and organized so you can follow what's going on:

https://donhopkins.com/home/archive/NeWS/pizzatool

And the manual entry:

https://donhopkins.com/home/archive/NeWS/pizzatool.6

More on the sordid history of PizzaTool and the Window System Wars at Sun Microsystems:

https://medium.com/@donhopkins/the-story-of-sun-microsystems...

This is a taste of what multi threaded object oriented PostScript code looks like:

  % Paint the pizza, forking off a processes to paint every topping.
  %
  /Paint { % - => -
    PizzaLock { % monitor:
      /reset self send
      /PaintProcess { % fork:
	pause
	/PaintNewPizza self send

	% Wait a second until things settle down ...
	% If /paint is called again within the next second,
	% this process will be killed and a new paint process
	% will sleep for a second.
	[1 0] SLEEP

	% If our parent window frame isn't opened,
	% then it's hiding and we shouldn't draw the toppings.
	/opened? Parent send { % if:
	  % Draw all the toppings at once! Weeee!
	  4 { % Lots of cheese, repeat:
	    Cheese /StartSprinkle self send
	  } repeat
	  /toppings Pizza send { % forall:
	    /StartSprinkle self send
	  } forall

	  % Wait around for all the sprinklers to finish.
	  { % loop:
	    Sprinklers length 0 eq { exit } if
	    % push one of the sprinklers, doesn't matter which.
	    Sprinklers { pop exit } forall
	    waitprocess pop
	  } loop
	} if
	/StopPaint self send
      } fork def
      PaintProcess /ProcessName (Pizza Painter) put
    } monitor
  } def


I truly enjoyed programming in PostScript. I actually like it better than Forth because it had some really nice constructs. If you did Forth programming and enjoyed the paradigm then PostScript is actually pretty good compared to JavaScript.


Well, given that you notice that it "was probably better than the X Windows implementation of the era", it looks like you are in some sense reading this right, but are nevertheless making a comparison to a language with many decades' worth of modernization improvements, for some unclear reason.


Well, it looks frankly beautiful. The code is 100% declarative. It's a pure dataflow graph, right before your eyes. Sure, the syntax feels old, but the ideas beneath it will always be the ones underlying reactive and user interface programming.


That's 93 lines of PostScript code, including comments. And how many lines of C code is XEyes? (Not including the Makefile.am, autogen.sh, configure.ac, and other supporting cruft, to be charitable. I'm counting xeyes.c (141), Eyes.c (642), Eyes.h (56), EyesP.h (56), transform.c (111), transform.h (32), eyemask.bit (22), eyes.bit (22) = 1082 lines of really ugly code -- not easy on the eyes!)

http://cgit.freedesktop.org:80/xorg/app/xeyes/snapshot/xeyes...

By the way, XEyes was a later imitation of Jeremy Huxtable's original NeWS "Big Brother" eyes.

https://web.archive.org/web/20140228084012/http://en.wikiped...

Here's a much more elaborate version of NeWS eyes, that lets you drag and drop eyes into each other (see "MoveStopSub"), nesting them to any depth (so child eyes recursively move around with their parent eye, and look really creepy), and split eyes in two with a menu (see "SplitEye"), in 375 lines of PostScript code:

https://donhopkins.com/home/archive/NeWS/eyes.ps

Have you ever actually used the X11 SHAPE extension to make a round window, did you know all X11 windows actually have FOUR shapes (client bounding region, client clip region, effective bounding region, effective clip region), and do you understand all the nuances and implications of the ICCCM ("Ice Cubed") window management protocol?

https://en.wikipedia.org/wiki/Shape_extension

https://www.x.org/releases/current/doc/libXext/shapelib.html

https://blog.dshr.org/2018/05/recreational-bugs.html

"You will get a better Gorilla effect if you use as big a piece of paper as possible." -Kunihiko Kasahara, Creative Origami.

And what is the net speed velocity of an XConfigureWindow request?

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...


Apples and Oranges. Postscript itself includes the code for the primitives used in xeyes, whereas the C code implements them first. Take away that code and it is much more illuminating.


Are you saying that HTML Canvas should not have a method to draw circles because you can implement a circle drawing algorithm in JavaScript?

What is your point? That it's a liability that PostScript has a far superior, more powerful, scalable, device independent, future proof, and vastly more convenient and easier to use and understand imaging model than X11?

And by the way, NeWS eyes came first: "xeyes" was a lower quality knock-off, not the original. The first version of "xeyes" actually used a rectangular window, which kind of missed the whole point, until somebody added hundreds of lines of extremely complex code to implement the SHAPE extension. I linked to the source above: download and read it if you don't believe me.

NeWS lets you shape a round (or any shape) window the exact same way you draw the same shape. See how pizzatool shapes the spinning full or half circle pizza window floating inside a rectangular frame, so you can easily move and resize it:

  % Reshape the parent popup pizza window,
  % to reflect a change in the pizza's shape.
  % In all truth, the pizza canvas is rectangular!
  % It gets its round (or semi-circular) shape
  % from the shape of its parent canvas,
  % which is a hollow rectangular popup window frame
  % (just the window borders),
  % with a discontiguous pie floating in the center.
  %
  /ReshapeParent { % - => -
    gsave
      Parent setcanvas
      /bbox PreviewWindow send /reshape PreviewWindow send
    grestore
  } def

  % The popup pizza window path includes the window borders and the
  % round (or semi-circular) pie floating in the middle, but excludes
  % everything between.
  %
  /path { % x y w h => -
    matrix currentmatrix 5 1 roll		% mat x y w h
      /minsize self send xymax
      4 2 roll translate			% mat w h
      0 0 3 index 3 index rectpath
      WInset SInset translate			% mat w h
      EInset WInset add				% mat w h ewinsets
      NInset SInset add				% mat w h ewinsets nsinsets
      xysub					% mat insidew insideh
      0 0 3 index 3 index rectpath

      2 div exch 2 div exch			% mat centerx centery

      2 copy translate
      min dup neg scale				% mat

      { % send to Center client (the Pizza):
	/radiusscale self send 0 moveto
	0 0 /radiusscale self send
	0 360 /fraction self send mul
	arc closepath
      }						% mat {msg}
      /Center /client self send			%       ... client true | false
      /WhatPizza? assert			% mat {msg} client
      send					% mat
    setmatrix					%
  } def

  % Reshape the window canvas with the even/odd rule.
  %
  /reshape { % x y w h => -
    /invalidate self send
    gsave
      4 2 roll translate			% w h
      0 0 4 2 roll				% 0 0 w h
      /path self send				%
      self eoreshapecanvas
    grestore
  } def
So it looks like this:

https://donhopkins.com/home/catalog/images/pizzatool.gif

But with X-Windows you have to bend over backwards manipulating pixmaps or lists of rectangles, so the window shaping code is completely different and much more complex than the drawing code.

Case in point: take a look at the "ShapePieMenu" function in the Tk pie menus I implemented for X11/TCL/Tk SimCity, for another example of how much of a pain in the ass it is to shape a window in X11. It has to first get the display and query it to make sure the extension is supported, then if it is, make the window exist, get the window id, make a pixmap, make a graphics context, set the foreground color, erase the background, set the foreground color, fill a bunch of rectangles, free the graphics context, and call XShapeCombineMask. And that's only shaping the window to a simple list of a few rectangles for all the labels, not a circle, which would have been MUCH harder:

https://github.com/SimHacker/micropolis/blob/master/micropol...

So funny that you'd compare drawing circles in X11 -vs- PostScript. Have you actually tried to draw a circle with either API yourself -- let alone used the X11 SHAPE extension to make a round window? Here's an excerpt from the X-Windows Disaster which addresses that very topic -- specifically "in the case of arcs":

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...

Myth: X is “Device Independent”

X is extremely device dependent because all X graphics are specified in pixel coordinates. Graphics drawn on different resolution screens come out at different sizes, so you have to scale all the coordinates yourself if you want to draw at a certain size. Not all screens even have square pixels: unless you don’t mind rectangular squares and oval circles, you also have to adjust all coordinates according to the pixel aspect ratio.

A task as simple as filling and stroking shapes is quite complicated because of X’s bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra “bonus pixels” when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that “in the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion).” This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can’t even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units, so if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.
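For contrast, here's the PostScript side of that comparison, as a minimal sketch: fill and stroke consume the same current path, defined once in user space, so the outline is always centered exactly on the boundary of the filled shape, no matter how the coordinate system is scaled or distorted:

    72 144 translate
    3 1.5 scale                          % deliberately non-square coordinate system

    newpath
    0 0 36 0 360 arc closepath           % one path, defined once in user space
    gsave 0.8 setgray fill grestore      % fill it...
    0 setgray 2 setlinewidth stroke      % ...then stroke the very same path

No pixel bookkeeping, no off-by-one compensation, and it renders the same way on any device or resolution.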


I created a new language called zis, which has a function named "doXeyes()". I can do xeyes in literally one line of code. It destroys postscript with its amazing simplicity. Sorry for the snarkiness, but that's my point.


Nice way to completely miss Don's point. His point is that postscript provides all the necessary primitives to draw in 2D, consistently at every level of the stack. X-Windows is a continuation of the 'worse is better' NJ school of software design.


Snarkiness? What snarkiness? I don't understand what you mean. So I'd appreciate it if you'd explain what your point is more clearly, by directly addressing the questions I asked.

Please be kind enough to link to the source code of the language "zis" that you wrote, just like I linked to the source code that I and others wrote to illustrate my point. I'd like to see the actual code you wrote, so I don't miss your point, and can see that you're arguing in good faith. Thank you, kind sir or ma'am!

So what else is your language "zis" good for? Is it fully general purpose, easily editable at runtime, graphically skinnable with a built-in drawing editor, and dynamically scriptable and extensible by normal users, like HyperCard and HyperLook and Smalltalk? Do you think Alan Kay would describe "zis" as the "right way to go" like he described NeWS, and what other end-user problems does it solve, in the sense that he thinks HyperCard is brilliant and that Smalltalk pioneered? Is "zis" as "finished and smoothed and documented beautifully" as Alan Kay describes HyperCard?

Have you developed and shipped any commercial products or free software with your language, like SVR4 or SimCity, that I can see and run for myself, so I can be confident you're not just making stuff up or wildly exaggerating, and that it works as you advertise? And have you written any books or documentation or articles or research papers about it that I can read to learn more about it? Or at least a screen snapshot, please? What are the URLs?

Have you finished reading the X-Windows xeyes source code that I referred you to? Did you simply copy that code somebody else wrote into "zis" verbatim, or did you actually write your own original code? How is it licensed, and where has it been distributed: is it open source or not, and where's the repo?

Now that you have presumably used the X-Windows SHAPES extension API first hand yourself, please share your experience and tell me what you think of it: do you think that it is well designed and easy to use or not? What changes or improvements would you suggest? Did you run into any of its limitations, or notice any unnecessary complexity, or ICCCM window manager incompatibilities? How well does your language "zis" support the SHAPES extension and ICCCM protocol, or improve upon them, and what else is it good for than xeyes?

Finally, does "zis" support dragging and dropping eyes into each other, nesting their windows into an arbitrarily deep hierarchy, like my NeWS eyes did 27 years ago, and what ICCCM window managers support that feature, or did you have to write your own window manager or X extension to support that feature?

Thanks for taking all this time and effort to design and write and share all that code, and to support and clarify your fascinating arguments. I'm looking forward to seeing your code and your answers to my questions.


So, yes I've made xeyes in both X11/C and another language - it was part of a computer graphics course taken years ago, oddly enough, and it might have been PS, but I don't really remember. I did write one or two small programs in PS, I know that.

My point with the (imaginary) "zis" language is that comparing a domain-specific language like PS to C code is a strange comparison - not without merit, but PS is a higher level abstraction, which is great, but saying it is an amazing language because it is slightly easier than raw C is misleading. If I build up (or use) a nice library (and I'm not going to get pulled into defending X, I'm not as harsh as you on it, but it is ... painful ... to use - also, side note, I remember skimming through the Unix Haters Handbook when it came out, so I probably read your words back then, funny!) written in C, it is fair to compare that to PS.

I'd rather do it in OpenGL/C for instance, plus my version will be more performant. There's a reason that combination (or DirectX) is used in the vast majority of graphics software out there.


Since this article was written, the iPad has gained a number of excellent programming tools from third party developers (Pythonista is my favorite).

And perhaps more importantly, Apple has released two: Swift Playgrounds and Shortcuts. Swift Playgrounds is explicitly designed so that kids (and adults) can learn to program. But it’s also pretty full featured, with access to all of the APIs and frameworks on the system. Shortcuts is programming-in-disguise, much like HyperCard before it. It’s an end user app that nearly requires you to “program”, albeit visually for the most part.

I think the vision of every computer user also being a programmer hasn’t happened because we really still haven’t figured out how to make that work. But that doesn’t mean no one is trying anymore.

As for me, I’m a professional programmer because when I bought a Mac 15 years ago, it came with Xcode in the box, and I was able to start building my own Mac apps without too high of a barrier to entry.


> Since this article was written, the iPad has gained a number of excellent programming tools

That's true, and good, but misses the point. In the Smalltalk world (and Lisp machine worlds of PARC and MIT) everything running in the machine (above the microcode layer) was inspectable, breakpointable, and modifiable.

While on the iPad everything is, as the article says, opaque and unmodifiable, unless it's your own app.

note: I worked on Lispms at MIT (and other places) and on D-machine Interlisp at PARC (and other places)


What can you say about the contrast between the East Coast -vs- West Coast approaches to Lisp?

My impression is that Interlisp-D tried to solve problems with excellent tooling, which helped you get around the crappy language design, and Lisp Machines tried to solve problems with excellent language design, which helped you get around the crappy tooling. Kind of like two different manifestations of "Worse is Better", with different trade-offs for what they do worse.

I don't see any reason somebody shouldn't simply combine excellent tooling with excellent language design! But here we are, with neither.


I’d never considered this formulation before.

Maclisp/CADR Lisp was more to my taste, true, but to be fair Interlisp was originally developed at BBN, and folks like Teitelman and Bobrow were former MITers.

I agree the D machine had a more intensive UI while the cadrs and their descendants were more text oriented.


On the other hand, the article misses the point too: if the iPad had everything modifiable

(a) most people wouldn't bother,

(b) the few that would bother would include programmers and tinkerers who could create awesome stuff, but also marketers, spammers, crackers, bad programmers, etc., who would create malicious or totally crap stuff that would undermine the stability, functionality, security, etc. of the system, turning it into a total nightmare.

(Not to mention those were made in an era where you didn't have your personal life plus credit cards on a computer).

At the end of the day, the allure of the iPad is that if worse comes to worst, 99% of the time you reboot (or just close an app) and everything is like new. And most of the time it just works.

Android is nowhere near as open as the Smalltalk systems, but is more open than the iPad, and is where the majority of malicious mobile code is found (according to research).

And there are ways to just open it totally and write anything you want on it, but most don't care.


That’s true, but I shudder at the thought of today’s users running a machine where they, their kids, or some nefarious third party can easily change, say, what the command to draw a line, to write a file, or to send a network packet does. Even well-intended “here’s a nice hack that makes your word documents smaller” changes will wreak havoc, breaking the ability to share files.

Smalltalk and Lisp machines are great for hacker types, and I think many of today’s systems are locked down too much, preventing potential programmers from making mistakes to learn from, but I also think we need some restrictions in today’s world.


I disagree. I'd rather have a defense in depth against the nefarious & accidents and allow a Cambrian explosion of experimentation. The professionalization and permissions culture that's developed over the last few decades is ultimately stultifying.

It's shocking to me that network protocols (layers 1-5 at least) are basically 1970s designs. Why doesn't each set of machines automatically develop its own "jargon" optimised to the kind of communication they have? That's what humans do, to hone the efficiency and precision in their language, yet it doesn't stop us from talking to people outside our immediate circle.

But amateur experimentation in many areas is basically dead.


You worked on D-machines at Parc? Awesome! For years I had an ‘in love’ relationship with my 1108 Dandelion Lisp Machine. Makes me a little sad to now work in the environment of Databricks and Jupyter notebooks using Python, compared to the 1108 in the 1980s. Oh well!

EDIT: I wonder if we ever met? I used to demo my OPS5 port to the 1108 at conferences in Xerox’s booth.


I'm pretty sure we would never have met as I was on the research staff and only went to conferences like POPL and AAAI/IJCAI.

I agree about Jupyter notebooks; the funny thing is on some HN discussion of them I commented that perhaps their popularity might mean we were back on a path towards lispm kinds of environments -- and the comment was downvoted to oblivion. Hilarious!


I always went to the AAAI/IJCAI conferences back then, and those were where the Xerox 1108 sales team would invite me to do demos. That was so much fun.

I still find Jupyter notebooks painful programming environments: Python kernels at work, Common Lisp + Haskell at home. I should try to be more open minded about Jupyter.


> In the Smalltalk world (and Lisp machine worlds of PARC and MIT) everything running in the machine (above the microcode layer) was inspectable, breakpointable, and modifiable.

So Linux plus maybe some elegance features?


There were no kernel/user space distinctions in any of the systems I mentioned, and all of it could be dynamically modified at runtime (no recompilation or restarting) down to the function and variable level. Philosophically completely different from Unix et al.

But you raise an interesting point: one of the reasons I started Cygnus with John and Michael was that I had never used a machine to which I didn't have access to the source code and the idea of that horrified me.


Linux does not expose the whole stack as Lisp Machines and Smalltalk did.

Including the Mesa and Mesa/Cedar systems also done at Xerox for memory safe systems programming.


Having everything in the same space, and being able to inspect and modify anything in real time by the seat of your pants, was exactly how Dan Ingalls impressed Steve Jobs on his fateful visit to Xerox PARC! It's unfortunate the iPad never did (and never will) achieve that level of power and flexibility.

https://www.quora.com/What-was-it-like-to-be-at-Xerox-PARC-w...

>Q: What was it like to be at Xerox PARC when Steve Jobs visited?

>A: Alan Kay, Agitator at Viewpoints Research Institute

>[...] The demo itself was fun to watch — basically a tag team of Dan Ingalls and Larry Tesler showing many kinds of things to Steve and the several Apple people he brought with him. One of Steve’s ways to feel in control was to object to things that were actually OK, and he did this a few times — but in each case Dan and Larry were able to make the changes to meet the objections on the fly because Smalltalk was not only the most advanced programming language of its time, it was also live at every level, and no change required more than 1/4 second to take effect.

>One objection was that the text scrolling was line by line and Steve said “Can’t this be smooth?”. In a few seconds Dan made the change. Another more interesting objection was to the complementation of the text that was used (as today) to indicate a selection. Steve said “Can’t that be an outline?”. Standing in the back of the room, I held my breath a bit (this seemed hard to fix on the fly). But again, Dan Ingalls instantly saw a very clever way to do this (by selecting the text as usual, then doing this again with the selection displaced by a few pixels — this left a dark outline around the selection and made the interior clear). Again this was done in a few seconds, and voila!

>The Smalltalk used in this demo was my personal favorite (-78) that was done for the first portable computer (The Parc Notetaker), but also ran on the more powerful Dorado computer. For a fun “Christmas project” in 2014, several of us (with Dan Ingalls and Bert Freudenburg doing the heavy lifting) got a version of this going (it had been saved from a disk pack that Xerox had thrown away).

>I was able to use this rescued version to make all the visuals for a tribute to Ted Nelson without any new capabilities required. The main difference in the tribute is that the revived version had much more RAM to work with, and this allowed more bit-map images to be used. This is on YouTube, and it might be interesting for readers to see what this system could do in 1978–79.

Alan Kay's tribute to Ted Nelson at "Intertwingled" Fest

https://youtu.be/AnrlSqtpOkw?t=142


I agree about the excellent-ness of Pythonista for writing Python on the iPad. Raskell is also excellent for writing short Haskell programs on an iPad. Hacking bits of Haskell on my iPad is a favorite activity when I am flying and don’t have an internet connection.

What I really hope for is a version of Pharo Smalltalk that was beautifully integrated and sandboxed on the iPad. I would pay a lot of money for it!!


Squeak was ported to the iPhone in 2008, but has never been allowed in the app store. So you have to be a developer and download it and compile it yourself just for your own device, which is kind of the point of the article.

Note that applications built on top of Squeak and Pharo have been allowed on the app store, but all the programming goodness has to be completely hidden. The only exception I know is Scratch, which keeps Smalltalk hidden but is itself a programming language: https://www.softumeya.com/pyonkee/en/


I have spent hundreds of hours building a simple simulator using Julia recently. I mention this because this is what this view of computing misses: despite having hugely productive tools (compared to the 70s, remember "compile time"?), vast reuse (library after library filled with wonderful goodies), and amazing computers (I use an X1 and have access to servers and so on, and if I really need it I can use the cloud any time), software is complex and time consuming. That's why programming / developing is a full time job. Everyone else needs to get on with all the other things they've got to do.


The view of computing you are criticizing in fact doesn't miss your point. It attacks it straight on. What does "programming" mean? Insofar as it involves people using Algol inspired languages -- with some features here and there -- typing up instructions in expensive systems that are (still!) teletype emulators, then what you say is always going to be true.

The point of the article is that once upon a time people began to think another way was possible and started going in that direction. The reasons that all of this has fallen out of the computing pop culture have little to do with viability, but rather the needs of the market.


Discussed in 2015, including comments from an HN user who worked at both PARC and Apple:

https://news.ycombinator.com/item?id=8976872


Windows has a surprising number of ways to code out of the box and all require nothing more than opening notepad and saving it with the right file extension.

.js .vbs .bat .ps1 or .hta

The problem is they are not simple to use for the average user. Windows lacks a straightforward procedural automation system. Flow, Workflows and PowerApps address this in some ways but are really jank.

I want something like:

  GET COLUMN 'customer name' FROM accounts.xls
  REMOVE ROW 1
  SAVE customers.xls
  EMAIL foo.xls TO 'some@guy.com'


Smalltalk is the product of a very well funded research lab - Xerox PARC. It was not so much a successful product.

It's less interesting to think about what Apple did with it - because they had other, more conventional, goals at the end of the day.

I think it's more interesting to think about what happened at Xerox - why didn't they get it out to millions of people as the software a computer boots into, or as the primary development environment, or even as a system design philosophy?

Xerox sold a few expensive Smalltalk machines, but they sold far more office systems on the same hardware NOT implemented in Smalltalk. IIRC they might have sold larger laser printers with Smalltalk-based control software - can't remember.

Xerox gave Smalltalk 80 to a few companies - like Apple and others. Apple ported it to their machines - this is also the origin of Squeak (a later open source Smalltalk). But it never really caught on. Even though Apple gave developers access to it.

Now the philosophy of Smalltalk was already in the Apple ][. It booted into a BASIC prompt. But later computers were more universal and current incarnations are only seen as appliances. Nobody needs to program to use an iOS device.

On a Lisp Machine one can force any application into a REPL (called listener) at any time. The keyboard even has a key for that. Live programmability is the main purpose of such a computer. Source is only a keystroke/mouse click away.

We don't have that anymore and there must be a reason for it. Probably it would also be a security nightmare in current networked surroundings...


In his classic, must-read article, "Worse is Better", Richard Gabriel explains why. The victory of the C ecosystems has had a profound impact on computing. One small example is that it produced the need for scripting languages like Perl and Python. The Lisp and Smalltalk based systems had no such need.

Also, I think the author somewhat misses the point with his emphasis on end-user programming. The pioneers of the late 1960s and the 1970s saw the computer as a means to elevate and enhance human abilities. The computer would be an intelligent collaborator; teaching us, helping us think more clearly and more creatively, helping us communicate, and so on. Programming was the means not the end. Imagine, for instance, running R and it actively helps you clean, organize, explore, and analyze your data (with the knowledge of a master statistician) instead of just sitting there passively waiting for you to type some cryptic commands. It is this original vision that, sadly, has not survived.


I really like using Smalltalk (Squeak) for personal programming and small projects. It's very reflective, and once you know the tools it is easy to learn more about how the system works and how to do stuff. I have not found any similar systems anywhere. I can inspect and browse the source code for the whole system and I really enjoy it. These capabilities are the opposite of what most other systems let you do. They want to be an app-like entity that fills a function without you either wanting to know how it works or being able to know how it works.


Smalltalk had a successful niche as server software running complex business domains, eg. oceanic container shipping & routing, and power grid billing. Its legacy is fading, but continues. The old GemStone/S object database still lives on to this day: https://gemtalksystems.com


The reason Smalltalk didn’t take over the world was it was so resource intensive. A single workstation would have cost about $40,000. There’s no way the early Macs could have run Smalltalk, they only became capable of that years later and it was slow as molasses. I suppose it should really have had a chance by the Java era, but C style syntax languages ruled the world and the state of the art had moved on by then.


July 11, 1988 — "Digitalk to Ship Mac Version of Smalltalk: … will run on Mac Plus, Mac SE, or Mac II and will require 1 megabyte of memory."

https://books.google.com/books?id=5T4EAAAAMBAJ&pg=PA3

> I suppose it should really have had a chance by the Java era …

pdf "Ubiquitous Applications: Embedded Systems to Mainframes"

http://www.davethomas.net/papers/ubiquitous1995.pdf


But would it take over the world today? Our latest smartphones probably have 10x to 100x the processor power of the "$40,000" workstation back then.

I am not entirely sure Smalltalk was the right approach; most people, the 98% who don't read HN, don't want to code. I think Hypercard is much closer to that vision, but it doesn't seem there is massive interest in something like that either. The world today is like 90% consumption: people consume news, media, video, etc.


> it doesn't seem there is massive interest in something like that either

It's been a long enough time that most people don't even know that "Hypercard like things" are even possible, or what they mean, or how one could use them. The "appification" of personal computing has ensured that there are "programmers" and "users" (scribes and plebs) and that the relationship will be transactional and commercial.

> The world today is like 90% consumption: people consume news, media, video, etc.

The more the development community realizes this, the better off we will be -- because it will become clear that in a technical sense a completely different way (ie the direction of Smalltalk or Hypercard) is possible and that the truly limiting factors are cultural more than anything else.


Given that a hero JPEG today is bigger than most Smalltalk VM images, it would be fine. The problem is we are in a Unix / Windows world and getting popular now is a lot harder.


A Mac Plus could run Smalltalk.


Yes, but crawling.


That was not unusual for a Mac plus.


Apple ran Smalltalk 80 on the Lisa, Mac Plus, Mac II and a small version on the Mac 512k.

A Mac II or IIfx would probably be faster than the Xerox systems - which generally were not very powerful.


ParcPlace Systems was shipping fast, production-grade Smalltalk-80 on the Mac II with JIT compilation (PS and HPS VMs) in 1988. It was certainly faster than cruddy D-machines (other than Dorado of course) and their final Xerox successors.


>Xerox gave Smalltalk 80 to a few companies - like Apple and others. Apple ported it to their machines - this is also the origin of Squeak (a later open source Smalltalk). But it never really caught on. Even though Apple gave developers access to it.

Well, it's not like Lisp ever really caught on either...


At least Apple had Lisp as a real developer product with an internal team developing it for a few years.

https://www.scribd.com/document/45488252/Macintosh-Common-Li...


But while computers rapidly increased in power, the tools that programmers used to program them developed relatively conservatively. It is easy to imagine a world where those tools developed along with the computers, until programming itself became so easy that the average user would feel comfortable doing it.

It's nice to imagine. But is it so easy to imagine?


He has provided historical examples that trend in that direction. The point is: industry stopped looking into this (and an implicit point is also that investigating these things is no longer well funded).


Maybe I’m cynical but I just don’t think the average person cares enough to want to program even if it’s really easy. I mean it’s easy to criticise the software for being locked down, and fair enough, but maybe .0001% of users would really want to modify things anyway.


The amount of half-baked JavaScript bookmarklets, Excel scripts, and SQL stored procedures I have come across in my lifetime begs to differ.

People are already programming, and they'd do more of it if everyone had basic knowledge of how. It's like literacy. Before general literacy, people thought it was pointless to learn to write, now everyone jots down grocery lists and working notes because they're capable of doing it without a second thought.


>The amount of half-baked JavaScript bookmarklets, Excel scripts, and SQL stored procedures I have come across in my lifetime begs to differ.

Even those are made by what? 1% of users?


People need to understand how computers work because they put a lot of trust and faith in their computers. At any time, the computer might be sending your financial details to a criminal, or taking pictures of you and sending that to a criminal. This sort of overwhelming trust is easily abused.

Learning to program is a good step to understanding how computers work. Making software "inspectable" or "modifiable" by ordinary people allows them to learn more about what's going on, and become accountable for it.

I might be wrong, and there are indeed lots of things in people's lives that they don't understand, but it's still a good thing to take into account. Computers in a sense "do more" than most other machines, and our connection with them is more intimate, so it's perhaps more important for us to understand how our computers work than, for instance, how our cars work.


Many would like to automate their lives, though - but they might not be aware that they are even able to, much less capable of doing so.


The spreadsheet proves many non-programmers want to program.


I don't think they want to. They need something (result, transformation...) and it's the easiest way. They don't like it one bit.


Creators love to create, thinkers love to think, programmers love to program.

Most people do NOT like creating (when they can use something already created by someone else without effort), programming, or thinking (it is slow, takes effort, and gratification is not instant) at all.

It is not a criticism of most people; in fact, I have seen people spending months of work installing or compiling a custom Linux like Gentoo just because they can, thinking on problems that will probably be a dead end forever, or programming things that would have been done cheaper, better, and way faster manually.

That is the reason Steve Jobs succeeded: he made what people (most people) wanted. Instant gratification, like music on iPods. Simple solutions to problems people have (simple install). Making sure creators could live from their professional work (simple payment system).

There is a talk somewhere where Steve Jobs talks about realizing that people really demand things like Trash TV.

When you sell products you see reality; Alan could only see the idealism of academia.

It is not that people do not want to improve themselves. It is just that most people do not want to pay the price, in effort, in time or focus. Just look at the Ads on TV: Lose weight with no effort with product X, in no time, while you watch TV.


> It is easy to imagine a world where those tools developed along with the computers, until programming itself became so easy that the average user would feel comfortable doing it.

I would argue it IS SO EASY. The thing is that programming itself is not a problem. The problem is the complex world the programming happens in. Programming itself even helps to reduce that complexity, though, by allowing you to traverse some amount of that complexity automatically.

The problem is not that programming is too hard for people, but that people are not willing to take responsibility for themselves and their problems. Most people look to others for solutions to their problems. That's also why people who promise them solutions, like Jesus or Jobs, are praised as saviours, even though they are never able to fulfill all the promises, at least in the amount and dimensions that the receiver would hope for.

> It is interesting that at one point, Jobs (who could not be reached for comment) described his vision of computers

A kind of statement we haven't seen for a long time. RIP, Steve.

PS: The article didn't really explain why he thinks there was a Xerox Parc philosophy inside of Apple and how/why it was in decline, right? It basically just says, there was Xerox Parc with that awesome philosophy and there was Apple who copied some of it, but altogether was a closed system developer.


This has been something that I've been thinking quite a lot about for the past few years. The early Apple was heavily influenced by the work that came out of Xerox PARC. Even during Apple's low point in the 1990s, Apple maintained a research lab that was the logical successor to many of the ideas that came out of Xerox PARC. The Dylan programming language would have been an interesting environment for programming Newton applications, and OpenDoc would have brought the idea of larger applications and compound documents composed from smaller applications and documents, which I find quite similar to the Smalltalk vision in some ways. Unfortunately Apple's research lab would be shuttered in 1997, but the rationale was understandable; Apple was on the verge of bankruptcy back then and Apple desperately needed to focus on its core product strengths.

Apple used to be the champion of personal computing. Personal computing is about empowering individuals by giving them access to computation in a relatively accessible and affordable fashion. Apple's mission was to empower the user through usability, and they applied the research from Xerox PARC and other places to accomplish this. Even though the classic Macintosh operating system is not as powerful as the Smalltalk environment, and even though certain important proposed additions such as OpenDoc unfortunately were cancelled, Mac OS enabled people to be more productive and more creative, and it even helped create many industries such as the desktop publishing market and the early web design market during the 1990s. Apple's usability guidelines for Mac programs were well thought out and enforced a coherent vision of usability for the entire platform. Mac OS X in the 2000s was the pinnacle of Mac, combining the Mac's focus on usability and good design with a solid Unix foundation that provided features that the classic Mac did not have such as preemptive multitasking and protected memory. And, let's face it, once competitors like the Amiga and BeOS died, Apple seemed to be the only major player remaining in the computer industry to have a coherent view of personal computing for the masses. If the Mac at its peak was like In-n-Out Burger, then Microsoft Windows is McDonald's, and the desktop environments for Linux are like frozen microwaveable burgers from the grocery store.

Unfortunately once Apple started making piles of money from the iPhone and other parts of the iOS ecosystem, Apple started to neglect the Mac (especially on the desktop hardware side) and the overall vision of personal computing as a way of empowering the masses. It seems today that all Apple is concerned about is the iOS ecosystem, which is a locked-down walled garden instead of the freedom and empowerment that the Mac provides.

Right now personal computing needs a champion. Many hundreds of millions of people, if not billions of people, rely on personal computers in order to carry out their business and creative tasks. Unfortunately there are no companies that are passionate about personal computing. Apple has become the iPhone company, Microsoft is focused on upholding its monopoly and building its cloud business, Google is all about mining personal data, and the Linux desktop world is too fragmented in order to put up a united front. Even worse, many of the major players in personal computing bought into the notion in the late 2000s and early 2010s that personal computers will be replaced with smartphones and tablets. This led to Apple's neglect of the Mac, Microsoft's failed Windows 8 Metro interface, and some odd design decisions in the early days of GNOME 3 and for Ubuntu. While the industry has been backing away from the thought that tablets will supplant personal computers, personal computing still lacks a champion.

My dream is for either a company or a team of open source software developers to pick up from where Alan Kay, Don Norman, and other people left off and create a personal computing operating system that combines the best ideas of Smalltalk, Lisp machines, Hypercard, OpenDoc, and Apple's usability guidelines from the Mac OS 8-9 days. It will be an operating system that is focused on composable documents similar to OpenDoc but with Smalltalk-like levels of flexibility and control. It will also have a strong emphasis on usability with a "back-to-basics" viewpoint instead of the flat design promoted by contemporary desktop and mobile GUIs.


I am curious to know which GUI you think was better: Mac OS 8-9, one of the versions of OS X, or the original black and white Mac OS (System 6)?

I am currently writing a window manager / desktop environment, and I'm kind of torn. At the moment, I'm using the look of the black and white Mac GUI, which I prefer over the colorized Mac OS 8-9 GUI, but at the same time I am fond of the early OS X GUI (10.1 Puma or 10.2 Jaguar).

You say that Mac OS X in the 2000s was the pinnacle of Mac, but then you say you want Apple's usability guidelines from the Mac OS 8-9 days. Which GUI do you think was better? What exactly was it that made OS X in the 2000s the pinnacle?

My goal is to build a "desktop programming GUI environment" using the Objective-C runtime as the base, but with the look and feel of the black and white System 6 GUI. I've written my own simplified Foundation-subset (only the stuff that I need) as well. I'd like for it to be as simple as possible without making too many sacrifices. After all, the original Mac used 128K of RAM, the original OS X could run with 128MB of RAM, and the original iPhone also came with 128MB of RAM. It's ridiculous that everything requires gigabytes of RAM nowadays.


I should've clarified in my original post my preferences regarding the Mac OS.

When it comes to overall systems, I believe that Mac OS X is the pinnacle of the Mac due to its combined usability and stability, and I also believe that the Mac as a platform peaked around the Snow Leopard era (2009-11). Mac OS X has the usability and consistency of the Mac platform with the stability of Unix. Even though Mac OS X hasn't gotten worse since Snow Leopard, unfortunately it hasn't dramatically improved since then; if it weren't for the necessity of modern browsers and security updates, I could still be productive on a computer running Snow Leopard. And the hardware situation since 2012 has been discussed on Hacker News numerous times.

However, when it comes to usability alone, I have a soft spot for the classic Mac OS. Personally I believe the classic Mac OS's interface is actually more user-friendly than Mac OS X's (Mac OS is really simple with its controls and its spatial Finder, while Mac OS X is more complex due to its NeXT roots), and I also believe that the Mac platform back then was more compliant with Apple's user interface guidelines than today (although unfortunately I don't know of a way I could measure this). I find Apple's A/UX and Rhapsody projects to be quite interesting attempts to bridge the classic Mac environment with Unix. I also like the conservative choices that Apple made at the time regarding color. While Mac OS X's Aqua is quite impressive, there's something about the subdued atmosphere of Mac OS, whether it's the original black-and-white interfaces of System 6, the light touches of color in System 7, or the Platinum theme from Mac OS 8 and 9, which is still quite conservative compared to Aqua.


I have been thinking along similar lines, so if you ever start anything up please get in touch.

I think in order to pull this off one needs to build a new kind of personal computer from the ground up with specific end designs in mind.

I have become particularly attached to the idea that the "window manager" equivalent of such a system would be "Hypercard-like" in that all windows, icons, and behaviors at that level can be described, explored, and manipulated in the Hypercard-like thing. It would be powerful enough for most users. If they needed to go deeper, they could peel back a layer and discover the (high level) systems language, something like Lisp or Smalltalk with easy library abstractions that can also be explored and learned etc. Hopefully that makes sense.


> The Dylan programming language would have been an interesting environment for programming Newton applications

The shipping product was using NewtonScript. Not too bad.


Funny how Smalltalk was one of the first object oriented languages, and Kay is among the inventors of object oriented programming, and yet the vision is completely opposed to the concept of encapsulation (which is one of today's core principles of OOP).

Aka: to provide a simple interface to manipulate an object, rather than expose its internals.


That's because Smalltalk's real invention was the term "object oriented".

The C++/Java/C# family of objects favoring encapsulation and interfaces started with Simula 67 before Smalltalk. It's actually amazing how much Simula looks like a modern object language, for example it has a direct analog to Java's inner class. Not much has really changed in terms of concepts.

The story of Smalltalk is really a story about marketing: it's a simple and pretty model that's easy to tell so people lionize it. The reality is that almost all successful object languages used in industry are based on the Simula model not Smalltalk.


+1 right on! I have some experience with Simula 67 (in the mid 1970s, as a young programmer I was tasked with converting a large Simula application to FORTRAN. After building up FORTRAN from the bottom, it was easy enough to do, but it was painful to convert something elegant to an uglier form just so it would run on our in house computer).

I totally missed the significance of Simula before your comment.


> …the vision is completely opposed to the concept of encapsulation…

Why do you think that?

p. 286, "Design Principles Behind Smalltalk" [author Daniel H. H. Ingalls]

https://archive.org/details/byte-magazine-1981-08


To reinforce your point: Alan Kay majored in both Computer Science and Molecular Biology and the cell membrane was a significant inspiration for objects. It completely isolates the inner chemical functions from the rest of the world. So encapsulation was the first thing Alan worried about.


I’m not talking about Smalltalk’s vision as a programming language but rather the operating system exposing the source of every component in read / write mode.


That is not what you wrote.


I'm very much in favor of constructionism as an educational philosophy, and thus of teaching programming as a basic skill to a wide audience.

However, I also think there is a need for professional software development, and the interfacing between professional software and casual end user programmable systems is problematic in my experience.

Try dealing with an organization where business critical processes are handled by a FileMaker database, a HyperCard stack, or an Excel macro written by a rando. For extra credit, try dealing with an organization where there are dozens of copies randomly copied to different users' machines, with different strains of customizations, none of them under version control (even if the authors were willing and knowledgeable to use VCS, casual programming environments often don't seem to play nicely with version control).

I have little personal experience with pervasively programmable systems, but I can't imagine that they interface well with professional software development either. How do you install a word processor on a system where it's anyone's guess how the addition operator behaves in a given week?


Gets right to the heart of the matter. A clear and well-written article.


If you want to run an Alto, the Living Computers Museum has an emulator that can be downloaded.

https://livingcomputers.org/Discover/News/ContrAlto-A-Xerox-...



