The Decline of the Xerox PARC Philosophy at Apple Computers (2011) (dorophone.blogspot.com)
146 points by rlander on Jan 31, 2015 | 113 comments



I worked at PARC (in the Learning Research Group) for most of a year in 1974, and later at Apple for 10 years. In both places I worked with Dan Ingalls and other Smalltalk folks. At Apple I was very involved in projects that led to various HyperCard related follow ons including AppleScript.

This article does not ring true to me at all. The problem with making a big fraction of users into programmers is that we don't know how to do it. The LRG at PARC and then various groups at Apple tried every way they (we) could think of, and other ways have continued to be invented and tried. So far none work. Hypercard was indeed the most accessible development environment, but only a small fraction of Hypercard users ever wrote code (maybe 5%).

Apple has continued to make user programming of Macs as easy as they conveniently can (as far as I can tell having been out of there for a long time). iOS has a different goal -- making devices safe and usable -- which is intrinsically in conflict with maximum programmability. Even on iOS there are plenty of "user programmable" apps.

I think part of the problem here is that many developers take themselves and their developer friends as "typical" but that is totally not the case. This really treats most of the population with (unintentional) disrespect.


5% seems pretty good to me actually! The number for iOS is probably, what, like .01%?

I have a soft spot for HyperCard -- it was the first programming I did, back in middle school, on the school computers.

They were those old black-and-white Macs with HyperCard, and a variety of stacks. We all knew the trick to turn on author mode (level 5 or something? it's been a while!), and from there you could explore and mess with existing stacks.

I still remember the thrill when I first discovered loops, and that I could write a script that would move a character on the screen in response to keys being held down. Then I discovered if statements, & made enemies that chased the main character. Honestly, I FELT LIKE GOD. I had created life.

No instruction, no books, it was all just poking around.

There's really something magical about an environment like that, and it's sad that we've gotten away from it.

(Maybe minecraft or something is the modern equivalent?)


Love your reminiscence. Really captures that awesome moment when the basic idea fully takes hold.

I do wonder why Hypercard's modern clones haven't been more successful. Maybe there are just too many relatively easy interactive environments for any to stand out. For example any browser (with the debugger open) is a pretty amazing interactive graphical development environment -- now with full 3D rendering etc. if you get ambitious.


Excel is, by far, the most successful programming-for-the-masses environment ever created. I'm not talking about VBA, either -- just the sheet and formulas are a brilliantly accessible functional programming environment.


Agreed (though "functional" is quite a stretch -- "dataflow" is accurate).
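
Roughly what I mean by "dataflow": each cell is either a value or a formula over other cells, and a change to one input flows through to everything downstream. A toy sketch (in Python rather than Excel, with made-up cell names, just to illustrate the model):

    # toy spreadsheet: a cell is either a constant or a formula over other cells
    sheet = {
        "A1": 10,
        "A2": 32,
        "B1": lambda get: get("A1") + get("A2"),   # =A1+A2
        "B2": lambda get: get("B1") * 2,           # =B1*2
    }

    def get(name):
        # pull-based recalculation: formulas are re-evaluated on demand
        cell = sheet[name]
        return cell(get) if callable(cell) else cell

    print(get("B2"))    # 84
    sheet["A1"] = 100   # change one input...
    print(get("B2"))    # ...and every dependent cell recomputes: 264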

And how many people, given a spreadsheet, are willing to look into the formulas and tweak them?


> And how many people, given a spreadsheet, are willing to look into the formulas and tweak them?

Enough that spreadsheets probably account for a trillion dollars worth of human productivity or more since their invention (just a wild guess).


In my experience, one or two orders of magnitude more than those who are willing to look at "code".


Probably. Maybe the same ratio applied to Hypertalk vs "real code" -- i.e. 5% for Hypertalk, 0.5% for "real code". We never tried to find out.


"Apple has continued to make user programming of Macs as easy as they conveniently can"

I don't see how that is true. Xcode is not installed by default. You can use AppleScript and Automator to automate some tasks, but you can't really call that programming.

The source code to the whole system is closed, which limits discoverability.

I remember in my childhood playing Gorillas on my DOS machine. I pressed a button and suddenly all the code for it popped up. I learned that if you change a part of that, the game would change.

There is NOTHING like that on a Mac install today. The closest thing is in the web browser.


> You can use AppleScript and Automator to automate some tasks, but you can't really call that programming

If HyperCard was programming, so is AppleScript. So is Python. So is bash. I think we're talking about the general sense of "programming" - since I also see Excel included alongside HyperCard in this reminiscence - not the more specific (and arbitrary, IMHO) programming vs. scripting language divide.

I loved HyperCard and was also a regular in the AppleScript world for a decade (and made a good part of my living on it at the time), but there are far more options for anyone remotely motivated to program today, and with the exception of Xcode (probably the least accessible option) they are preinstalled and freely available. And if downloading a free development environment with a simple installer is too great a burden today, I don't think you would have gotten anywhere in the good old days.

I don't really see the burden today - we have more options than ever, many are available out of the box, and the rest are almost universally free. I can automate my OS X environment with AppleScript, JavaScript, Python, Ruby, and bash (and doubtless others) out of the box. I can build command line applications out of the box. I can build Web applications with GUIs out of the box. I can easily extend AppleScript or Python (and, I'm certain, Ruby and bash) to allow me to build GUI applications.
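
To make that concrete, here is the flavor of thing I mean, using nothing beyond the preinstalled tools -- Python driving a one-line AppleScript through osascript (the dialog text is just a placeholder):

    import subprocess

    # run an AppleScript one-liner via the stock osascript tool;
    # both Python and osascript ship with the OS
    script = 'display dialog "Hello from a stock install" buttons {"OK"}'
    subprocess.call(["osascript", "-e", script])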

Programming in the '80s was different, for sure, but I have a hard time understanding the sentiment that it was somehow more accessible. Sure, BASIC was preinstalled on some platforms. Still, most platforms today have more, and better, options out of the box than any platform of the early '90s. Did you write C in the '90s for Mac OS? If we're accusing Apple of being inaccessible because Xcode isn't preinstalled, the '90s were far worse -- back then you had to pay for THINK C or CodeWarrior.


This is sort of tangential to the other issues being discussed (which I agree are important), but compared to the era of software distributed on physical media, today "Xcode is not installed by default" does not mean that much given that installing it only takes a few clicks in the App Store. Yeah, it's a few gigs to download, but Apple has always been forward-looking, and on a fast internet connection that's only five minutes' wait.

Not to mention that, also unlike then, it's easy to access tutorials and videos galore on the Internet to show you how to get started with development.


It would be interesting to mandate that all Mac apps be written in JavaScript so users could modify them. However, this seems ambitious. Even mandating that all apps be scriptable seems to be a bridge too far.

Note that JavaScript is supported in addition to AppleScript as a user-level scripting language.

Much of the Mac OS is in fact open source -- see Darwin and other projects. However, having the code for the OS available -- or having Xcode installed -- does nothing for user programmability.

This comment is a good example of why it doesn't work for us developers to see ourselves as typical users.


Apple still supports Widgets, which are canned JavaScript built to run in a special Dashboard app. (Does anyone still develop Widgets?)

But this still misses the point about HyperCard, which was that HyperCard was its own toolchain. To get started with it, you didn't have to:

- Install a separate editor
- Learn a separate editor
- Install a build system
- Learn how to build a project
- Learn how to share code
- Deal with dependencies

It's the no-separate-toolchain feature that made HyperCard so accessible, and which was influenced by Smalltalk.

Modern programming is a nightmare in comparison - including modern app programming.

Just because Apple uses Objective-C doesn't mean it's approaching app development in a Smalltalk-like way. Getting an app out of Xcode and into the store is a horror story of provisioning profiles, debug vs. production builds, sandboxes, entitlements, and so on. Obviously people manage to fight through this, but it's a long way from being friendly and accessible.

IMO only VBA gets close to being a successful no-separate-toolchain environment - which is one of the main reasons VBA became so popular.

(You could argue Python is, but I don't think it's equivalent, because unlike VBA and HyperCard the first thing the user sees is the editor, and not a product that's obviously and immediately useful, but also happens to include a code editor.)


Developers are definitely not the typical users. The hoops you have to jump through now are much greater than on the Apple II, where you got a BASIC prompt.

I think the reason people are scared of such things is not because they are incapable of learning the tools, but because developer culture has a good reason to make it look hard. Otherwise, why are we paid so well?


> making a big fraction of users into programmers is that we don't know how to do it

It depends on how you define "programmer." At Microsoft in the late 90s/early 00s we were very happy that there were so many Visual Basic programmers, but several million of them were people who would not self-identify as programmers. That is, they were people who would write VB one-liners in either Excel or Access; people who would make or modify "keyboard record & replay" macros; etc.

I agree that trying to make everybody a full-blown systems or application programmer seems like a very hard problem.


By "programming" here I just meant someone who writes (or modifies) even a small amount of code. We found that this was always an extremely small fraction of users, even in the easiest cases.

I certainly agree that self-identifying as programmers would be totally the wrong criterion.

Consider the number of people today who edit a URL in their browser location bar (for example to delete unnecessary trailing information if they are going to email it, or to try to fix it if it doesn't work). I bet a good study of a representative user population would find that only 10% or less would do this even if it was suggested.

If someone is willing to do that I'm willing to consider them a programmer in this context. They have made the basic leap to understand how to map between text and behavior, how to debug, etc.


Not sure about it constituting programming, but that (awareness of being able to manipulate the resource portion of a URL) does strike me as a good litmus test of general power-userdom.

Funnily enough, starting from Yosemite, Safari has started to hide that portion until you click into the address bar.


I bet all that funny stuff at the end kept people from reading the first part. Now that just the domain shows, I bet more people actually read it.


I don't think it's even a matter of definition. Most people just don't want to program computers. They want to just sit down and use them. If they really did want to write programs (in any way you define that term), they would: the tools are available. A few people explore macros, etc. provided in software such as Excel, but most people don't, and it's not because of the tools; it's because they just don't have the need to do it.


Those of us who became programmers in part because of discoverable built-in programming environments would argue that even if only .5% of users do any programming, the inclusion of those features is worthwhile because the other 99.5% of users will benefit from the code written by the few who were inspired to become programmers.


To add to what you wrote:

Design for extremes: if you make stuff for people who hate programming, that will make it easier for people who love it. You may end up coming up with ways to boost productivity (and fun) for people you never even thought about before: https://books.google.com/books?id=idNhCcrANP0C&lpg=PA57&ots=...


I like the philosophy of the book, though the examples are really old school. I don't think the operative dimension is "love" vs. "hate" -- more like "can easily handle" vs. "find difficult".

But I don't see how to apply it to programmability -- and I don't think Apple has found any way. How could you make something programmable for people whose eyes slide over a URL without being able to see its parts? (Note: they may be willing but they can't, just like grandpa can't make his hands stop shaking and get the key into the lock. Naturally, after enough frustration they come to "hate" whatever it is.)


I posted this a long time ago, feels relevant:

https://archive.org/details/CC501_hypercard

It's a Computer Chronicles episode on HyperCard with creator Bill Atkinson.

What's striking is how ordinary people (the used-car salesman for instance) actually _programmed_ their machines to solve a real-world problem.

This is such a drastically different paradigm than today, where even the savvy are more often consumers than producers when it comes to computing.

Anyway, HyperCard was amazing.


Did you get any resistance from graphic designers and illustrators when it came to writing code?


Those roles weren't so well defined as they are in the marketplace now. But basically, less resistance than the user base. People working around computers are already somewhat self-selected to be more comfortable with software. Plus of course they knew being able to understand code would increase their marketability.


In 1995, we watched these 1979 videos of Smalltalk in my college programming languages course, which focused on Scheme.

I vividly remember the reaction of my fellow students. Given the mockery and jokes, you'd think they were watching a bad sci-fi movie. Most students discounted everything they saw: 'real men and real programmers used C'. I remember being so disheartened that it seemed we'd evolved so little in tools/languages from 1979 to 1995.

At the time, everything was Unix and C programming (DEC Alphas were just being installed on campus, Windows 95 had just been released). There were a lot of reasons Unix/C succeeded; there is a great classic paper about why C beat Lisp, and I agree with the author.

However, what always troubled me is how my fellow students completely ignored any potential learnings from those videos. In many ways, those early Smalltalk programs were far more impressive than anything they had created, but they just wrote them off.

At GDC 2014, a post-mortem was presented on the classic game Myst, which was written entirely in HyperCard.


The prejudice of programmers is one of the biggest hindrances of technological advancement in computing AFAIC.

Think about it: currently, functional programming is, finally, getting some well deserved recognition in the wider programming world.

Yet almost everything it presents has been present in programming for as much as 45 years. The original paper on Hindley's type system was published in 1969. Milner's theory went to print in 1978. Scheme first appeared in 1975 and was already building off functional ideas that had been spawned by earlier Lisps. Guy Steele designed an actual Scheme CPU in hardware in 1979.

And yet even today, a non-trivial number of programmers react with absolute horror at the idea of a Lisp (usually based solely on ignorant trivialities like the parens-syntax), more or less exactly as your C programming classmates did in 1995, and while FP is starting to gain major inroads in some spheres, others dismiss the whole field as wank and Java and C remain kings that are unlikely to be unseated for another decade at a minimum, if ever.

We remain utterly bound to one model of hardware, one model of programming, and largely, only a couple models of operating system, after decades of development, because so many programmers react with horror at anything they're unfamiliar with or that deviates from the perceived norm, be it in features, syntax, or focus.

And God forbid you make anything that might actually be easy for non-programmers to learn. It will be more or less met with instant and persistent scorn, and its users derided and outcast, simply because they didn't use a 'Real Programming Language' like C. Go ask a BASIC coder what life has been like for the last 40 years, or a game dev who worked in AGS or GameMaker prior to the last half decade or so. Hell, I have a friend who still sneers at visual layout designers.

The divide described in the article is very much culturally enforced as much as economically.


> Think about it: currently, functional programming is, finally, getting some well deserved recognition in the wider programming world.

People assume that functional wasn't used because of programmer prejudice.

Why don't people assume that functional wasn't used because there were good reasons not to use it?

Imperative programming lives in a world where CPU cycles and memory are scarce. Gee, once CPU and memory became abundant and free, people started using functional programming. Go figure.


> The divide described in the article is very much culturally enforced as much as economically.

Statements of this kind trouble me every time I see them, last time yesterday in the discussion about Greece.

I guess Marx's ideas aren't very popular over here. But implying that economy and culture need to be mentioned separately seems at least a little naive.

More fundamentally, you're ignoring the huge difference in power between consumers (in this case the programmers) and the people that create the tools and take them to market.


Actually, what I mean by that is rather the difference in power between the guys doing the coding, and the guys making the business decisions.

Both tend to enforce that divide, but for very different reasons; the coders for cliquish reasons as I described, and the business guys like Apple for the reasons in the original link.


The coders can't really enforce anything. We have opinions and fall in groupthink indeed. But the mere fact that there are different tools is obviously caused by the people and organizations that create them.


Bologna. Plenty of programmers are constantly searching for the new new thing, obviously (ahem hn). It's mostly the maintenance types of people who are prejudiced, and they aren't just programmers.


It's not just a language issue; there is a divide amongst platforms too. You can use the same language and you will find people who will still talk down or ignore your points because of platform or language choice.

It is bizarre.


> And God forbid you make anything that might actually be easy for non-programmers to learn. It will be more or less met with instant and persistent scorn, and its users derided and outcast, simply because they didn't use a 'Real Programming Language' like C.

Prejudice doesn't seem to have anything to do with it. Functional programmers think differently, and what's obvious to the Functionals isn't to the Statefuls. And the Statefuls are currently most of the world. I've flip-flopped myself, because while I love the elegance of being a Functional, being a Stateful is just so much more productive. There are a few reasons for this: If I want to make a game, there's no good functional framework. If I want to write a script to get something done, like download a webpage, my go-to language is Python, because I know for a fact that their libraries work and that their documentation is almost always stellar. Contrast that with Lisp, where you can spend at least a day just getting the environment set up in a way that asdf doesn't hate. Especially on Windows. (Yes, if you want to make games, Windows needs to support your dev environment.) My info about asdf is a couple years out of date, because to be honest I haven't felt inclined to look into it again after some bad experiences.
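
To be concrete, this is roughly all I mean by a quick "get something done" script -- Python 3 standard library only, with a placeholder URL:

    import urllib.request

    url = "https://example.com/"   # placeholder URL
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")

    print("fetched %d characters from %s" % (len(html), url))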

Haskell could be wonderful. Never tried it. Will someday. Until then, I'd love some sort of competition where a Haskell programmer and myself are given a task, like "write a script to X," where X is some real-world relevant task and not an academic puzzle, and see who finishes it first. It would be illuminating, since I'd give myself about a 30% chance of finishing first, but it would reveal what I'm lacking.

Arc had potential. It really did. Everyone just gave up on it, and it never attracted the kind of heroic programmers that Clojure did.

So the wildcard seems to be Clojure. It's a decent mix of performant, practical, and documented.

I'm out of time to pursue this comment further, but the main point is just that FP's problems have very little to do with societal acceptance or scorn. If you're running into that, you're probably running with a bad crowd anyway. It's mostly because imperative languages are popular, so network effects mean they'll just get better and better. If FP wants to chip away at that, it'll need to start off better and stay better. "Better" is many things, but it includes performance, cross-platformability (yes, Windows is necessary), documentation, and practicality (the ability to quickly accomplish random tasks without a huge prior time investment, Python seems to be the best at this so far).


> Haskell could be wonderful. Never tried it. Will someday. Until then, I'd love some sort of competition where a Haskell programmer and myself are given a task, like "write a script to X," where X is some real-world relevant task and not an academic puzzle, and see who finishes it first. It would be illuminating, since I'd give myself about a 30% chance of finishing first, but it would reveal what I'm lacking.

I think one of Haskell's biggest marketing problems is that its strong points (strong static types + separation of side effects) aren't all that important in scripts (or any program that's small enough to fit in someone's head in its entirety), which makes it difficult to convince people of its merits in reasonably-sized examples.

What Haskell gives you are good, solid abstraction boundaries that you cannot accidentally break, and the ability to refactor code with a high degree of confidence that it's still going to work fine afterwards.

Neither of those is particularly helpful for any program that you might write in a competition, but both are incredibly important in day-to-day software development.


My apologies, that portion of my point was meant to be independent of the FP comparison. It is absolutely true that, save for the occasional grudging exception, the 'easy languages' have largely been scorned and shunned throughout programming history. Hell, I've been guilty of it myself where HyperTalk is concerned despite loving the concept.

As for "getting work done" in Lisps/FP, I earnestly recommend checking out Racket. The developers have said that they've aimed to create 'the Python of Lisps' and by and large they've succeeded at exactly that. The documentation is thorough, the functional tools live alongside OOP and imperative ones quite nicely, the standard library is enormous, and DrRacket makes for a very good 'just open up the damn editor and start writing' tool.

Really, most of the FP languages are much more multi-paradigm friendly than Haskell, to the extent that I wouldn't even consider CL a functional-first language at all. CL's problems there are more a lifetime of neglect + Emacs loyalty, though Allegro and LispWorks offer more 'everyone else' friendly options. F# is fantastic as well, and integrates nicely with the rest of .NET and allows for a mixed paradigm while still getting all the functional tools and the power of type inference on top.


> Contrast that with Lisp where you can spend at least a day just getting the environment set up in a way that asdf doesn't hate.

I recently re-installed my OS and setting up my Lisp environment (SBCL, Quicklisp, SLIME+Emacs) took about 30 minutes.


Yeah, download SBCL, load quicklisp.lisp, done.

I don't see what the problem is on Windows.


Several years ago SBCL was unusable for game programming on windows due to crashes. I tried and eventually gave up.

My info is out of date though. Maybe it's better now.


I've not had any issues with SBCL on Windows except with the thread support.


> Stateful is just so much more productive. There are a few reasons for this: If I want to make a game, there's no good functional framework.

> my goto language is Python because I know for a fact that their libraries work and that their documentation is almost always stellar.

> It's mostly because imperative languages are popular, so network effects mean they'll just get better and better.

This claim makes it seem like imperative/stateful is more productive because of something inherent to functional programming itself:

> I've flip-flopped myself, because while I love the elegance of being a Fucntional, being a Stateful is just so much more productive

All of your other comments seem to be "imperative/stateful is more productive because of popularity, docs, and libraries".

Are you claiming both of those are true? Weakly claiming the first, strongly the second? Could you elaborate?


No no, I'd never claim something as foolish as FP is inherently less productive. It's the opposite in my experience. Once you have a bunch of libraries and utility functions that you know how to use, you'll wipe the floor with any stateful programmer. The hard part is getting to that point. I'll elaborate more in an edit when I'm home in like 30 minutes.

EDIT: Maybe a little longer.


I'm short on time, but my main point was just that if you're already a stateful programmer, it's difficult to become productive when switching to functional programming due to a combination of factors. FP isn't inherently more difficult than stateful programming. It's simply that stateful programmers have already spent the time to become productive, whereas switching to FP requires another time investment in terms of learning a new mindset, getting a programming environment set up, learning how to exploit the environment (how to be "fiercely effective"), learning which libraries to use and how to find them and set them up, and most importantly: deployment. Deploying native apps in stateful languages has seemed easier for some reason, whether it's python or C or whatever else. I tried to use lisp for gamedev, but it became a nightmare to figure out how to actually deploy everything including libraries along with all dependencies. The end result of deployment needs to be a program that's launchable by double clicking on an icon, which seemed difficult. (Again, my info is out of date, so maybe things are much better now.)

So it's just a combination of mindset change plus a different "context" for programmers to get used to. That, along with deployment problems and sometimes lack of documentation, leads to a lack of incentive for anyone to make the switch. Since stateful is more popular, it's almost always better for productivity to start and stay a stateful programmer. However, for personal skill level, everyone needs to become a functional programmer for at least several months and try to develop production apps with it. Who knows, you'll probably get much further than I did, and figure out a way to convince all your friends to make the switch too.


Or maybe, just maybe, it's all about PERFORMANCE.

Very recently computers got to the point where we can afford to write slow code, and this is the reason the higher-level people are coming out of the woodwork.


Yeah, everything pretty much fell apart in 1958 with that stinking McCarthy. Since then, trying to write any performant code has meant dealing with an inane chatter of closures and inheritance and parametric polymorphism. Maybe, if we're lucky, computers will get slow again and performance will matter.


Coincidentally, I spoke at length with a grad student software developer for the RoboCup challenge, in which autonomous robots compete. In this particular league, Nao robots are supplied to each team as the standard, which puts the performance emphasis on the software that runs on the robot.

So I asked the developer whether he used the famous ROS software for running the robot. He said no way: because the robot had only a relatively low-performance onboard Intel Atom, all the complex multi-threaded behavior had to be written in low-level C. There was no space nor execution time for anything more.


I'll bet you a nickel that grad student didn't learn about complex multi threaded behavior in low level C. His challenge was to take a high level concept and express it in a resource constrained environment.

Of course there is a need for freestanding C. But for every one of him there are a hundred developers using C++ or Java or JavaScript. They may not appreciate the power of working in such high-level languages, but the vast majority of developers work in HLLs.


Were it really performance-critical, the software would have been written in assembly.

There. Top that.


GDC 2013 Myst Postmortem:

http://www.gdcvault.com/play/1018048/Classic-Game-Postmortem

Their Hypercard experience is at the 38 minute mark.


Sorry, it's just bemusing thinking of 19-year-old college students bragging about being real men.

But this is the problem with our profession, it's baked in at a deep level that old == bad. So every new generation reinvents the wheel, worse than the last. The truth is all the big problems in software were solved in the 1970s, if only we would pay attention. The only actual progress in computing is that the hardware guys make everything faster.


Thanks for sharing.

Do you have a reference to that paper on C and LISP?


I think mmarks is referring to this classic:

https://www.dreamsongs.com/WorseIsBetter.html


Please note, the actual essay in question is here: https://www.dreamsongs.com/RiseOfWorseIsBetter.html


I just discovered that Gabriel wrote a rebuttal to himself under a pseudonym: https://www.dreamsongs.com/Files/worse-is-worse.pdf


I got a tour of Xerox PARC in 1975, when taking McKeenan's UCSC course in computer architecture. I got to see the first Ethernet (Alan Kay referred to it as "an Alohanet with a captive ether"), the first Alto workstations (they were making their own CRTs, and were having trouble getting a uniform phosphor coating), an early version of Smalltalk, and a xerographic print server.

Back then, Kay saw simulation as the killer app. He saw discrete-event simulators as a business tool. They had a hospital simulation, where patients went in and went through Waiting, Examination, Surgery, Recovery, Rest, and Discharge. This was visual, and you could click on the little patient icons and get something like "I am a victim of Bowlerthumb". Smalltalk was a descendant of Simula, which was a simulation add-on for Algol. So that was a natural direction.

It turns out that very few people use discrete-event simulators as business tools. Although you can model your business -- a bank branch, say -- in a discrete-event simulator and try to fix bottlenecks, nobody does this. That was a dead end as a concept.

Xerox's commercial product, the Xerox Star, was very much locked down. It came with a set of canned applications, and was intended for use by clerical staff. I don't think it even ran Smalltalk, but was programmed in Mesa or Cedar. It competed with a forgotten category of products, shared-logic word processors. These were low-end time-sharing systems with dumb terminals tied to a server machine with a disk, used for word processing. Wang was the leader in that field. Those, too, were very locked down.

Back then, hardware cost was the huge problem. Kay said they could build the future because Xerox had the money to spend. (Xerox stock hit its all-time high in 1973). Each Alto cost about $20K at the time. Apple's first attempt at an Alto-like machine was the Lisa, which was a good machine with an MMU and a hard drive. But it cost around $10,000. The Macintosh was a cost-reduced version, and the original 128K floppy-only Mac was a commercial flop. It wasn't until the cost of parts came down and the Mac could be built up to 512K and a hard drive, with the option of a LaserWriter, that it started to sell in volume.

What made the Macintosh line a success was desktop publishing. "Now you can typeset everything in your office" - early Apple advertising. It sold, because it was far cheaper than a print shop.

Before the Macintosh, there was UNIX on the desktop. Yes, UNIX on the desktop, with a GUI, predates the Mac. Three Rivers, Apollo, and Sun all had good workstations with big screens in the early 1980s, before the Mac. The Three Rivers PERQ launched in 1979, five years before the Macintosh, with a big vertical CRT like the Alto. All these had some kind of GUI, generally not a very good one. Those were the first descendants of the Alto. They were used for engineering and animation, not just word processing.


iPad is the first computing device in history that I can suggest to my non-computer-savvy friends and know they will be able to use it, without the software getting screwed up in a few months. With the iPhone and iPad, Apple has (pardon the hyperbole) brought usable computing to the masses.

We can bash Apple that they did not turn the masses into programmers at the same time, but it’s far from obvious this is even possible (on the contrary). And, at least for me, a hackable software device is a device that can be broken more easily, thus compromising the first and most important principle of usability. It’s so refreshing to be able to tell people that they don’t have to worry about the device, because there is almost nothing they could break, no matter how hard they try.

Also, this sounds awfully condescending:

I think one of the main consequences of the inventions of personal computing and the world wide Internet is that everyone gets to be a potential participant, and this means that we now have the entire bell curve of humanity trying to be part of the action. This will dilute good design (almost stamp it out) until mass education can help most people get more savvy about what the new medium is all about. (This is not a fast process). What we have now is not dissimilar to the pop music scene vs the developed music culture (the former has almost driven out the latter -- and this is only possible where there is too little knowledge and taste to prevent it). Not a pretty sight.

It’s like someone has got the whole computer interaction thing sorted out and is just waiting for the rest of the idiots to catch up. With all respect to Alan Kay, I’m not buying that.


>iPad is the first computing device in history that I can suggest to my non-computer-savvy friends and know they will be able to use it, without the software getting screwed up in a few months.

This is really important. Mutability is a key requirement for creation, but also destruction.

What's more, the iPad is a capital P Product, which itself hosts Virtual Products, both of which have tightly constrained mutability. And this, it seems, really resonates with people: bright, shiny, consistently behaving virtual products. Apple wins financially by tightly coupling these virtual products to their physical product, but people win too because now they have, arguably for the first time, truly reliable virtual products.


iPad is the first computing device in history that essentially takes away any user capability to tinker with the machine, forbids installing any software that Apple decides shouldn't be running on it, including basic things like alternative browsers (as you know, Chrome on iOS is only a reskin of the included WebKit) or installing a previous OS (which means my one-year-old iPad 3 became a slow but beautiful paperweight after updating to iOS 7), you can't develop apps for it without paying Apple 100€/year (even if you don't plan to distribute them), etc. But if you think that's okay because it means less technical support for friends, go ahead.


> forbids installing any software that Apple decides it shouldn't be running on it

Weird, I got a developer license and was building and installing random github projects on my iPad without any fuss. If you can't afford the license, go in on it with 100 of your closest friends and it's only a buck.

> But if you think that's okay because that means less technical support for friends, go ahead.

I'd happily pay 100 bucks not to do tech support anymore. I sent my mother my iPad 2 a year ago and haven't heard a peep from her about computing problems since.


So you don't see any issue with paying Apple _again_ to grant you the right to install what you want in the device you already paid for (and I'm not talking about distributing it in the App Store). Well, we can agree to disagree.


It's not that I don't see any issue--I do and it's actually a selling point. The best security in the game comes at a price that I, and millions of other satisfied customers, are more than willing to pay. The only people who complain about Apple's system are people who don't mind doing free sysadmin labor on a device that they already paid for.


Ever hear of an NES?

History didn't start in 2008.


It's a spectrum; game consoles have always been similar to that too.


There is also no way to fix things when they break:

One of the iOS upgrades (7 or 8) brought massive performance problems, on a normal OS you would just downgrade but there is no such option on iOS.

Alternative browsers are also very slow, it looks like Chrome isn't able to take full advantage of V8. An Android device that costs less than a third than the iPad can run Chrome much better than an iPad2 can.

The iPad 2 (WiFi) also has a GPS device, but it only works when you are connected to WiFi: I couldn't get it to work without WiFi, which limits its usefulness. Android devices with GPS always worked offline; you even have the option to download AGPS data when you're connected to WiFi to speed up TTFF, but even without that it works, it just takes (a lot) longer to get a fix.

Try finding the equivalent of a commonly used application from Linux/Windows on iPad that isn't filled with ads, or limited features (unless you do an in-app purchase). For example is there a usable VNC client?

In the end the iPad is useful only for: web browsing (with default browser), reading emails, playing games. An equivalently priced Android device can be much more useful, and you have a wider selection of useful applications.

With all these problems I couldn't recommend anyone to buy an iPad (much less an iPhone), and I'll probably sell mine at some point.


The iPad 2 WiFi doesn't have any GPS; it uses Wi-Fi positioning to geolocate, hence the need to have WiFi on:

http://en.wikipedia.org/wiki/Wi-Fi_positioning_system

There are several good VNC clients for iOS, here are a few paid and free ones: http://www.thetechgadget.com/top-vnc-client-iphone-ipad-ipod...


> The iPad2 (WiFi) also has a GPS device, but it only works when you are connected to WiFi

Testing on my iPad3 right now, and it seems to work fine. iPad2 should really be considered a legacy device at this point (even if Apple still sells it).

> Alternative browsers are also very slow, it looks like Chrome isn't able to take full advantage of V8. An Android device that costs less than a third than the iPad can run Chrome much better than an iPad2 can.

And you can't use Safari at all on an Android device! Gasp! As for the JS engine, I'm pretty sure all apps have the ability to use the full-speed one now, and if Chrome is lagging behind, the blame lies with Google, not Apple.


Chrome has never used V8 on iOS. All browsers on iOS use the built-in WebView with custom UI and with iOS 8 all 3rd party browsers have access to the same full-speed Nitro Javascript engine that Safari uses.

WiFi versions of iPads do not have GPS. Only the cellular versions. And those cellular models can use GPS even with no cellular connectivity.

Complaining about iOS apps with ads is laughable when Android is just as bad or worse. And god forbid that developers make any money with a 99 cent app or in-app purchase. And it's obvious you didn't even try a simple Google search for "iOS VNC" as there are plenty of worthy apps.


Looks like they haven't updated Chrome to the new API as the benchmarks here still show it as significantly slower: http://octane-benchmark.googlecode.com/svn/latest/index.html

I agree about the Android market, but at least there are alternative sources like f-droid which usually have a selection of better apps.

I tried 'vnc viewer' and 'vnc client', neither of which worked too well. There are probably better ones if I'd kept looking.


For me, this was the money quote which I think is a sentiment not mutually exclusive with what you're saying:

It is just a shame that in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured


> the pop music scene vs the developed music culture (the former has almost driven out the latter -- and this is only possible where there is too little knowledge and taste to prevent it)

I wonder about the idea that popular music is undeveloped, tasteless, and ignorant. Take Michael Jackson's "Off The Wall" album. It's a very sophisticated album by any measure except, of course, that it is pop music.


Yeah, that was a really ignorant statement for him to make. It cracks me up because a lot of pop musicians are classically trained.


I don't think Kay was claiming popular music is _universally_ tasteless; that would be ludicrous. He's lamenting that taste is not important to music's popularity. That you could think of a particular example of developed popular music would seem to support the claim that pop music is, in general, undeveloped.


There are an awful lot of great pop albums. I don't find them inferior in any way to classic classical music.

> He's lamenting that taste is not important to music's popularity.

Taste is such a subjective thing that really all he is saying is that popular music is not to his particular taste.


A lot of the most popular music has lyrics utterly devoid of meaning, simplistic least-common-denominator structure, humanity auto-tuned out of the vocals, and dynamic range compression crushing out subtlety for greater loudness. Taste IS subjective, and that's why the things that become very popular have to have very little of it.


Ravel's "Bolero"


I think he might refer to what Geocities was to web design, or what PHP copypasting is to software development.


Indeed! I for one welcome our plebeian overlords.


I have mixed feelings about these sorts of utopian dreams of universal programming. On the one hand, yes, that would be amazing! On the other, whenever you make a system that open, inevitably malicious strong programmers will find a way to take advantage of naive weak ones. I still think it's possible, but Smalltalk -- and everything else we've come up with so far -- is still hopelessly inadequate. I suspect that the problem of creating a programming environment that anyone can use is roughly equivalent to the problem of creating an AGI -- it has to do what the user means, while seamlessly handling the thousands of details and gotchas that can sink them.


>On the other, whenever you make a system that open, inevitably malicious strong programmers will find a way to take advantage of naive weak ones.

Can you explain this and give an example? I've been around for a while and don't see, or perhaps don't recognize, this pattern. If anything, I observe the opposite: very strong programmers help the weaker ones by creating new languages, libraries, and tools.


I'm talking about malicious hackers. They're a tiny minority, but it only takes a few.


Viruses, botnets, key loggers...


Malware.


Alan Kay gave a talk that was recently discussed here on Hacker News titled: Is it really "Complex"? Or did we just make it "Complicated"?

https://www.youtube.com/watch?v=ubaX1Smg6pY&=

Someone asked Alan Kay an excellent question about the iPad, and his answer is so interesting, and reveals something very surprising about Steve Jobs losing control of Apple near the end of his life, that I'll transcribe it here.

To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)

Questioner: So you came up with the DynaPad --

Alan Kay: DynaBook.

Questioner: DynaBook!

Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now. But does it tick you off that we can't even run Squeak on it now?

Alan Kay: Well, you can...

Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.

Alan Kay: Well, there's a variety of things.

See, I'll tell you what does tick me off, though.

Basically two things.

The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.

But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.

That couldn't be any more anti-personal-computing if you tried.

That's what ticks me off.

Then the lesser thing is that the user interface on the iPad is so bad.

Because they went for the lowest common denominator.

I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.

Because that's what Apple has catered to: they've catered to the absolute extreme.

But in between, people, you know, when you're two or three, you start using crayons, you start using tools. And yeah, you can buy a capacitive pen for the iPad, but where do you put it?

So there's no place on the iPad for putting that capacitive pen.

So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.

And again, who cares?

There's nothing wrong with having something that is brain dead, and only shows ordinary media.

The problem is that people don't know it's brain dead.

And so it's actually replacing computers that can actually do more for children.

And to me, that's anti-ethical.

My favorite story in the Bible is the one of Esau.

Esau came back from hunting, and his brother Joseph was cooking up a pot of soup.

And Esau said "I'm hungry, I'd like a cup of soup."

And Joseph said "Well, I'll give it to you for your birth right."

And Esau was hungry, so he said "OK".

That's humanity.

Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.

So you could blame the schools.

I really blame Apple, because they know what they're doing.

And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.

But I blame Apple more for that.

I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.

I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.

So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.

So people do crazy things like attaching things into mail.

But that's not the same as finding something via search in a web browser.

So I think it's just completely messed up.

You know, it's the world that we're in.

It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.

Not good enough to learn anything worthwhile.


While Jobs saw a Smalltalk system during some Xerox PARC visit, the Mac was more influenced by the Xerox Star office system with its GUI, Desktop, Icons, ...

http://www.digibarn.com/collections/systems/xerox-8010/retro...

Compare that with a classical Smalltalk UI:

http://research.microsoft.com/en-us/um/people/blampson/38-al...

The programming language of the Xerox Star system was 'Mesa'. Which was much more like what Apple used: Clascal, Lisa Pascal, and then Object Pascal, ...

http://en.wikipedia.org/wiki/Mesa_(programming_language)


Yeah, back in the day the bitmapped display was so awesome. Look at the proportional fonts and bitmapped image!

I was using IBM 3270 terminals, in a special shared room for that purpose, to edit COBOL programs with fixed width fonts - ugh.


As Smalltalk allows you to write really readable code, I often wonder how much subconscious fear, coming from the IT community itself, is triggered by something like Smalltalk.

How strong would the self-defence mechanisms be if such a system succeeded at allowing anyone to be equally able to program anything on the system?

Wouldn't it be way "safer" for the status quo of the IT community to hide implementation power behind the walls raised by the C learning curve?

Interestingly, the divorce between technicians and users/consumers is steadily reversing the trend.


That’s simply crazy. Are you a programmer?

I, as a programmer, spend my days trying to express things as clearly as possible. Programming, as a whole field, is trying hard all the time to come up with safer and simpler ways to express precise thoughts.

Smalltalk is probably neat, but it's not a magic bullet. There are no magic bullets. Not everyone can be a programmer, and not because programming languages are hard. Not everyone can be a programmer because you have to be able to think precisely, abstract, and deal with complexity.


Yeah, the simple explanation as to why the programming community at large doesn't agree on languages or methodologies is that no one knows what they are doing. This is still a very new field. We're all just bashing rocks together. To declare some language or methodology as universally "better" just reveals a fundamental lack of insight into just how difficult the problem being solved really is.


> I, as a programmer, spend my days trying to express things as clearly as possible. Programming, as a whole field, is trying hard all the time to come up with safer and simpler ways to express precise thoughts.

Mmm... I'm not so sure. "Programming as a field" certainly does, in the sense that if you take it as a whole and look into the field, there are people trying for those things. But by and large, many programmers seem to revel in some degree of obscurantism.


No I don't really think so. Otherwise python and ruby would trigger similar fears.

And I work with Objective-C a lot. There is a downside to theExtremelyLong:message passingStyle:functionNames

You could say similar things about applescript too, yet I've never met anyone who actually uses it: http://alvinalexander.com/blog/post/mac-os-x/applescript-lis...

Smalltalk was good and it inspired a lot of future things but it didn't succeed because it had a bad $ price / performance ratio in the 80s and the image system created too much coupling compared to files.


"There is a downside to theExtremelyLong:message passingStyle:functionNames"

I actually like the selector syntax. I have type ahead and don't have to constantly look at code to figure out what parameter number something is.


It is nice in some cases, but, for example, string manipulation is pretty hard to parse compared to Python because frequent operations are huge sentences. Pulling out JSON data that is presented as NSArrays and NSDictionaries was especially painful. That large amount of text makes it slower to parse and thus you're less productive. As PG said, the consistent factor in bugs seems to be lines of code. The more verbose and boilerplate-ish a language is, the more likely you're going to have slip-ups that cause bugs for similar functionality.

Ex: [NSString stringWithFormat:@"%d items", itemCount] vs. "%d items" % itemCount

or

json[@"keyA"][@"keyB"][0]; vs. [[[json objectForKey:@"keyA"] objectForKey:@"keyB"] objectAtIndex:0];


Agreed. My college days Fortran IV only allowed 8 character upper case identifier names - ugh!


>> … but it didn't succeed because … <<

By what definition of success?

From the '80s through the '90s, Smalltalk was used commercially in finance and manufacturing and … on large-scale projects.


Well, yes, there were commercial implementations, but people bemoan Smalltalk 'dying' compared to C++, C, or even Java, for example.


Presumably because they now have to use C++, C or even Java.


Smalltalk, as used by enterprises in the 1990s, was far from simple to program. Object-oriented programming is a powerful tool to manage complexity, so Smalltalk programmers at the time used it to introduce then-new programming constructs that would confound trainee developers ...

* publish - subscribe
* everything is an object
* message passing
* naming conventions for Classes, methods, and parameters
* Image-based editing as opposed to word-processing style editors, and so on...

In its domain, for ease of use back in the mid 1990's, Microsoft Visual Basic was the best. See the pattern: Visual Works Smalltalk, IBM Visual Age Smalltalk, Microsoft Visual Basic.


"...[Steve] Jobs (who could not be reached for comment)..."

I did a double-take after reading that line before I noticed the publish date.


Despite the fact that computer languages are technological constructs, what happens to them is similar to what happens with other human languages. Adoption is more correlated with usefulness than with any intrinsic characteristic of the language. For example, English is the most used language in the world, despite the fact that it is a bad language in so many senses: hard to spell, hard to speak, ambiguous, etc. Other languages more suited to the task are forgotten -- just take the example of Latin, which is now a dead language. So, despite the fact that the C family has shortcomings, this has little to do with its destiny as a way for humans to collaborate in creating software.


This line of thought is pretty common amongst a certain class of modern HCI researchers.

See for example a tweet from Bret Victor a few days ago:

https://twitter.com/worrydream/status/560519372641677312

Or this article by Alec Resnick:

http://alecresnick.com/post/84756789325/from-bicycles-to-tre...

Their argument can be roughly summed up as "We were on the right track with the constructivist approach to HCI in the 70s/80s, but capitalism ruined everything and now we just watch Kim Kardashian on our tablets". There's of course some cynical truth in there -- I also spent my grad school years reading Papert and Piaget and Kay and Resnick (Mitchell, not Alec), and I also find their vision of personal computing very enticing. And in many ways I orient my work to fit within the frameworks they built -- I have nothing but deep respect for them as academics.

But I don't buy the whole "tablets and phones are just mind numbing dumb entertainment devices". In the past 5 years, I have seen:

- my younger brother start an internet radio station from scratch from a bedroom at our parents', which rose to thousands of listeners

- my grandmother use the internet to communicate with her geographically distant friends and family

- teenagers making short movies in their backyard, or learning how to compose music

- kids discovering themselves a passion for photography, able to retouch photographs without needing access to a dedicated dark room, all with a hundred dollar device (when I was a kid, a hundred dollars barely got you a walkman).

- illustrators empowered to work from their home and make a living by working with clients many thousands of miles away

- and many, many more.

None of that would have been possible even 15 years ago. Our tools are much, much better -- Garageband and Photoshop and iMovie and Fruity Loops and others offer the means to do things in your bedroom that would have cost tens of thousands of dollars and required hundreds of square feet of free space a few decades ago. Sure, you have some purists who might argue that Instagram is to a dark room what a McDonald's burger is to Kobe beef -- but I take issue with that as well [0].

Naturally, there are people who only use these devices for watching movies and playing games. That's absolutely fine, and I'm not sure why computer scientists like to take such a haughty attitude towards that. Not everyone spends all their waking hours working on their next masterpiece; and in fact, even the people who produce masterpieces spend idle time doing unproductive things just on par with reading Reddit or watching silly YouTube videos.

Are computers the best they can be? Far from it, and many of us work very hard at making them better (including the people I quoted earlier). But the attitude that we are now in a tarpit of mind-controlling devices and that the golden days of computing are behind us is deeply wrong. We have come such a long way.

[0] https://news.ycombinator.com/item?id=8856780


I blame fervent fanboyism of any particular hardware or software for leading us to such levels of distraction. The article mentioned something dear to my heart that I want to expand upon because I think it’s at the core of what’s preventing real progress not just in computer science but in the way computers elevate humanity. One of my biggest dreams is to see a reconciliation between message passing languages like Smalltalk/HyperTalk and functional languages like Lisp. They are more similar than they may seem.

For example, functional style tends to eschew intermediate variables, but I think that has set back its adoption, because variables merely attach human readability to logic (much like comments). Seeing a big blob of nested functions is great for brevity but disastrous for long-term maintenance. A compiler can trivially treat variables (especially immutable ones) like macros and transform them back into the same syntax tree that pure functional code generates. To put that another way: why don’t functional paradigms encourage us to break long expressions into shorter ones bound to named variables? They seem to discourage a human instinct when there is no cost to encouraging it.
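
To make that concrete, here is a rough sketch in Java rather than a functional language (the names and numbers are made up for illustration): the two methods below do the same work, and the compiler/JIT inlines the named final locals, so the readable version costs nothing at runtime.

    class PriceExample {
        // One big expression: brief, but the reader has to unpack it mentally.
        static double totalTerse(double unitPrice, int quantity, double taxRate) {
            return unitPrice * quantity * (1.0 + taxRate);
        }

        // Same logic with named immutable intermediates; the compiler treats
        // them like macros, so the generated code is effectively identical to
        // the terse version above.
        static double totalReadable(double unitPrice, int quantity, double taxRate) {
            final double subtotal = unitPrice * quantity;
            final double taxMultiplier = 1.0 + taxRate;
            return subtotal * taxMultiplier;
        }

        public static void main(String[] args) {
            System.out.println(totalTerse(9.99, 3, 0.07));
            System.out.println(totalReadable(9.99, 3, 0.07));
        }
    }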

Also, if we take a step back and see that eliminating globals gets us most of the way to functional programming, then the main thing left is the notion of time, introduced by externalities like input/output. Imagine for a moment working in a Lisp environment at the REPL. When you type some code and hit enter, how is that fundamentally different from a monad? Well, it potentially triggers the entire syntax tree to be reevaluated, which is actually more expensive than handling input with a monad. But metaphorically it’s similar -- we can think of monads as the places where the logic can’t proceed because it’s missing a piece of the puzzle. If we were using functional programming properly and assumed a near-infinite number of cores and flops, we’d quickly see that the monads are the bottleneck.
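
A loose Java analogy for that intuition (with CompletableFuture standing in for the monad metaphor, and arbitrary numbers): the pure computations below can be scheduled across as many cores as are available, while the one step that depends on outside input is where everything downstream has to wait.

    import java.util.Scanner;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.LongStream;

    class BottleneckSketch {
        public static void main(String[] args) throws Exception {
            // Pure, independent work: free to run concurrently on any core.
            CompletableFuture<Long> sumOfSquares = CompletableFuture.supplyAsync(
                    () -> LongStream.rangeClosed(1, 1_000_000).map(n -> n * n).sum());
            CompletableFuture<Long> sumOfCubes = CompletableFuture.supplyAsync(
                    () -> LongStream.rangeClosed(1, 1_000).map(n -> n * n * n).sum());

            // The "missing piece of the puzzle": nothing downstream can proceed
            // until the outside world supplies a value, so this is where the
            // pipeline serializes.
            System.out.print("Enter a number: ");
            long userInput = new Scanner(System.in).nextLong();

            System.out.println(sumOfSquares.get() + sumOfCubes.get() + userInput);
        }
    }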

We should be able to translate between global-less Smalltalk message passing and Lisp monads, and let the compiler optimize away any monads that don’t depend on external state. To me, that suggests that working at a level below human-readable/imperative code is generally a waste of time. We should be able to select any code block and use a dropdown menu to pick the language we want it presented in. I remember the first time this hit me, asking myself whether it was possible to write something in HyperTalk/Smalltalk/C (or any other imperative language) and convert it to Lisp under these conventions. The answer seems pretty self-evidently yes. Going the other direction, from Lisp to an imperative language, is even easier.

A ramification of this is that if you converted a C loop that uses immutable variables and no globals to Lisp, it could be reduced to the most minimal logic possible, because analysis would reveal that (for example) the iterations have no side effects on one another. In other words, the compiler could parallelize the loop on its own and derive something analogous to map/reduce. Why are we doing that by hand? Surely there has to be a better reason than brevity, but I struggle to find one.
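
In Java terms, the rewrite being described looks roughly like this (a hand-written sketch of what such a compiler might derive, not actual compiler output): since no iteration depends on any other, the loop can be restated as a map/reduce and split across cores.

    import java.util.stream.IntStream;

    class LoopToMapReduce {
        public static void main(String[] args) {
            int[] values = {3, 1, 4, 1, 5, 9, 2, 6};

            // Imperative loop: the sequential order is spelled out even though
            // no iteration affects any other.
            long loopSum = 0;
            for (int v : values) {
                loopSum += (long) v * v;
            }

            // The same computation as a map/reduce; the runtime may split the
            // work across cores because each element is handled independently.
            long streamSum = IntStream.of(values)
                    .parallel()
                    .mapToLong(v -> (long) v * v)
                    .sum();

            System.out.println(loopSum + " == " + streamSum);
        }
    }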

When it’s all said and done, I think the arbitrary distinctions we’ve made between computer languages are relevant to education and worker productivity, but in the end they are fantasy. I would have thought we would have dropped the pretense by now. If computers worked the way I had imagined they would by this point in history, a tablet would have at least 1000 cores (or drop the notion of cores entirely and go with functional hardware). The compiler/runtime would consider the latency between it and the other devices around it in the mesh and adapt accordingly. All processes, from the kernel to userland services to executables and threads, would just be functions with the minimum permissions necessary to do their jobs. Most importantly, usage would be reversed: users would write macros in human language rather than figure out how to do a task by hand from a locked-down sandbox. All this code would be shared out to the world through some kind of hybrid of Git and BitTorrent, so that the solution to a given declarative programming task would almost always already exist. And all of it would be constantly evolving with genetic algorithms and other software agents.

The tablet may as well not exist, because with the world’s computing power at your fingertips, why is your dumb terminal any better than another except for eye candy? It’s reminiscent of idol worship. Kind of gives me the heebie jeebies actually. And tragically moves us further away from technoliteracy with each passing moment.


(2011)


In the early 1990s, Visual Works Smalltalk became the language that introduced enterprises to object-oriented programming.

At FPL, the large Florida electric utility where I worked, we went all-in, received training, and, with pricey consultants, built several client-server applications using GemStone as the Smalltalk server.

What most developers loved about Smalltalk was the liveness of the system, in that a just-in-time compiler made it possible to develop in a debugging environment. You changed a line in a method, and immediately you could interact with the revised app.

In 1995, Sun Microsystems released the Java whitepapers. When I saw those I thought: Java is Smalltalk with C syntax, but without the image-derived liveness.

FPL switched from Smalltalk to Java, gaining all the benefits of reusable code and object-oriented libraries.

I miss Smalltalk syntax, in particular named method arguments, and to this day comment my Java code to compensate like this...

    sendMessage(Message.notUnderstoodMessage(
            message, // receivedMessage
            this)); // skill
where the method parameter names appear as comments if the calling method does not have local variables of the same name.

A particular feature of the Visual Works Smalltalk implementation was a concept known as "Save to Perm" which, after a thorough garbage collection, moved the remaining long-lived objects, e.g. nearly the whole language runtime, to a memory buffer thereafter free of garbage collection.

Even after two decades, no Java implementation has this feature without resorting to off-heap storage.

IBM got on board the object-oriented development wave in the early 1990s and introduced Visual Age Smalltalk. They bought a Smalltalk version-control product, Envy, and created the development environment that has over the years evolved into the famous Eclipse (IBM's Eclipse "blocks out" Sun Microsystems' Java).


I make an app at work using a JavaScript framework called Cappuccino. It transpiles an Objective-C-like language to JS, allowing Cocoa developers to build web apps.

Initially it was very tedious to develop apps, as you'd have to refresh the page each time, and the transpilation times began to add up. I read about Smalltalk and realized that Objective-C is very close to Smalltalk, and hence that I could implement hot reloads in a similar way.

I proceeded to make a watcher process in Node.js and had a process in the page listen via websockets. Whenever a file changed, I'd just fetch the file and execute it again, replacing it in the central class repository. All existing objects would instantly have their implementations changed.

It's much more fun to develop this way, but it's also easier to make a mess. Still, it has sped me up enormously, and I end up missing it when I code in other object-oriented languages.


I got to use Visual Works at university in 1995; then, as you say, Java came out and Smalltalk became history.


Yes, I read somewhere it was supposed to be an Eclipse of the Sun :)

Not sure if that's apocryphal or not.


The Smalltalk environment, while technically impressive, had very low usability for beginners. Just download Squeak and try to use it! It was basically academic in every sense of the word.

Or put another way: yes, Smalltalk addressed programmability, but it did not address usability.


Squeak is not representative of the original Smalltalk.


You mean the original Smalltalk was more user-friendly?


Is this article just not loading for anyone else? I have all scripts running and everything else the webpage wants, and there's no content there. Just the loading icon.


Works OK for me - Chrome on Ubuntu.


Modern Blogspot strikes again.


Always important to establish credibility by getting the name of the company you're writing about wrong in the title.



