For the specific case of traversing objects (well, let's say data types), I think Haskell's lens library is the ultimate modern example. In fact, I think it basically covers everything he asked for in that section and more. It's quite a large library, for better or worse, so it really is complete.
People have called it a "jQuery for data types", and that isn't such a bad description. It allows you to uniformly interact with nested records (similar to objects), collections and variants (values chosen from a known set of possibilities--sum types).
It's systematic, modular and surprisingly powerful. Of course, like many powerful abstractions--including Haskell itself--it takes a bit of up-front effort to learn, but it's more than worth it.
Which is also a bit closer to the nature of `>=>` chaining. I actually really like the `_Just` descent, since it makes some of the failure modes for this lens very explicit.
And it should also be said that the lens has setter capabilities that the `>=>` chain does not.
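To make that concrete, here's a minimal sketch (made-up `Company`/`Employee` types, nothing from the thread; requires the `lens` package) showing the `>=>` chain next to the equivalent optic with `_Just`, plus the setter capability the chain lacks:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad ((>=>))

-- Hypothetical types, purely for illustration.
data Employee = Employee { _ename :: String, _bonus :: Maybe Int } deriving Show
data Company  = Company  { _boss  :: Maybe Employee }              deriving Show
makeLenses ''Employee
makeLenses ''Company

-- Kleisli chaining with (>=>): read-only, with each step's possible failure
-- explicit in the Maybe.
bossBonusK :: Company -> Maybe Int
bossBonusK = _boss >=> _bonus

-- The same descent as an optic, stepping inside the Maybes with _Just.
bossBonusL :: Company -> Maybe Int
bossBonusL c = c ^? boss . _Just . bonus . _Just

-- Unlike the (>=>) chain, the optic is also a setter: bump the bonus when the
-- whole path exists, otherwise return the company unchanged.
bumpBonus :: Company -> Company
bumpBonus = boss . _Just . bonus . _Just %~ (+100)
```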
I'm of the opinion that the way we interact with code is flawed. This author touched on it when discussing text dumps. I agree with the author that little time is spent modifying large modules of code (something NoFlo[1] is trying its hand at).
However, I think these text dumps we work with should be organized differently. As it stands, we interact with code on a file-by-file basis. But any programmer knows that the execution path of a given program is rarely going to be contained within one file, and it's not going to execute from the top of the file to the bottom. Execution jumps from file to file, and programmers spend much of their time maintaining a mental model of these jumps.
I think this is the wrong way to program. I think execution path should be more easily accessible to a programmer, and they shouldn't have to navigate through function calls one "Go To Definition" at a time. This topic has become a fourth-year design project for me and a few peers, and we're trying to play with different representations of code on a function-by-function basis.
We've since integrated it into Visual Studio so as to retain Intellisense, syntax highlighting and existing plugin functionality. We're using Roslyn to gain insights into semantic information about the code.
For this specific aspect of the problem, you'll probably find some inspiration in Code Bubbles, Debugger Canvas, and Light Table. I implemented a prototype of the first of those in VS before I left MSFT; it later became the basis for the Debugger Canvas, and then Light Table happened.
I remember seeing Code Bubbles when you released it and thinking it looked much more promising than the flow-based approaches to abstracting away code files that we usually see. NoFlo and the like tend to ignore the fact that:
1) The approach has been tried countless times before and failed.
2) Programmers don't need code abstracted away. Text is great for telling a machine what to do. The real pain point is in linking pieces of relevant text, something that OOP does pretty successfully, but still leaves some room for improvement.
3) Nobody has 25 feet of screen space to see an entire program on, and scrolling around over giant flows is a huge PITA.
4) Attempts to solve the screen real estate issues through collapsible nesting of flow paths usually lead to the same incomprehensible rat's nests of logic that regular old code does.
5) Business people don't want to code, despite their fantasies of kicking all the expensive programmers to the curb. Computers usually do exactly what you tell them to, and talented programmers are different from your typical business people in that they have a knack for taking high-level requirements and turning them into highly complex, low-level implementations. Someone who has no interest in this sort of work will never be any good at it, IMHO.
6) Building flows requires 99% of the implementation to already be complete. Stringing together prebuilt modules with if/thens is not that difficult or unreadable in textual code anyway.
Regarding (3), I think if you need 25 feet of screen real estate in a graphical environment, then your code is probably not organized well, regardless of whether it's textual or graphical.
Regarding (1), I think we need to understand why it's been tried and why it's failed. Knowing that it's been tried before and failed should be enough to make you skeptical, but it shouldn't be enough to outright reject something unless it's copying an old idea wholesale. That said, I think NoFlo specifically has very few new or interesting ideas.
I actually saw the Codeporium .dlls kicking around inside the DebuggerCanvas package. Most of the Visual Studio interfaces lack a lot of formal MSDN documentation, which has forced me to do some novel research haha.
Edit: Oh, you are Chris Granger haha. I'd also like to thank you for the Visual Studio 2010 editor tutorials, while I have the chance.
Light Table looks fantastic, and I hope it inspires a change within other IDEs as well.
I hate to say it, but that looks like my Smalltalk browser. Smalltalk has various built-in code browsers that give you different views of the code; your plugin behaves like one of them, the chasing browser.
But I loved the visualisation space imagined/implemented for Excel calculation paths in this recent submission. I wonder whether a similar idea (i.e. for the viz, not replacing an editor) would have mileage here, or would it just be so complex as to be indecipherable?
That's very similar to the effect we're trying to achieve. I was happy to see it was received well by the HN community. I think the same approach would work well in an IDE.
Anyone who thinks we're using programming technology of the 70's probably wasn't programming in the 70's. Today it's routine to create systems in a handful of days that would have taken a larger team a much greater amount of time to create in the 70's, 80's, 90's, or even 00's. In fact, for years now, power users have been building certain kinds of applications with spreadsheets and similar tools-- without programmer intervention-- jobs that were previously handled by systems analysts and development teams.
Interfacing different kinds of computer systems together today is orders of magnitude simpler than it was even in the 90's, much less the 70's.
Sure, we're not all talking to our computers while they program themselves and driving around in flying cars. But software development technology has come a long, long way. Users are tons more empowered to do their own work now than they used to be. And the amount of knowledge and training needed to make a developer effective today is nothing like it used to be.
C and Unix were invented in the 70's. Yes, things have improved a lot in the meantime, but underneath, it seems like a lot of the really basic paradigms are the same. We're mostly typing imperative code into a plain text editor, saved in a disk file in a tree-shaped filesystem, that probably runs on a Unix-like system. Maybe Windows, but in the space of all possible operating systems/environments, they're not that far apart.
I'm starting to think that the reason we keep building tree-like data structures is because that's how our minds work. We can intuitively grasp hierarchies of categories (a tree structure), with the occasional exception (a symlink to another location).
Because we intuitively see the world in this way, we are good at building and maintaining systems that work in this way.
Anything more sophisticated than that (algebra, relational databases) requires a lot of study and practice to get good at.
> [Maybe] the reason we keep building tree-like data structures is because that's how our minds work.
No. That's how physical space works. When you're running a library, and you need your users to be able to find the books, you don't have a choice: any given book must be on one shelf, in one aisle, in one room. And bam, you have a tree hierarchy that is three levels deep.
But our minds are better at dealing with tags. Just see how popular they are in blogs. I bet that a tag based file system would be vastly better at storing personal data than a hierarchical one.
For me, a basic tag-based file system needs several things: a unique ID for files (a strong cryptographic hash of the contents is probably best), a name for each file, and a list of tags for each file. And of course, a number of logical volumes on which the files are "located". Where a directory structure is actually needed, files could have special names, like "/os/glad/file_42" (that's a name, not a path).
A variation on this theme would be to do away with explicit names altogether, and only use tags. The "name" of the file would just need to be reasonably unique. That one is probably best.
Now, when you search for a file, you just query for tags. This can also be done through the shell; it's just that those two commands would have the exact same effect:
cd /foo/bar
cd /bar/foo
As for the volumes, you need them to transfer files between your USB key and your computer.
Oh, thinking of using cryptographic hashes to name files… When you modify the file, the ID changes as well… that's a new file! By default, the old version should still be around. Imagine that: Git for the masses.
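For what it's worth, here's a toy Haskell model of that design (invented types, with a plain string standing in for the cryptographic content hash), just to show that a tag query is order-independent, the way the two `cd` commands above suggest:

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set        as Set

type FileId = String              -- stand-in for a cryptographic content hash
type Tag    = String

data FileMeta = FileMeta
  { fileName :: String            -- a name, not a path
  , fileTags :: Set.Set Tag
  } deriving Show

type Store = Map.Map FileId FileMeta

-- Add a file under its content-derived ID. Changing the contents would give
-- a new ID, so the old version stays around, as described above.
addFile :: FileId -> String -> [Tag] -> Store -> Store
addFile fid name tags = Map.insert fid (FileMeta name (Set.fromList tags))

-- "cd /foo/bar" and "cd /bar/foo" are the same query: every file carrying
-- at least the requested tags, in any order.
query :: [Tag] -> Store -> [(FileId, FileMeta)]
query tags store =
  [ (fid, meta)
  | (fid, meta) <- Map.toList store
  , Set.fromList tags `Set.isSubsetOf` fileTags meta ]

demo :: [(FileId, FileMeta)]
demo = query ["bar", "foo"] store   -- same result as query ["foo", "bar"] store
  where
    store = addFile "9f2c-demo-hash" "notes.txt" ["foo", "bar", "2013"] Map.empty
```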
Having both is better. There is no reason for the navigation system to ignore groups and hierarchies; it just should also be able to deal with arbitrary tags.
Having watched people struggle with what is essentially a configurable search pane (specifically, the excellent thumbnail viewer in Windows Live Photo Viewer), there does need to be a simple default, probably one that looks just like a hierarchical file system. It just shouldn't be the only way to use the system.
We could just put all the books in one big room, just aisles and aisles of shelves, everything numbered sequentially. We could even build a library as one long hallway -- one single line of books.
But we choose to divide our buildings up into rooms. We choose to create that hierarchy.
Edit:
I suppose, though, that our minds evolved to operate in physical space. Go hunt some zebra, go climb a tree... things like that...
Even if you put the books on a single long shelf, chances are you're going to make the numbering system based on the content of the books, like say, the Dewey Decimal system. That's a hierarchy of at least two layers, right there. If you assign random identifiers, you'll probably end up creating that same hierarchy in the index instead, unless you do something more tag-based which is actually likely to be better.
When I got to learn UNIX in the form of Xenix, I was already comfortable with many GUI systems and IDEs.
Eventually I used most UNIX systems, commercial and free versions thereof, including the tools most HN folks love. However I am a mouse/keyboard person and love my IDEs.
Every time I look at someone using plain vim/emacs with a few text terminals open, I see someone working in a UNIX System V system.
Programming is futuristic in a 1950s sort of way. There's a classic Astounding Magazine cover from the era of a guy aggressively boarding a spaceship or something with a ray gun in his hand and a slide rule in his teeth.
In other words, it's not futuristic because we're not terribly good at imagining the future.
The guts of modern computing is based on the fundamental logical architecture of Von Neumann and others, work of the 1940s and 1950s. We may be building things in software that couldn't have been imagined back then, but we're building it on structures designed back then.
Look at the iPad and modern tablet computers, in many ways the pinnacle of the modern computer movement. Then look up Alan Kay's Dynabook, which he invented (conceptually) in 1968! We're just now catching up with his vision from 45 years ago.
I recently had a need for an algorithm that did some pretty complex geographical data interpolation that was faster than the existing implementations. The modern vendor library I was using took about a minute to compute.
I searched through some old archives and found a scanned-in PDF of the algorithm I wanted to implement, from a late-60s Fortran implementation. I converted it to node.js and presto! The thing takes the exact same inputs, gives the exact same outputs, and does it all in... 35 milliseconds.
With the complexity of the algorithm and size of the dataset I am pretty sure it would have taken a mainframe the size of a warehouse and weeks to compute back then, but it showed me that the underlying data structures and mathematics have not changed in the least.
I have always tried to take it to heart that "we stand on the shoulders of giants" and all that, but this was really a turning point for me over the last few weeks, that many of the greats from the past would make us all look like morons if they were around today.
There are implementations of this algorithm in C, C++, Java, Python, Javascript, and probably others, but they are all at least an order of magnitude slower than the one that came out back then, given the same modern hardware, even in a language that is not exactly known for its computational prowess.
Tablet computers the pinnacle? They're fairly generic from a conceptual point of view. Pretty much the straightforward logical evolution of computing, as size gets smaller and computational power gets higher.
Please don't mention the iPad as a pinnacle of anything. It is not. It is merely a rehash of old technologies packed into a slick exterior and heavily marketed. It is more a tool of social status than a utility. Compare this to the HP Compaq TC1100, which was a tablet computer released seven years before the first iPad and the specifications of which trumped those of the iPad by double.
As for being victims of a constrained Von Neumann mindset, perhaps so. On the other hand, I haven't really noticed any non-Von Neumann languages that are all that practical for human purposes. Languages like APL and FP are quite esoteric compared to the straightforward, if theoretically lacking, ALGOL model.
> Please don't mention the iPad as a pinnacle of anything. It is not. It is merely a rehash of old technologies packed into a slick exterior and heavily marketed. It is more a tool of social status than a utility. Compare this to the HP Compaq TC1100, which was a tablet computer released seven years before the first iPad and the specifications of which trumped those of the iPad by double.
The iPad is more than heavily marketed drivel; it was actually usable compared to the TC1100, and basically "realized the dream" of tablet computing when its predecessors couldn't: specifications are shit, usability is king. Engineers often don't get that.
> As for being victims of a constrained Von Neumann mindset, perhaps so. On the other hand I haven't really noticed any non-Von Neumann languages that are too practical for human purposes. Languages like APL and FP are quite esoteric...
CUDA dominates the HPC world right now; MapReduce dominates big data processing. Vector machines and data-parallel pipelines have won big in their own domains. And I'm not even going to talk about relational databases, all of which are very non-Von Neumann computing models and have been very successful.
I had the TC1100; I was able to pick it up cheap from a computer liquidator.
I had to figure out ways to undervolt the CPU so I could extend the battery. The battery typically lasted maybe 1-2 hours, but by undervolting I had gotten it to about 3. It was incredibly bulky and heavy, and it had no actual touch ability. It would heat up quite a bit.
Now, the one good thing about it was that it had a very accurate Wacom digitizer, and man, could it write notes. Shame there was never a good app for it (OneNote, maybe).
I think Microsoft took a cue from the way the TC1100 provides a sturdy keyboard (not a shitty Surface RT one) that you can connect to -- something I think the Surface is doing.
So overall, not a usable device. My only question is why someone like Steve Jobs, who was around at the same time, never thought to release an iPad back then. Did he think that the tech just wasn't ready, and hence not worth spoiling the user experience?
The iPad is the pinnacle of the marketing job of making people forget everything that came before :)
But, seriously, Haskell, for example, is quite practical (better than the imperative languages) for a big set of applications. One problem is that people want a language to rule them all - and that language can only be imperative; the other problem is that people don't want to learn different things, and a new paradigm is as different as you can get in programming. That makes everything else unpopular.
People say this a lot, but I honestly don't think it does. Code written using monads, whether the IO monad or a custom monad stack provided by a web framework, tends not to play too nicely with other Haskell code. It kind of demands everything be on its own terms and isn't as simple, clear, and transparent as the best Haskell code can be.
Don't get me wrong, I love Haskell, but I don't think it's very good at imperative programming, and I think there are much better solutions to be found to problems such as I/O.
I would say that the iPad is the pinnacle of "not having to think about anything".
Battery life is solid, unlike the tablets before (the figure I found for the TC1100 is 2-3h), the operating system is built for touch unlike Windows (or OS X). There are just no quirks to it that have to be explained by historical or technical circumstances. It just is. That's probably why it is so frustratingly non-futuristic :)
I think one of the easy mistakes is to think of the future as a mere progression of the past. This question of futuristic programming inevitably comes back to transcending current limitations of computers. But those limitations haven't really changed in the last thirty years (or perhaps longer). The same fundamental problems exist in all times and eras.
Perhaps these are the wrong limitations. Perhaps the limitation that really matters is how we think about programming. The discussion of sequential vs. parallel programming comes to mind.
But it is one thing to think about a program as a single thread following nice, clean flowcharts, and another to see it as multiple threads woven or knitted together in complex ways. This isn't necessarily parallel programming, but it can range from things like Aspect-Oriented Programming to moving to asynchronous parallel programming and distributed systems.
But if we recognize this, then maybe the solutions of the past, as inadequate as they were, were striking out in useful directions. Thus I am a fan of a sort of back-to-the-future approach, where we look sympathetically on discarded solutions for inspiration for future problems.
Love the quote "Phrasing problems in solvable terms is more effort than solving them" as to why Prolog didn't take off.
I'm surprised, however, given the talk about ASTs, that the article mentions nothing about Lisps and the notion of a language being homoiconic. Lots of really awesome stuff is happening in Clojure that really does feel like the future (or the awesome past reloaded).
Right, any lisp is basically a readable (and with macros dynamically editable) AST. The author even suggests XML as a possible format, which is significantly more difficult to read and edit than simple s-expressions.
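As a rough illustration of that point (a toy model, not a real Lisp reader): once the program is just a small data type, a "macro" is an ordinary function that pattern-matches and rewrites the tree.

```haskell
-- Toy s-expression type: the "readable AST" is just ordinary data.
data SExpr = Atom String | List [SExpr] deriving Show

-- A "macro" is then a plain function over that data, e.g. rewriting
-- (when c body...) into (if c (progn body...) nil), applied recursively.
expandWhen :: SExpr -> SExpr
expandWhen (List (Atom "when" : cond : body)) =
  List [Atom "if", expandWhen cond, List (Atom "progn" : map expandWhen body), Atom "nil"]
expandWhen (List xs) = List (map expandWhen xs)
expandWhen e         = e
```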
Or Forth. One of the powerful abstractions there is that you can only write a program using the vocabulary offered by its semantics: a simple core allows more complex constructions, which in turn allow solutions to a problem to be expressed in the same syntax.
For me though this really comes together as a knotty problem in hardware description languages. They are also a great place to start on the problem because hardware is pretty easily specified as a 'problem'.
I think the explanation for this is relatively simple:
The obvious is that the low hanging fruit has been picked, so of course the rate of change is going to decrease as the problems get larger and harder to solve.
The less obvious is the massive influx of users for these languages/tools. There is an unavoidable regression to the mean related to ability/willingness to expend effort that comes from that influx.
In the previous era if you were involved in any way with programming you were, by definition, interested in pushing the boundaries. Now the majority are actively resistant to change like humans are with most things. This is their job, and changing how things work threatens their livelihood.
So why is the subset of people who are willing and able to do these things not doing them? I say some of them are doing it, but not enough to reach a critical mass (at least not a critical mass that would lead to the kinds of widespread improvements being discussed).
Obviously this is affected by the subculture I'm in but the most common reason I've encountered why these new ideas don't get traction is because it's considered a waste of time. Learning a new way of solving problems is time not spent spamming other social networks trying to increase signups to your social network. Growth Hacking is "getting shit done!", using some new technique that requires learning something new in order to be more productive is considered Ivory Tower bullshit.
Both Clojure and Haskell are communities I follow where there is real, interesting, futuristic work being done right now. But Node.js, with its basis in callbacks, is way more popular -- even though everyone in the 70's agreed that continuation-passing style was a usability nightmare for programmers and put a strict limit on the complexity a human using it could handle.
Looking at it another way: Programming is futuristic, it's just that most people are too lazy to bother and just stick with the older stuff that works the way they are comfortable with, then human nature kicks in and labels everything they don't do as "stupid waste of time".
Node.js is just full of Worse Is Better. I think there's a danger for more pure/forward-looking environments to look down on that, because there really is some Better in it – I think the work done to get us forward is going to have to meld Worse Is Better with Actually Better.
When I first encountered computer programming, we were still using line numbers. Now, about 20 years later, we've got Python and Ruby and Haskell and ubiquitous GC and… so many frameworks.
Programming is more futuristic than ever, and the tools available are continuing to evolve.
If you discard the idea of working with objects, and instead work with data structures directly, you get generic querying and traversal for free.
A lot of these points look like they're solved by Clojure, but Clojure is my hammer, so perhaps it's cognitive bias that these look like Clojure-shaped nails.
Right. Actually, a lot of the issues Bicking points out were handled by Lisps years ago. And you don't have to give up on objects per se: objects in Clojure (with ad-hoc typing) can be treated as maps, and darned near anything is seq-able in that Lisp dialect! Having s-expressions and macros lets you traverse and recombine things you could only imagine doing in Python. And don't even get me started on the power of laziness in Clojure.
Don't get me wrong: Python pays the bills these days. And that language is an able workhorse. But boy do I miss Clojure. The reason I don't use it at work is that I work in Android, iOS and Google App Engine most of the time. Which means Java, Python and Objective-C. The code world has infantilized away from Lisp too far.
Yeah, we're stuck with plaintext currently. This is because every time someone tries to write a better non-textual language, they fall into the trap of writing a cute toy or flowcharty bullshit--not a robust tool that scales to real codebases. And you can point to Smalltalk and Lisp machines all day, but they have to win, not be mere curiosities that lost gracefully. How do you write a structured code editor that wins against text? It's really, really difficult.
I really, really want to fix this--write a real post-plaintext programming environment. I've tried and failed so far (although at least I figured out the data structures) and I have my own life to live, issues to deal with, etc. All hope is not lost though. Rust is shaping up to be the exact foundation the next generation of programming environments needs. We'll get out of this plaintext plateau soon.
Re: "Better, more accessible ASTs," this is the aim of Steve Yegge's Grok Project.
The author never explains why all these features are desirable.
Safe object traversal? Sure, reflection is a pain, but how would an improvement in this area bring us into the "future"? Also, I may have misunderstood, but isn't Lisp's object traversal about as safe as you can get?
More/better ASTs? I think he was trying to say "compilers should have APIs for third-party tools". Otherwise I don't see why it matters whether the compiler builds an AST or not.
"Direct manipulation of data" as coding? First of all, what does this even mean, and how is it better than the current standard of ASCII files? The author dismisses graphical programming languages as being too heavily focused on "symbolic manipulation". I'm guessing he's talking about LabVIEW, which is admittedly horrendous. But there are others. Blender's node-based shader system[1] fulfills many of these seemingly arbitrary wish-list items. It's easily accessible via Blender's Python API, it has a clearly visible AST, it's deterministic and massively parallel...
In short, I'm confused and unconvinced that the future envisioned in this article is better than what we have now.
Safe object traversal: if you can traverse objects you can start to create search algorithms (which is what Prolog's solver really is) that work across different domains and objects. I think this is a path towards goal-oriented programming (see the sketch below).
ASTs: by representing the program in a more abstracted way, we can start to build tools that manipulate them in ways other than ASCII editors.
Direct manipulation of data: well, Bret brought this one up, not me. I thought it was a little odd. While spatial representation of code was about stretching our ASCII into something else, I chose to interpret this as stretching our concrete editing tools towards code. In other words, how do you add abstractions to concrete editing tools. I think the simplest task would be the one to start with: how do you parametrize something in a concrete data editor?
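Here's the sketch promised above for the traversal point: a made-up universal `Value` type (nothing from the post) that any domain's objects could be reflected into, plus one search that then works across all of them.

```haskell
-- A made-up universal value type; any domain's objects could be reflected
-- into it (think JSON-ish data).
data Value
  = VInt Int
  | VStr String
  | VList [Value]
  | VRecord [(String, Value)]
  deriving Show

-- Enumerate every reachable sub-value. Purely a read, so it is "safe" in
-- the sense above: no getters that mutate.
universe :: Value -> [Value]
universe v = v : case v of
  VList xs   -> concatMap universe xs
  VRecord fs -> concatMap (universe . snd) fs
  _          -> []

-- One goal-directed search that works for any domain mapped into Value,
-- e.g. search isStr doc finds every string field anywhere in a structure.
search :: (Value -> Bool) -> Value -> [Value]
search goal = filter goal . universe

isStr :: Value -> Bool
isStr (VStr _) = True
isStr _        = False
```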
A modern IDE already knows the AST it's working with in intimate detail - it has to, so that it can refactor it. But it's still displayed as (mostly) plain text, because that's still the best format for humans to read code (or indeed anything else) in.
I'm doing some serious development right now for the first time in 12 years, and it's way more fun and much easier to work with a geographically dispersed team than it ever was before. It may not be hand-wavy object magic.. but let's be real about this: we're not a future-oriented culture anymore. We're all about the present. Lots has been written about this elsewhere.
The secret to creating advanced tools is to minimize the magic. Crazy powerful tools still require the expert user to have a predictive mental model of what is really going on so that accurate decisions can be made and valuable experiences can be accumulated on the learning curve.
I would say, though, that the programming we do is by its very nature futuristic. So few humans have been able to interact with information the way programmers today do. The tool churn and huge volume of ideas and trials that we're going through will, eventually, become a mainstream way of working with ideas. It may take a long time.
The other thing is.. a hammer may seem like a simple and obvious tool, but there is still a huge gap between an expert and a novice when it comes to pounding nails. You still need a lot of practice.
I dunno, I think programming is pretty futuristic. We've got live coding, futuristic platforms, fast dynamic languages, etc... Of course, the most futuristic platform is right under people's noses, the browser. It does things that old Lisp programmers imagined, like live coding, even 3D games, with sound, and even Kinect-like motion control.
And of course there's plenty more that can be done. People tout an IDE like Light Table, which of course uses web technology to accomplish everything special it does.
Edit - and as ugly as JavaScript is, it's pretty powerful. It has all the power of Lisp/Scheme, beaten with a C-flavoured ugly stick for a while, enhanced with advanced runtimes (V8), and transformed by transpilers/compilers; it's even used to represent bytecode (asm.js) and to emulate other platforms (http://fir.sh/projects/jsnes/ and http://copy.sh/v24/)
Personally, I prefer just seeing raw text on a screen, without a fancy IDE, just a plain vi session. If it is my own code base, then I already know what is where, and can navigate / update very efficiently. If however I am working on someone else's code base, then it is necessary to study it for some time before diving in. What can really help in this case is a "code map" -- a document that gives a tutorial introduction and reference to the code, what is where, and why, etc. Included in the code map would be both program flow, and data flow (what data goes in, how it is transformed, and where it ends up). So what is needed is tools to help maintain that code map document so it doesn't get out of date.
Programming is futuristic. It's just that we can always imagine a better future -- and that's what makes programming so awesome. It enables us to build that future.
But if you compare Ruby or Clojure to Forth or COBOL, there's no way you can't see how far we've come. And there's no end of improvement in sight.
Comparing, and... nope. They were both developed within a few years of each other, and are from different paradigms, so regression seems like a cheap dig, but not an accurate one.
When I have a really computationally complex thing I need to write, I first write it in Clojure (or some other Lisp, but Clojure is closest at hand these days) so that I understand it fully. It's just simpler: I don't even have to think about syntax in that language. To me, this is the first requirement of what Bret Victor is talking about. Python and Ruby pale by comparison in expressiveness. Because there is no "syntax" that will beat pure data. Then I go and write it in whatever language I'm "supposed" to write it for work. It's kinda shameful to me that this is where I'm at.
I've wondered if it would take a 'non-programmer' to develop a new metaphor/representation of programming that is more futuristic. The OP has some interesting suggestions of characteristics that 'futuristic programming' would have, including safe object traversal, ubiquitous object extensions, and code transport, but these are still conceived within the paradigm that we think of programming today. Maybe programming needs an outsider to help us start over conceptually?
If so, we are doomed. The set of non-programmers that can competently think about formal rules is getting smaller by the day. And it'll only decrease as long as computation remains relevant.
There is a catch-22 here in that in order to come up with new ideas, you need to be fairly well-versed in what kinds of things computers can do but can't have been indoctrinated into assuming programming has to work a certain way.
I'm saying in order to come up with new types of programming, you need to already know about how programming works currently, which limits your thinking.
It's slow, but I think it's happening. Co-routines are starting to become popular 35 years after Icon had something similar. That could be seen as a step on the road to integrated search (which needs some kind of idea of sequences of results).
[Incidentally, I wrote a recursive-descent combinator lib that worked on generic sequences, in Python. AFAIK no-one ever used it for anything but strings -- not even for parsing binary data in comms protocols. I extended it to a regexp engine; it was hopelessly slow. One reason regexps work so well is that they're so efficiently implemented on simple sequences of bytes. The overhead of "irregular" sequences is quite something - perhaps JITs will help here (although PyPy didn't help me)...]
This is a sad discussion -- both the essay and here. Nobody has bothered to look back in history.
"You need to be able to inspect and traverse objects, all objects."
Smalltalk? Lisp Machines?
"There also must be a culture where proper extensions are regularly provided on objects. Powerful tools are built on powerful paradigms, and enabling a paradigm isn’t the same thing as actually implementing it across a fully developed programming environment."
The MOP?
Are we doomed to keep on asking these questions again and again? An IT curriculum should include at least one unit where students are required to study the history of these things and write an essay.
Lisp and Smalltalk are not on the standard curriculum at most US schools. I know that GT used to offer an OO course (speaking of abandoning history, you can't view old course webpages past 2008, and new ones are apparently hidden behind a students-only portal) which used Squeak; it was a co-requisite of Software Engineering (circa 2003 -- I don't know how it's changed since, because I can't get to the pages). I wasn't introduced to Lisp until I took a special-topics course that used Common Lisp for AI in grad school. OK, technically I used Scheme before then in a survey-of-programming-languages course, but two weeks of exposure hardly counts. For various reasons I left GT about that time and finished up elsewhere. I do know that in 2002 (2001?) they introduced Scheme for CS 1301 (or whatever the number changed to), but dropped it for Python later (there was a cheating scandal that year, with something like 200 students caught in CS 1 and CS 2).
Speaking of, that PL course was probably the closest to a CS history course that I ever took. The only other courses that came close were ones where I deliberately sought out papers on algorithms (AI, graphics) that had been developed in the 70s/80s relevant to projects I was working on. Other students just brute forced their solutions taking advantage of the much faster hardware available at the time.
I took an AI course at uni so that I could see some Lisp in action (and learn AI), but the bastard lecturers (2 of them) changed it to ... JAVA! While still using the Norvig textbook that had the algorithms in Lisp-y pseudocode. And with the lead lecturer being a Vietnamese PhD who couldn't speak enough English to explain the most basic concept. 2/3 of the class flunked.
Way to go on your initiative to look for papers and context around your other courses! I bet the perspective it gave you has helped.
Smalltalk, what of it? I've never used a Lisp Machine, but I did go through a Smalltalk phase. Smalltalk doesn't distinguish between safe and unsafe method calls, for instance. So sure, you can safely get the underlying instance variables of an object, just like Python's obj.__dict__. But you can't traverse into external structures, and you can't use the standard getter methods, since you don't know what's a getter and what causes modifications.
And the meta-object protocol is exactly what I'm talking about: extensibility without ubiquitous useful patterns. The MOP might (maybe) be helpful in implementing a prototype, but it wouldn't by itself create a system where goal-oriented traversal and search across an application was possible.
Assuming your concern is across a single/uniform language:
How about Smalltalk as a basis or the foundation of what you want? If somebody pursues the approach, how hard would it be to develop a layer that takes care of the safe-unsafe issue?
MOP: extensibility without ubiquitous useful patterns.
My experience says that "ubiquitous patterns" manifest when you use a technology against real problems. So use the MOP to tackle several problems and then watch the patterns emerge. If 1/25th of the raw brain power that has been applied to abominations like PHP had been diverted to using Lisp and exploring the meta patterns in there... my gut says your blog post would have been about some cool framework you wrote to solve the problem ;)
On the other hand, if your concern crosses the single language barrier (you mention LINQ) then what sits at the top of Object Model A and Object Model B? It is some Meta Model or a specific library framework that understands both. Back to the meta again, back to all those leaves on a branch of history left unexplored.
What do we have instead? Brainpower, time, money and youth/a generation being devoted to the "hip hop VM" at Facebook.
Given your prolific programming output (thank you for pip and virtualenv, big admirer) it is no wonder that you ponder these issues and wish to pursue something higher up. What I was trying to get at is that, respectfully, you are not the first and that there might be some lessons in the history of our profession that we could learn from and use as a basis for progress.
Regarding ZeroVM, I am a bit confused as to what exactly "moving apps to data" means. If I have a 20 GB log file to process living on a filesystem, how do I "move" my Python processor script to it?
"How about Smalltalk as a basis or the foundation of what you want? If somebody pursues the approach, how hard would it be to develop a layer that takes care of the safe-unsafe issue?"
I don't feel like Smalltalk is going anywhere, so I don't know. Smalltalk has some great lessons, but I don't think well defined data access is one of them. Haskell, as mentioned earlier in this thread, has some pretty strong guarantees. Probably too strong ;)
I think something like the simple Scheme/Ruby convention that mutating methods have a `!` would go a long way. You'd need more than a casual convention though; it would have to be a real promise. Though maybe if you had search agents rooting around in any method without a `!`, you'd see people apply that convention more thoroughly ;)
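As a tiny illustration (with an invented `Account` type) of what that "real promise" looks like when a compiler enforces it instead of a naming convention -- in Haskell, anything whose type has no `IO` in it simply cannot mutate:

```haskell
import Data.IORef

-- Hypothetical Account type, invented for this example.
data Account = Account { owner :: String, balanceRef :: IORef Int }

-- Safe for a search agent to call: the type guarantees it has no effects.
ownerName :: Account -> String
ownerName = owner

-- The moral equivalent of a "!" method, except the promise is checked by
-- the compiler: mutation is only possible inside IO.
deposit :: Int -> Account -> IO ()
deposit amount acct = modifyIORef' (balanceRef acct) (+ amount)
```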
"On the other hand, if your concern crosses the single language barrier (you mention LINQ) then what sits at the top of Object Model A and Object Model B?"
FFI. That is, in any language you have an interface to access external things, so if that interface also includes information about the thing you are accessing then you have an opportunity to traverse into other systems. That's where the extensibility comes into play: you can't typically enumerate an external resource efficiently, but with appropriate interfaces maybe you could route the more high-level goal to that external resource.
That's a little handwavy, but...
'Regarding ZeroVM, I am a bit confused as to what exactly "moving apps to data means".'
I'd say two things. First, in the cloud context you can move a routine to another server. For instance, consider `SELECT user FROM users WHERE confirmed_not_spammer(user)`: let's say confirmed_not_spammer() is not a database function; it's something pretty complex that you've implemented outside of the database context. Given portable processes it's at least imaginable that you could send just that routine to the database server, and it could call out to that function.
Actually that's both things! First, you send the routine to another computer; second, you can have inversion of control: instead of sending a fully-baked query to another process, you leave open the possibility that the other process calls back as part of the process of finishing the query.
This is all also very hand-wavy. I expect there are still massive amounts of tooling necessary to fulfill the promise of ZeroVM.
"Given your prolific programming output (thank you for pip and virtualenv, big admirer) it is no wonder that you ponder these issues and wish to pursue something higher up. What I was trying to get at is that, respectfully, you are not the first and that there might be some lessons in the history of our profession that we could learn from and use as a basis for progress."
There are lots of corners of computing I don't know about, and I'm interested in hearing about them. But in these particular cases I am aware of the history. I think it's a little too easy to say "hey, someone already did this!" when really you mean "hey, someone made what you describe possible!" – but everything is possible, that's what Turing Complete means. Creating a viable and rich environment in which productive work can happen is different than creating a context in which you could create that environment.
Maybe someone, using Smalltalk, created a general object search and reasoning algorithm. I would in fact not be surprised if that did exist at one time, and I would be interested in learning about it. Similarly in Common Lisp, or even today in Clojure. Someone has even noted an example in Haskell. But the specific product is of interest to me, not just the environment that spawned it.
"massive amounts of tooling necessary to fulfill the promise of ZeroVM"
For sure. To me it seems like a good environment for isolating processes and users in security and execution contexts -- just boot up an instance, perform a task and then destroy. How it applies to data processing is not very clear.
"But the specific product is of interest to me, not just the environment that spawned it."
I see what you mean. I feel that anyone creating such a product/environment will run into massive problems that are very non-technical. How do you get language owners/implementers to agree to a common standard, especially in today's web, where we see common denominators of data sharing like RSS slowly being strangled by the big players -- let alone cooperation on programming environments/runtimes? The reason I mention the "big players" is that without somebody getting funded to think about this in a serious manner for an extended period of time, it will be very hard. Organisations like universities have the brainpower for this but not the money or the drive.
Something encouraging here is the recent slew of JVM languages -- they share objects with Java. The limitation here is that it's all on a single platform.
Maybe with the recent announcement of a Haskell kernel for the IPython environment there is room for doing some inter-language interop work?