We need to have a conversation about the possible paths to make programming more widely accessible. Chris saw the problem clearly and tried to fix it. Respect. I am most impressed with the number of iterations he was able to crank out in the context of a startup. But ultimately a startup may not be the best place to start, because low-end programming is not a short-term growth play. We have to play the long game.
> because low-end programming is not a short-term growth play. We have to play the long game.
This is true, and it's one of the reasons we opted to look at the acquisition route rather than raising another round; raising another round would have attached a whole lot of expectations around growth, while we were still operating heavily in a research capacity.
Languages like Rust and Go that have nice corporate incubators are a great model, but these languages tend to serve their corporate benefactors in some direct way. It's hard to pitch a greenfield project like Eve, because there's no real great way to quantify the benefits of a language before it's been fully realized.
> It's hard to pitch a greenfield project like Eve, because there's no real great way to quantify the benefits of a language before it's been fully realized.
Plenty of languages have been developed without major corporate backing (especially in the beginning). Off the top of my head, elm, elixir, Haskell, ML, ruby, python, coffeescript, purescript, Idris, Agda, pony and I’m sure many others were developed at least initially by hobbyists and/or academics.
I honestly don’t know a whole lot about Eve. Maybe it was too large in scope to be a side project or fully volunteer-driven. But, even if that’s the case, that’s one of the main things academia is for; perhaps Chris should go for a PhD instead of a startup. I’m sure there are plenty of schools which would give him ample time and resources to work on it for research.
Each of the languages you mention is a very small delta (relatively) from our current state of the art (some more than others). It's not hard to convince someone of the benefits of such languages, because they are already anchored: e.g., X language is like Y but with types, or W language is like Z but with pure functions, etc.
Eve was very different from any language in widespread adoption, to the point where using it didn't even feel like programming at times. Therefore, it was difficult to quantify the benefits to others (especially companies) who have not experienced them first hand. With Eve, we didn't even know the full scope of supposed benefits until we started actually using the system we imagined in our heads.
The problem is moving students forward in class before mastery. I guarantee you that 95% of the problems with math come from students not having one or two concepts mastered, which leaves them forever unsure.
My strategy is to use Khan Academy's math assessment with kids all the way up through their parents. https://www.khanacademy.org/math
My two youngest have been doing it for a few years. My 1st grader has almost completed the 3rd grade material right now, and I know he has mastered all the steps before it. My 6th grader is in pre-algebra, and every time she brings home homework I know she doesn't understand what she was taught. It can take 20 minutes for her to get her head around the concept, but then she is fine. I have found Khan Academy works better once the drama level is lower, and she has started telling me she watched the Khan Academy videos and she is fine. I'll watch her do two problems, and if she nails them we move on. For me, math requires as much intervention in 4th to 6th grade as reading did from K to 3rd grade.
In the UK they have started trials of Mathematics mastery: you can't move on until you have mastered the previous step. They ran it for one year and saw a slight increase in scores. It has to run from kindergarten through 5th grade to really see a difference.
https://en.wikipedia.org/wiki/Mathematics_mastery
I'm unsure programming can be made accessible, and there's a very specific reason for this.
Simplifying the medium of something (in other words, making it so somebody could genuinely learn a programming language, from nothing, in 5 minutes) does not necessarily make task implementation any easier. Consider writing. It's something that anybody of mean intelligence is more than capable of. And writing a novel technically requires nothing more than writing, yet somehow there is an entirely different skill set involved, one that is not really taught and that many people, for whatever reason, seem incapable of developing.
Even if I could get 24/7 attention from Dostoyevsky, Stephen King, and George R.R. Martin for years on end - I'm not going to be a strong novelist. This could come down to genetics, but even if we want to discount that (which is not really a reasonable thing to do), my potential would certainly be killed off by my lack of interest. I simply don't get pleasure from writing.
Programming is very much the same thing. Even if you make the 'human interface' to programming as trivial as possible, you've barely scratched the surface of actually creating desirable things. And unless you get pleasure from this creation and have this sort of 'feel' for doing as such, it's going to be a low involvement grind with a mediocre final product.
I think the thing we're fooled by is the confounding issue that programming languages are not easy. And so it's easy to think that if we just make the languages easy, programming would thus be easy. But I see absolutely no reason to think that this is where the 'real' barrier to entry is. Making a tool easy does not make the task that tool is used for easy.
> Simplifying the medium of something does not necessarily make task implementation any easier.
Except when it does. Programming in an IDE with code completion, IntelliSense, online help and an attached debugger is easier than going all macho developer and programming exclusively in raw text with the pico editor, using println traces for debugging.
Sometimes, merely reducing the cycles that the human brain wastes on interacting with the medium makes the whole task simpler. If you don't force the user to think about building a program, they are freed to think about solving a problem.
That's comparing writing longhand to using Microsoft Word. The latter helps, but doesn't make writing novels fundamentally easier; in the same way, an IDE doesn't make programming fundamentally easier.
If by "fundamentally" you mean "devoid of any practicalities that may affect the actual task of doing it", I agree.
However, there may *also* exist tools that help with the fundamental task. In the example of writing a novel, professional writers use spreadsheets that help them keep track of the general picture, and follow the details of each character, scene and plotline. They don't help with the creativity or the artistry, but hell if they don't help the writer keep track of every detail that's important to the final result.
Yes. Programming obviously has such tools too, and in both cases their ultimate goal is to help you with all complexity that's incidental to the core of the work you're doing. Sometimes the overall reduction in complexity will allow you, as Bret Victor puts it, to "think new thoughts" - but in the end, you eventually hit the core of essential complexity of the task, and you can't be more accessible than that.
What you can do, though, is cheat with the scope. Not all writing is novels, and not all programming is large distributed systems. Writing a semi-structured limerick, or a half-assed bash script, is much easier. Ultimately, I feel that the goal of programming literacy should be to let people use computation to solve their own specific problem. That is much simpler than general programming. For instance, plenty of people program in Excel. Or in Tasker (an Android automation app). There is a lot of uncharted space in designing better interfaces and paradigms for such small-scale programming (unfortunately, this goes completely against modern UI/UX "wisdom", which is all about removing the user's agency from the equation).
> What you can do, though, is cheat with the scope. Not all writing is novels, and not all programming is large distributed systems. Writing a semi-structured limerick, or a half-assed bash script, is much easier. Ultimately, I feel that the goal of programming literacy should be to let people use computation to solve their own specific problem. That is much simpler than general programming.
There, that's where most programmers miss the picture. I'm glad that you get it, and I couldn't agree more. There's a whole discipline of End-User Development dedicated to exploring that area. General programming is like general nuclear physics; not everyone *needs* to know the details, but everyone may benefit from plugging a device into the wall and using the power, without a "priest of electricity" who creates a six-month agile project to do the wiring for them.
> There's a whole discipline of End-User Development dedicated to exploring that area.
Thanks for the name, I didn't know there was a field for that.
> General programming is like general nuclear physics; not everyone *needs* to know the details, but everyone may benefit from plugging a device into the wall and using the power, without a "priest of electricity" who creates a six-month agile project to do the wiring for them.
If you didn't know about EUD, you may enjoy the seminal works Watch What I Do [1], which is available online (mostly; some figures are missing), and Your Wish Is My Command [2]. These books are compilations of early articles which explore very interesting approaches to building software artifacts that look nothing like programming in a general language (though some of them resemble what Bret Victor is making popular nowadays).
BTW, spreadsheets are considered the most successful End-User Development tool; it's no coincidence that modern web languages increasingly resemble their reactive programming model.
> We need to have a conversation about the possible paths to make programming more widely accessible.
We've been doing this for as long as computers have existed and have made virtually no progress (pun intended) since 3GL languages. No other industry has ever tried as hard as ours has to make itself redundant. Programming is about as simple now as it will be for the foreseeable future; any simplification sacrifices versatility.
If you want to make programming more accessible then you need to work on people's abstract reasoning skills.
I strongly disagree with this. Today, it is "think to program", but the computer is right there, it can help out a lot more beyond just interpreting the program you thought hard about writing. Instead, the computer can help us think, it might not be able to do the reasoning for us, but it can help us break it up into smaller pieces.
A nice analogy is the difference between Ultron, Hawkeye, and Iron Man. Ultron represents the singularity where computers just program for us; it will come someday but it isn't very interesting to us. Hawkeye, the "super" archer, is analogous to the programmer with advanced abstract reasoning skills; he is amazing, but there just aren't going to be too many of him. Humans aren't getting much smarter in general.
Then there is Iron Man, who makes himself awesome by using technology, from his power suit to his holographic design environment and interactive voice assistant. That is the sweet spot for us.
It's actually possible that we are: https://en.wikipedia.org/wiki/Flynn_effect ; summary: "The Flynn effect is the substantial and long-sustained increase in both fluid and crystallized intelligence test scores measured in many parts of the world from roughly 1930 to the present day." (But both the reality and the interpretation of this are matters of some dispute.)
Moreover, without any computer assistance until very recently, our ability to do mathematics has advanced marvelously over the past few thousand years. We shouldn't underestimate the power of education, culture, better notation, and accumulated wisdom to advance human capacities even at purely intellectual endeavours such as math (or, for another example, chess).
Of course, none of this says that computers themselves can't help us program, which is your primary point. I just think you're being a little pessimistic about "merely human" ways of improving ourselves.
Well said. Programming is just simulating the world. If you don't understand problems well enough, we could teach you any language there is on earth and it wouldn't matter.
Accessible tools and laptops don't make programming any easier than cheap hammers, nails and glue make carpentry easy.
After a while it's just you. And that's the biggest thing.
Ultron, Hawkeye, and Iron Man are fictional characters. It’s easy to imagine an omnipotent programming AI, but we have no reason to believe any such thing could exist.
No, we don’t. Nuclear fusion is a theory; there is evidence it is possible. We can observe it by looking at the stars.
Omnipotent AI is not a theory. There’s no evidence it is possible. I have yet to even see a falsifiable hypothesis about how it could come about. The most advanced hypotheses are something along the lines of “they’re getting smarter, so eventually they’ll be infinitely smart”, which is not a strong claim.
Evidence suggests intelligence is niche specific. Humans are smart, but in the middle of the ocean a jellyfish is smarter. A better prediction about AI is they will surpass us in some ways and not others.
That's not what was suggested. Human-level intelligence is sufficient to cause a singularity (because then they can start improving their own programs).
> better prediction about AI is they will surpass us in some ways and not others.
That isn't a better prediction at all. Regardless, it is uninteresting to my original point, which is we want to live in the Iron Man, not Ultron, phase now anyways.
> Human-level intelligence is sufficient to cause a singularity (because then they can start improving their own programs)
Your assumption is that human-level intelligence is bound by offline simulation capacity.
If humans are already optimally utilizing offline simulation (in the biological world we call that imagination) then the “human level” AI will have just the same limitations as a human.
That’s the counter-proposal you have to make falsifiable for your guess to become a hypothesis:
That any human-level AI, in order to become human level, will be bound to the same interactive constraints on learning that humans are.
Think about World War II for example: were strategies limited by offline simulation capacity, or were they limited by the fact that you can only try (and therefore sample consequences for) one at a time?
I’m not making a claim here either way: I’m just saying you wave this whole debate away by saying “human intelligence plus unlimited simulation equals superhuman intelligence”
I’m not saying you’re wrong; it’s an interesting thought experiment. I’m just saying that not only is there zero evidence for that, there’s not even (to my knowledge) a robust model describing the mechanism by which it could be expected to be true.
1. Making programming accessible probably means making software more accessible.
I remember a study of what mental capacity was most correlated with being able to learn a foreign language. It was empathy. So a large part is wanting to communicate. If you want to communicate, you will find a way, and I think we see the same thing in programming. So-called "non-programmers" learn the most absurd programming languages and systems if they are motivated. Also see Minecraft. And amazing/horrifying Excel spreadsheets.
One of the issues I see with "novice environments" is that they tend to be very separate from everything else on the machine. I would find that very demotivating.
What I would love to see is "open source as if we meant it", meaning programs that do something we want to do, that we would use, written in such a way (probably also: in such a language) that tinkering/adapting is a reasonable proposition. Yes, that means I don't think that currently is the case: except for the core-dev team, is it a reasonable proposition for people who want to adapt GNUmeric to download the source code and start tinkering? For novices?
2. I am not convinced by the low-end vs. high-end distinction
I think a lot of the same things that make programming awful for beginners also make it awful for advanced programmers. We have just gotten used to the pain and accept it, though I am not sure why.
3. I am not convinced by starting over from scratch
There are reasons why we have what we have, and not all of them are bad.
4. I am not convinced by not starting over from scratch
Of the reasons why we have what we have, a lot of them are bad. A lot needs to be reexamined and rethought.
Resolving that contradiction (thesis, antithesis?) is difficult, it requires looking at what we have in a lot of detail, including the history of how we got here, where we need apply tweaks and where we can interact properly with the rich computing tapestry we have.
Just extending what we have is probably not a solution, because one of the problems is too much cruft, but just starting over from scratch is likely to lead to cool but ultimately superficial projects.
I see an analogy in calculus. People learned it because Newton gave them a reason to. Consider finding a use case where the language lets you do something not otherwise possible. If your use case is successful, people will take note of its precursors. (If you can’t, explore your language as an academic might.)
>We need to have a conversation about the possible paths to make programming more widely accessible
I assume that means you think that programming is not accessible enough as it is? Why do you think that? As far as engineering disciplines are concerned I consider software engineering one of the most accessible to complete newcomers. All you need is a computer, something that's not very hard to procure these days (at least in wealthy countries). Then you have endless tutorials online, from learning to code websites to video games to raytracers to databases to physical simulations to neural networks.
As a self-taught programmer (who started coding in BASIC at around 15) I never found programming particularly hard to approach.
As a self-taught programmer, my question to you is "Can you solve the problem space problem?" Do you understand the difference between problem space and solution space?
I know of many excellent programmers, yet they have problems solving problem space problems. They write great code, but the code is not solving a real world problem.
Many of the available programming languages (especially in the highly popular languages, like C, C++, Python, C#, etc) have too many "gotchas". Any language which has "undefined behaviour" or "implementation defined behaviour" makes itself a language for the "elites" and instead of being an adequate tool for solving problems, becomes instead a tool for elitism.
Programming languages should be a tool for specifying a solution to a problem that communicates to those who follow in the support roles (including the original author) as well as fully defining the behaviour that a senseless machine will follow.
The complexity of a solution should be predicated on the complexity of the problem and not on the "features" of the programming language involved. This is one of the reasons I enjoy programming in Icon/Unicon: failure is an option, and that allows simpler ways of saying things. The language is in no way perfect, but I find it so much better for solving the problem at hand than having to fight another language because of its "gotchas" that end up getting in the way of solving my actual problem.
I think the right answer is intermediate tools that don't even necessarily create programs, but give people reasons to use a compiler and learn that type of problem solving related to their current day to day needs.
It's an interesting question, but programming is not hard. One of the reasons that boot camps were so successful at getting people to attend is that they make people who thought they couldn't program realise that they can.
But the act of programming is not the only thing that a professional programmer needs to do. Understanding requirements, planning development, dealing with the complexity of large systems, communicating -- those are difficult. You don't need to do those things just to program. You can start your own project and build something that most professional people would call a "toy", but is still really satisfying for the average person. There is a limit to how complicated something can get if it is under a few thousand lines of code, and you are the only person working on it.
I find that most movements for simplifying programming are attacking the wrong part of the process. It's fair if you want to make programming more approachable, but it's already very simple. If your goal is to get more people able to do a good job as a professional programmer, then you should concentrate on being able to handle complexity in large systems, or being able to discover and reason about requirements in the system.
I might argue that building a bridge is harder now taking the whole process into account (just the metallurgy is vastly more complex); but they don't fall down as often. Which they often did in early railroad days thanks to hidden cracks in the metal that were both frequent and undetectable back then. But ordering a bridge to be built might be easier, now.
Specifying routines completely unambiguously will always be hard, but I do think techniques will come along to provide the equivalent of railings and safety nets while you do this; such as AI to query you about what you really want to happen, and to draw your attention to edge cases and unusual combinations of circumstances that you need to make a clear decision about.
Programming is much much easier now than it was in the 80s! It's been years since I actually had to write any code myself - computers do it all for us now. Amazing things, these "compiler" programs.
That's a very interesting way of looking at it. I always kind of looked at my preferred language's grammar as "code", but now I feel like I'm calling TLS SSL when I do that.
I wonder what else to call it. "Source code" is obvious and what everyone uses, but now I want something else to use.
But overall it's so much easier these days. Even if you're spoiled for choice, you can get the tools and documentation for free. Back then it was either the BASIC interpreter that shipped with your computer (QBASIC for me) or the assembler (debug.com) or paying big bucks to buy a compiler. I was lucky that my dad could obtain a PASCAL and a C compiler through his work.
These days you might have too much choice but back then for a kid like me, the choice was what was available in the local library. So not much choice at all.
Sure there is; that's why mastering a concept is hard and takes thousands of repetitions or hours to become masterful.
Similarly, not simply building the same old bridge (because who the hell wants bridge 1.0 when I want warp-bridge 20.0 now) requires mastery of basic bridges plus additional pioneering topics currently of interest, in which few individuals are experts.
I think this is a really great point. Many people have always dreamed of making games, and nowadays there are make-a-game kits that enable people to do just that with almost no programming knowledge necessary. But now the sorts of games capable of being made by such things are relics of the past. By the time you create tools capable of enabling anybody to do tasks that were 'average' at one time, those tasks end up being relegated to triviality, with the net effect that you're basically treading water.
And similarly, the explosive growth of the products of these make-a-game kits renders the skill almost entirely worthless. What would have been a very viable publishable product 25 years ago is now considered shovelware. I think the only way this would end is if we somehow reached a skill cap in development (you've officially made the best bridge that's at all possible!), but if such a cap exists we're certainly nowhere even remotely close to reaching it.
I disagree. It's pretty easy to, say, find a suitable log, stand it up by the edge of a stream, and topple it over. It's more about the strength required than the thinking; and if it's a struggle, we have JCBs, chainsaws, etc. to help.
For something a little more robust, hire an AVLB. For more permanence, buy something like a Bailey or Medium Girder bridge.
Of course, I'm being facetious; but my point is that "programming" covers a vast spectrum. We don't teach kids to write under the expectation that they'll be modern day Shakespeares (or build Danyang–Kunshan Grand Bridges); we do it because writing is incredibly useful, even if it's as mundane as a shopping list on the back of one's hand (or a log over a stream).
If it is easier, it could be so because there's less inventing of new procedures and more following existing procedures (that is there's less "programming").
I remember learning to program in Logo on a Commodore 64 when I was in the third grade. Even back then, I was decomposing my code into functions, playing around with recursion, etc. 30 years later, I was presenting to my son's 4th grade class, and was getting reasonable questions on CSS, github, etc.
The basic concepts behind programming just aren't that hard if they're accessible to elementary school students.
Where things do get tricky is when you start to scale upwards in terms of complexity, volume, and duration of service. But even then, it's all too common for engineering efforts to overestimate the need for those things. That then leads to architectural choices that not only make the system more difficult to work with, but also likely fails to meet the initial targets.
I disagree; programming is hard from the perspective of understanding the problems that need to be solved.
I agree, programming is easy, if the language allows you to formulate code that is logical and doesn't bite you because of "undefined behaviours" or "implementation defined behaviours".
Logo was and is useful in providing you with specific conceptual patterns for writing code, but it does not by itself give you the ability to turn problems into solutions via code.
One can learn the features of Calculus and how to solve Calculus problems, but it is a different matter to learn how to use it as a tool to solve real world problems. That is a different subject altogether. In my undergraduate days, oh so many decades ago, a common refrain among us engineering undergraduates was how often we would be taught something with no reference to how to use that tool for real world problems.
As the years have passed by, I have been pleased to see that in some areas, this disconnect between learning a tool and using that tool for solving real world problems has strongly diminished. There is some excellent material now available that goes a long way to solving this problem. Kudos to those men and women who teach these courses.
It's not. You need someone to patiently and properly explain concepts, though. This applies to any area, not just programming, but programming IMO lacks good teachers.
Comments by banned users get killed by software. Community members who think the comment shouldn't be killed can always vouch for a dead comment (if they have karma > 30) by clicking on its timestamp to go to its page, then clicking 'vouch' at the top. If enough users do this, the comment is unkilled.
Killed comments are visible to anyone with 'showdead' set to 'yes' in their profile. We never delete comments outright except in the rare case where the author asks us to.
> We need to have a conversation about the possible paths to make programming more widely accessible.
You say that like it isn't a robust conversation that a significant segment of the community has not only been having but trying out solutions to for decades.
I think the conversation we should have is why we make everything seem so hard. What a bunch of hooey.
In 5th grade everyone talked about how Algebra was hard. When I got to Algebra, it was pretty easy. I was very surprised! Of course my dad had to explain it to me, because my teacher was incapable of explaining it.
10th grade, same thing with calculus. Thank god my dad knew calculus and was patient. Because my teacher did not.
And so on and so on.
One thing I’ve noticed is that nothing humans have invented or discovered is hard. Learning things requires curiosity, a lot of patience, and someone showing you how to get from A to B in small steps. Overwhelmingly, the inventors and discoverers are the people at the bleeding edge of those small steps who find a next step.
Programming is no different. We need to stop infantilizing people. Most humans are incredibly capable. We don’t need programming languages for children. A child can learn Python or JavaScript. You just have to show them that it’s cool. But instead most adults show kids that sports, video games and Netflix are cool. So, c’est la vie.
Today unfortunately we have a culture of “you can’t, it’s hard, it’s not for you, other people do that.” People always refer to “they” when they’re talking about people inventing something. This is good for the rich and easy for the poor. Who is “they”? What if instead of “they” it was “we”?
>In 5th grade everyone talked about how Algebra was hard. When I got to Algebra, it was pretty easy. I was very surprised! Of course my dad had to explain it to me, because my teacher was incapable of explaining it.
Surely the concept of outliers, survivorship bias, and statistical noise is easy too then!
>Of course my dad had to explain it to me, because my teacher was incapable of explaining it. 10th grade, same thing with calculus. Thank god my dad knew calculus and was patient. Because my teacher did not.
So, how do we produce millions of capable teachers (as opposed to the non-capable current bunch), with extra patience, and each dedicated to one or a tiny group of students?
Because this is not about how subject X is easy, but about how subject X can be made easy (or easier) under ideal conditions.
And even then, only if we assume you were a typical student, and not especially gifted or motivated.
>Programming is no different. We need to stop infantalizing people. Most humans are incredibly capable.
What if the problem is not how to teach motivated people under ideal conditions, but how to motivate or even how to teach unmotivated people, under all kinds of conditions (e.g. working class family, problematic household, poor school district, etc).
"and each dedicated to one or a tiny group of students?" ... I think you are onto something there. Anecdotally many of the 'survivors' that claim $subjects are easy did have 1-to-1 tutor for some parts of the journey.
Well, not everyone has the benefits of an educated parent who takes time out of their day to tutor their kids through calculus. Maybe "everything seem so hard" to people without those kinds of advantages.
He would have learned it regardless of his dad teaching him, because he is smart. Education can't do much to change the population distribution of intelligence.
I don't know about that. "Being smart" isn't a big enough hammer, sometimes.
I'm "smart". I taught myself how to program, and now I'm an architect-level developer that's respected by everyone I've ever worked with. But I'm also a college dropout. I had a terrible education ("Christian" "school" with no qualified teachers) and my parents were no scholars themselves; and I gave up on college because I couldn't get a CS degree without making it through Calc 1, which I didn't even get to until my second year of college because my whole first year (and the summer prior!) was spent in remedial math.
I worked every night in the math lab with the smartest tutor I've ever met. I stayed up until 2a every night trying to get my Calc 1 homework done. But it wasn't enough. I didn't have a strong enough foundation to pull it off. It was too little, too late.
Years later, I think I'd do much better, as learning how to develop enterprise-grade data warehousing and ETL solutions has taught me /how/ to learn. But at the time, I had no good "tools" to use - I just knew how to memorize, memorize, memorize. And it just wasn't enough.
If my stupid, fucking, batshit insane religious-maniac parents had just let me go to a public high school - like I begged them to - all of this could have, and would have, been avoided.
If you are a parent with a school-aged child, and you want to fix a deficiency you perceive in calculus education, it's easier to get strong at calculus and teach your child yourself than it is to reform the education establishment.
I've had professors that I absolutely confirmed did not know the subject they were teaching. I had a web application development class during college where I had to dispute my exam grades every time because the professor would mark things wrong that were not wrong, and I'd have to go show him the code running.
Except for the final, I think he gave up trying to fight it and just gave me a 100 on that one, because I know I missed one of the CSS questions.
We never actually built a webapp in that class, either. Luckily for me, I'd been doing it for years already (older student, this time around), but I feel bad for the other students who were new to it.
That is definitely true, but misses my point. Most students have to deal with this "much bigger problem" but most students don't have a parent with the time, education and patience to teach their kid calculus. The GP just comes off as arrogant and out of touch when he laughs off the rigorousness of calculus while simultaneously humble bragging that his dad was his personal calculus tutor. He also didn't miss an opportunity to shit on normy parents who pollute their children's brains with such banal pastimes as sports, games and netflix. It's ridiculous.
>>One thing I’ve noticed is that nothing humans have invented or discovered is hard.
That is true if you are discovering or inventing it. Trying to understand what other people discovered or invented is hard, because you skip the journey and arrive straight at the results, without going through the dead ends, U-turns, restarts and various other ups and downs of the journey, making many side discoveries (including many meta discoveries) along the way.
There is this anecdote from the life of Richard Feynman: he sort of put his physics textbook aside and worked through all of it on his own, discovering all of it himself. The reason being it's impossible to truly understand things in a deep, fundamental way unless you discovered them yourself.
You can learn anything you want. But you have to embark on the journey of rediscovering all of that on your own.
> "One thing I’ve noticed is that nothing humans have invented or discovered is hard."
This is one of the most absurd and arrogant statements I've ever read on HN.
If this is true, why don't you read some textbooks for a few years and start flawlessly performing surgery? Why don't you write a world-renowned novel?
> "A child can learn Python or JavaScript. You just have to show them that it’s cool. But instead most adults show kids that sports, video games and Netflix are cool."
Lots of kids know Python and JavaScript. They may think it's cool or not. They're right either way because nothing is objectively cool.
I'm a programmer and I myself see it as a means to an end (a fun one, sure). It's not meaningful to most people, not any more than plumbing or chess is meaningful to most people.
People don't only fail to learn hard programming languages because they're lazy or haven't unlocked the magic of programming. Many of them just don't care to learn them or enjoy them, and that's fine.
> why don't you read some textbooks for a few years and start flawlessly performing surgery?
You missed the point because you weren't open-minded enough (admittedly, OP left it ripe for you to jump to this conclusion). What I believe OP was trying to say is that the example you mentioned above isn't some impossibly hard thing that only those with a special gift can do, but rather that with the right motivation, enough practice, and possibly the right teacher, you too can achieve it. OP also did not put a time box on how long it would take (that's on you there), but said it's achievable if the above are met.
You've moved the goalposts from "hard" to "impossibly hard." That doesn't seem like what they were saying to me. The original comment even contrasted it with "pretty easy."
What helped me to learn programming as a 10-year-old child was the 80's programming books targeted at children.
Always full of drawings or little stories, and along the way I got introduced to BASIC, Z80 Assembly and all the intricacies of Spectrum hardware.
I used to think those books were gone, but they seem to have come back for Arduino and Raspberry Pi programming, just focusing on Python and Scratch now.
Giving them a board with such books and some electronic stuff is probably the best way to teach them. Regular computers have too many layers that just hinder learning, and it's hard to see things happening.
The massive problem is, learning Z80 assembly for the Spectrum was tricky, but not overwhelmingly so.
The C64, the Spectrum, the TRS-80, etc, were all "small" machines that were simple to learn.
The Arduino, Raspberry Pi and Scratch are also tricky, but not overwhelmingly so, and simple to learn.
The absolutely massive difference with the environments of the 80/early 90s and today is that
a) The environments learned are heavily sandboxed, caricatured facsimiles of computing
and
b) Learning Scratch won't lay down a mental model you can immediately apply to Python, and learning Python won't lay down a mental model you can immediately apply to C++, Rust, etc.
Learning assembly language didn't quite leave you with an immediately-usable understanding of C, but what it did do is give you a working knowledge of what everything else was based on under the hood. I'd argue that's an even more valuable gift: the knowledge that you understand what the more complex environment is based on - good for confidence - and the knowledge that if you absolutely need to, you can pull everything to bits and unravel bits of it - which immensely helps with discovery.
Nowadays, it's like worst-case simulated annealing. "Climb Mt Everest, then climb all the way back down to climb Mt Everest²."
> Nowadays, it's like worst-case simulated annealing. "Climb Mt Everest, then climb all the way back down to climb Mt Everest²."
I don't think that's a good analogy, I think it's more like "Climb up to base camp, then climb halfway up, then climb ...".
I learnt Python before I learnt C, and while learning C I was constantly having these "Aha!" moments where I suddenly realised why Python is the way it is. I didn't have one mental model for Python and a different mental model for C; I had one mental model for computing which was enhanced by learning C.
And on top of that Python isn't a fundamentally different language to (for example) JavaScript, sure the details differ but somebody who knows Python won't have to spend months learning JS. Arguably a language like Haskell is a fundamentally different language to something like C, but there are plenty of things that you learn when learning C that can be applied to Haskell.
I don't think the issue is all these concepts are orthogonal, I think the issue is that there are just so many concepts. Back in the 80s learning assembly might have got you 30% of the way up Mt. Everest, whereas nowadays it might only get you 1% of the way up.
Just seeing those covers gave me a huge nostalgia-trip, from my own time programming the ZX Spectrum in the 80s. (First in BASIC, then in Z80 assembly via the fine manual.)
No, that's something I don't think I've ever seen before.
I used to read a few of the Spectrum magazines back in the day, Your Sinclair, Sinclair User, etc. But over time they all dumbed down and started to be more about advertising and game-reviews, rather than having a mixture of programming-content and other stuff.
I did get some POKEs printed though, so it wasn't all bad!
While I was a Commodore boy in the 80's, I know what you're talking about and I came across a book that had the same feeling "Clojure for the Brave and True": https://www.braveclojure.com/ - it was like a jump into the past.
There are most certainly human inventions / discoveries that are, in some sense, "hard" (either to discover in the first place, or to explain / grasp). We have a biased view here: we can look back at, e.g., Einstein's general relativity and summarize its development as "well, he just thought to explore the logical implications of having a constant speed of light!" We have that luxury of hindsight (and several decades of steadily improving explanations of the original insight).
Then you go and start reading Actual Science: Minkowski spaces? Tensors? Paradoxes about long objects and barns? Gravitational lensing? Figuring out where to go with that initial idea is decidedly non-trivial.
In the extreme, there are papers at the bleeding edge of mathematics that can only be meaningfully reviewed by half a dozen people worldwide. Understanding these papers (or anything at the limits of species-wide knowledge) is a far cry from grasping algebra / calculus: eventually you hit a point of diminishing returns, where further advances in your understanding of the problem require months or even years of dedicated study...
...and yet (assuming we refrain from destroying ourselves) these same problems may eventually find their way into high school textbooks, where they will be succinctly (and approximately / incompletely) summarized so as to make them look easy and incremental. That's great! It means we've permanently moved the pedagogical starting point forward for future generations, so that they can go and struggle with truly hard problems that we don't even know to ask.
My point: yes, some things are hard, even if most are not. I'd say that the trick to advancing humanity lies less in convincing students that everything is easy given enough time / patience / curiosity, and more in getting them to stick with hard problems: to normalize failure, as it were. That said, I do agree that we should avoid presenting things that aren't truly hard as though they are :)
Having this conversation on HN risks bias. Most of us have been selected for our cognitive skills. Regular people have a terrible time learning programming. See https://cacm.acm.org/blogs/blog-cacm/224105-learning-compute...
I suspect that programming is so hard not because it is necessarily so, but because it has been developed by people like us for people like us, who can tolerate or even enjoy massive complexity.
From teaching programming for some time to "ordinary" people, I can say there is a difference between programming languages. Many have accidental complexity, like compilation, tooling, linking, braces and APIs. With some, like LOGO and BASIC, I have much better success rates in making people understand programming. I produced a lot of cargo-cult developers when I started with Python or Java.
programming is not impossible. a lot of us do it. maybe as you say more people should instead of believing it to be impossible.
but most of us that do are pretty frustrated at how much time it takes, and how many artificial concepts we have to become accustomed to.
compared to the systems we use, the fundamentals of information processing seem really quite simple.
maybe there is a way of describing simple programs that doesn't require months or years to master. I certainly wish personally that little ideas that occur daily didn't take weeks to realize... and I'm pretty sure, given the simplicity of some of those notions, that there isn't anything fundamental that requires that amount of brute-force work.
and if there were a system, that cut a bit more quickly to the chase - maybe programming would become a useful tool for people that don't have the time or temperament to spend a couple months learning how.
> Today unfortunately we have a culture of “you can’t, it’s hard, it’s not for you, other people do that.”
"The brick walls are there for a reason. The brick walls are not there to keep us out. The brick walls are there to give us a chance to show how badly we want something. Because the brick walls are there to stop the people who don’t want it badly enough. They’re there to stop the other people."
Lately, I’ve been learning sleight of hand and card magic and I’ve seen this mentioned in the magic community too. It’s advised not to teach friends who ask you to teach them, instead you should point them in the direction of self study resources (books, courses, search keywords) and let them figure it out for themselves. The reason given for this is that many people want to learn just so they know how something is done, but are not motivated enough to actually put the hard work (practice) in. By learning how it’s done without practicing, they ruin magic for themselves (you see this a lot in YouTube comments, where people don’t enjoy a routine for the skill and performance that it is but instead feel the need to figure it out and reveal it to others, often in a derogatory way “I figured it out, it’s shit” but of course they figured it out if they can watch frame by frame over and over...)
Basically what I’m attempting to say is that the brick wall is a useful tool to make sure that the people who want to learn without putting any work or practice in are filtered out from the ones who want to learn and do put the practice in.
I recently said to someone that I believe that (almost; obviously disabilities and such exist) anyone can learn (almost) anything, all it takes is three things:
1. Motivation. You have to want to learn it. In the context of learning to program, I do believe almost anyone can learn it, but many people have no interest in programming and they will struggle if they try to learn.
2. Practice. It takes time to get good at anything. If you don’t put the time in, you will struggle.
3. A good teacher. Sometimes even with motivation and practice, some things are still really hard. A good teacher can break things down or give you a different perspective. A good teacher makes a massive difference. You don’t always need a teacher (good or otherwise), but some people will. Sadly, not all teachers are good and not everyone has one.
The big assumption you are making is that we have already found all the 'algebra and calculus' models - i.e. the good mental tools - in computing.
You could do division in Roman numerals, and perhaps people even then justified it with "math is just hard, learn and deal with it", but things can get easier when you have better models, better representations and better mental tools. A big advantage of computers is that they are themselves dynamic and interactive, so they can actually amplify those mental tools.
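The Roman numeral point can be made concrete with a toy sketch (the helper below is my own illustration, not anything from the thread): once you convert to positional notation, division becomes a primary-school algorithm instead of a specialist skill.

```python
# Toy illustration of "better representations make things easier":
# arithmetic that is awkward in Roman numerals is trivial once the
# numbers are re-represented positionally.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Convert a Roman numeral string to an integer."""
    total = 0
    # Pad the lookahead so the last symbol compares against 0.
    for ch, nxt in zip(s, s[1:] + " "):
        v = ROMAN[ch]
        # Subtractive notation: IV = 4, XC = 90, etc.
        total += -v if ROMAN.get(nxt, 0) > v else v
    return total

# Division "in Roman numerals", via the better representation:
print(roman_to_int("XCVI") // roman_to_int("XII"))  # 96 // 12 = 8
```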
I feel the problem is the educational system. At least in my country it is optimized for grading what children know, but it fails badly at teaching or motivating them. So children are forced to learn by themselves or find someone to help, and that is just enough to pass exams. This is not fun. They are learning things they do not understand and don't know how to use for anything other than the exam. All attempts to improve this usually start by redesigning the exams, which doesn't help, but it is the easiest approach with fast results.
you are smart. the vast majority of the population is dumber than you. the average person will struggle to master basic algebra and simple programming concepts.
Squeak and Pharo are both pretty obviously prior art for Eve. For example, the bouncing balls demo someone linked to elsewhere in this thread looks very similar to Squeak + EToys, albeit with a dark theme.
Squeak and Pharo could both use more developers, BTW:
While Eve is an interesting project, the main issues that make programming hard are the same ones that make law hard:
1) Specifying something in an unambiguous way.
2) Knowing what assumptions you are making.
3) Anticipating contingencies.
Read a contract or the tax code. Even though they are using English (in the US), it requires an advanced degree to be able to craft such legal documents well.
It is the same thing in software. No matter how easy and intuitive you make the language and environment, you will still run into those 3 issues, and it will require dedication, training (whether formal or informal), and time to make a great programmer.
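Those three issues surface even in trivial code. A hedged toy sketch (the function and its spec are invented for illustration):

```python
def average(xs):
    # 1) Unambiguous specification: what should "average" mean for an
    #    empty list? An English description hand-waves this; code cannot.
    # 2) Assumptions: the obvious sum(xs) / len(xs) silently assumes
    #    xs is non-empty.
    # 3) Contingencies: the empty case has to be anticipated explicitly.
    if not xs:
        raise ValueError("average of an empty sequence is undefined")
    return sum(xs) / len(xs)

print(average([1, 2, 3]))  # 2.0
```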
Yeah, people talk about necessary complexity vs unnecessary complexity.
"Unnecessary complexity" is all those arcane coding rituals that we do, that our users don't know or care about, but we have to do because our current tools demand it.
We can't get rid of necessary complexity and so we probably can't design a programming interface for someone that is completely illiterate in math & logic.
But what we can do is 1) Eliminate unnecessary complexity, and 2) Provide helpful, structured, interactive, learnable tools, which help guide the author as they do all the "necessary" steps. If we just did that then that would be a huge improvement from the state of the art.
> "Unnecessary complexity" is all those arcane coding rituals that we do, that our users don't know or care about, but we have to do because our current tools demand it.
What exactly do you mean by this? I can think of many things that fit this description but none that don't have good reasons to exist or remain.
In the 80's, you would turn on your C-64 or ZX-Spectrum or Oric-1 or Dragon-32 or Atari 800, and type:
PRINT "This is nice"
and it would.
Then, you would type:
10 FOR I=1 TO 10
20 PRINT "This works too! Number ",I
30 NEXT I
40 GOTO 10
And it would.
Today, you go to your computer and ...
open an IDE, define a project, pick your Java environment, select "file new", write some random incantation of "public static void main", accept the suggested missing imports if you are lucky (if your IDE doesn't figure them out, you have to add them yourself), and after a while you actually start running.
In the BASIC example, the only unnecessary details are the unreferenced line numbers 20 and 30, which indeed became optional in later BASICs. In the Java example, everything I described above is incidental. It is not useless in the grand scheme of projects with millions of LOC and tens of developers -- but it is unnecessary for small examples. Java does not scale down.
Python mostly scales both up and down, but it still isn't as easy to start writing stuff the way it was in the 80s.
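For comparison, the BASIC listing above translates to a few lines of Python with no project setup (wrapped in a function here, and made finite, purely so the example terminates):

```python
def basic_demo():
    # The FOR loop from the BASIC listing above, line for line:
    out = []
    for i in range(1, 11):                           # 10 FOR I=1 TO 10
        out.append("This works too! Number %d" % i)  # 20 PRINT ... / 30 NEXT I
    return out
    # (40 GOTO 10 would be a `while True:` around the loop)

for line in basic_demo():
    print(line)
```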
The most popular programming environment, by far, is a spreadsheet - it's not imperative or object-oriented, but rather functional. And it also fails to scale up or down.
I appreciate this response... As you say, it's not too bad, although the {1..10} and the semicolon would be more confusing to the uninitiated (based on the fact that they are confusing to many bash veterans as well...)
Now, how do you replicate
SAVE "myprogram"
and a later
LOAD "myprogram"
RUN
without having to teach vi, redirection, chmod, source, $PATH, and friends?
My point is that the old environments, while limited, scaled from zero to useful with far fewer requirements than today's environments, which scale much higher at the expense of a beginner's joy of experimentation.
I do get what you mean; my 48k Spectrum was certainly a simpler machine than my current laptop!
But I still think there are simple things that beginners can do. The whole SAVE, LOAD, RUN concepts also had to be learned. And you only need to know some text editor to be able to save a file. It can then be run with `bash myprogram`. And then later you can add the concepts of adding a shebang and making the file executable.
Also, the huge advantage of teaching shell commands is that it is actually, currently useful. I use those commands every day at work, despite having learnt them initially on an old Sun server many moons ago. I can't say the same for Spectrum Basic! :)
You raise a good point, but we could replicate that with a couple of trivial shell scripts too. Throw a curses library (https://bashsimplecurses.readthedocs.io/en/latest/) into the default profile and you've got an environment capable of simple apps and games too.
Can't try on an Android right now, did watch the YouTube and looked at the examples.
First, it looks awesome. You should do a Show HN sooner than later.
Having a few decades of experience under my belt, I'm sure I'm blind to some things newcomers/starters would see, but what I did notice:
- Camel case is bad for mobile typing, and bad for non-coders in my experience; underscore is better for the latter but not for mobile typing, so that's not really a solution unless you can add an underscore or some other character (non-minus-dash? center-dot?) that can be pressed directly.
- Case sensitivity in general is best avoided in beginner languages, which if adopted might make the above point moot.
- I personally find the pairing of "end" and ":" confusing, possibly because of years of "begin/end", "{/}" and "indent: dedent" experience. Most people seem to have a better intuitive grasp of the indent/dedent than any other block scoping, but for mobile I guess ":/end" is acceptable although "{/}" might be better.
- What is the distinction between "let" and "variable" ?
- The "#number" syntax is not intuitive - I guess it's a replacement for function names or line numbers, for reference purposes?
- One of the conceptual problems that some people have is with the "=" sign for assignment, as it conflicts with its use in mathematics. Everyone overcomes it sooner or later, but when I've shown APL (where assignment of 3 to A is written A <- 3) or Pascal (A := 3) to non-programmers, this confusion never comes up. I would recommend keeping = for comparison, and using another symbol for assignment.
- Consider adding a variable value display. I've used PythonTutor[1] to teach basic JavaScript and Python non-programmers with great success. A complete value trace is likely beyond the scope of codechat, but a variable display might be very useful.
All in all, I think it's awesome. Are you familiar with Atari VCS 2600 BASIC[0]? That's what it reminded me of.
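The "=" conflict mentioned above is easy to demonstrate in Python (a generic C-family illustration, not codechat syntax):

```python
# In mathematics, "a = 3" states a fact. In most C-family languages,
# including Python, "=" assigns and "==" compares -- the exact
# confusion non-programmers hit.
a = 3              # assignment (APL: A <- 3, Pascal: A := 3)
check = (a == 3)   # comparison, a visually similar but separate operator
print(check)       # True
```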
Case sensitivity: I'd prefer to make the language case-insensitive (or at least avoid uppercase in anything builtin), but then I'd need some other way to distinguish types and other identifiers. Any suggestions? There is already an issue for this: https://github.com/stefanhaustein/codechat/issues/1
Blocks: I was using {} before but it was inconvenient on mobile. Significant indent also seemed tricky. So I went for this combination for now.
let vs. variable: let declares a constant. Typing effort for let vs. variable is on purpose O:)
The #number syntax is for identifying specific instances if they are not explicitly named. Makes sure editing an "on x" trigger replaces it (opposed to adding another one).
Assignment syntax: Will change to := as suggested.
Variable value display: Will add "something" to do this.
I wasn't familiar with Atari VCS 2600 BASIC, but I must admit that my first programming language was (Spectrum) BASIC and I am trying to avoid some of the issues I found confusing in 1983 O:)
Seriously though, not being thrown into a program editor at boot is not what prevents getting started with programming. It's probably the fact that you can do non-programming things on the computer. Back then, anyone using a computer would know programming almost by definition.
Here's some really silly examples. They all have reasons, and it makes sense in context, but it's part of the deal.
- Java code needs to be in a class. I can't open up a file and start typing immediately
- bool(midnight) being false (a Python quirk, since fixed in 3.5)
- C header ifdef silliness to make sure you don't double include a header
- circular dependency import issues in many languages
- strings and bash as the lowest common denominator meaning that almost everything that glues commands together must first write a parser for the output of `ls`
- our not putting encoding explicitly into files meaning that there have been decades of people getting weird glyphs when sending files to each other.
- implicit str to bytes and back in Py2 causing billions of spurious decode/encode calls. And many more incorrect calls
All this stuff is small, and has reasons. But it's also stuff that's been around for decades and decades and cascades into massive weirdness. Death by a bazillion idiosyncrasies.
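To pick one item from the list: the implicit str/bytes conversions Python 2 allowed are exactly what Python 3 made explicit, which is why the spurious calls went away:

```python
# Python 3 makes the text/bytes boundary explicit: the encoding is
# always stated, never implied as it was in Python 2.
text = "héllo, wörld"
data = text.encode("utf-8")        # str -> bytes, explicit
roundtrip = data.decode("utf-8")   # bytes -> str, explicit
print(roundtrip == text)           # True
```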
One finding by the people doing Eve is that common users don't get lexical scopes. If I give something a name, why can't I access it everywhere using that name? Scopes are an artifact of how programming languages have evolved as an abstraction over the runtime stack; that complexity is unnecessary.
(Note that this is orthogonal to both namespaces - i.e. ways to handle naming conflicts - and encapsulation - providing a single entry point for changing a variable. Lexical scope was not created to solve either of those.)
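The surprise is easy to reproduce; a minimal Python illustration of the "why can't I access it everywhere?" question:

```python
# A name bound inside a function is invisible outside it -- the exact
# lexical-scope surprise non-programmers hit.
def f():
    inside = "hello"
    return inside

result = f()          # the *value* escapes the function...
try:
    print(inside)     # ...but the *name* does not
except NameError:
    print("`inside` is not visible here")
```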
Correct me if I'm wrong because I'm not familiar with it. But isn't that a way to declare dependencies and share the information with other developers? That's something that has a good reason to exist. Replacing that functionality would involve a way more complicated solution to the same problem.
Everything depends on the scope. Webpack is a complicated tool solving a complex problem, which makes it useful in large projects. But it's also way overcomplicated and unnecessary for small projects. A lot of incidental complexity comes from misapplication of powerful tools to simple problems.
An upper bound on the amount of information you need to transmit to have something built the way you want is however much you would need to tell a good developer about what you want before they could make it to a satisfactory degree.
And an upper bound on how long it can theoretically take to build something is the time to transmit that information to a human, plus some small epsilon: computers can run very fast.
So getting a computer to do what you want ideally should not be close to as slow and difficult as it is today. This is sort of the true goal, rather than formally specifying what you wish the computer would do. If you believe the goal is formally specifying what the computer does on top of bulletproof abstractions, we're already very far away from that - programmers today don't tell a CPU how to do branch prediction, or a compiler how to optimize their code, for instance.
Probably a really good system could do even better than the upper bound above - what if you knew what someone wanted before they could even describe it to you, and you built that for them?
Agreed, it's hard and it takes a certain type of person to be effective at it. I'd go a step further and say that tools designed to make programming "easier" or for the masses are not only a wasted effort but actually detrimental (I'm not talking about learning environments such as MIT Squeak here; that is altogether different).
We need better tools that let someone who masters programming to take their vision and realize it in production ready code as quickly as possible. This is a very different goal from "allowing a newbie to pick up a programming language in 15 minutes".
I desire master tools. Tools that I know if I invest the time to learn (whether that takes months or years) will allow me to achieve unparalleled productivity as a programmer.
Another good example of the issue here is any programming language that was meant to be able to be read/used by non programmers. SQL is a great example there in my opinion. The syntax and structure is clearly meant to try to mimic relatively natural language, yet in practice I think SQL is vastly more challenging and complex than most any programming language intended to be used by programmers.
Of course this goes all the way back to the programming languages in use around the time of SQL's development, such as COBOL. They're just really tough to work with, even for those of us who learned one as one of our first languages. Anecdotally, at least, C-style languages now feel 'natural' to me, whereas the natural-language programming languages always felt (and feel) like trying to fit a square peg in a round hole.
> The syntax and structure is clearly meant to try to mimic relatively natural language, yet in practice I think SQL is vastly more challenging and complex than most any programming language intended to be used by programmers.
I don't; and the fact that I've known plenty of people who are not programmers before and do not consider themselves such afterward who have picked up SQL fairly quickly seems consistent with it not being that much of a barrier.
I think SQL had some design decisions that are particularly frustrating to programmers who are familiar largely with currently popular general purpose languages, though.
I find SQL to have a usage profile that's, not surprisingly, similar to other declarative languages. They tend to have a sweet spot of functionality that, if you stay within it, makes them very easy to use even for less experienced people.
But outside the sweetspot, things tend to go south very rapidly and you end up needing detailed implementation knowledge that the language will fight against.
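A toy sqlite3 session (table and data invented for illustration) shows the sweet spot: a plain filter-and-aggregate query reads almost like English, which is consistent with non-programmers picking SQL up quickly.

```python
import sqlite3

# In-memory toy database illustrating SQL's declarative "sweet spot".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

# Inside the sweet spot: a non-programmer can read this aloud.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer").fetchall()
print(rows)  # [('alice', 17.5), ('bob', 5.0)]
```

Outside the sweet spot (correlated subqueries, window functions, query-planner quirks) is where the detailed implementation knowledge mentioned above starts to be required.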
I'm sorry to say but I am happy to see that this is coming to an end.
These guys have delusions of grandeur. At best, they're good researchers looking into new and simpler programming paradigms. But the way they're talking about it is just horrific to me. It's full of "imagine a world where you could just..." and "we're building that world for you" BS.
I just hope Eve's failure is going to be a lesson in humility.
The BS-heavy style of Chris' communication is indeed extremely off-putting, unfortunately, but it doesn't get any better if you ignore the style and go straight to the content. LightTable was an idea I rooted for, but almost every aspect of its execution was broken, and ultimately LT got dumped anyway as soon as it started getting bigger than a tech demo. They then started working on Eve, which was a much more ambitious and much less precise idea, and tried to make it into a business at the same time, on a typical startup timeline. There was no way it could have worked, I think.
Meanwhile, the IDE and editors situation stays the same, stuck perpetually 20 years behind Smalltalk and Common Lisp environments of the '80s and early '90s.
Their communication showed only the amount of hyperbole that is typical of every startup; it may be more irritating to many technical folk because, unlike at most startups, programming was the problem domain and not just a tool used in the solution, making the hyperbole more obvious than normal.
> I just hope Eve's failure is going to be a lesson in humility.
Eve's failure does nothing to change the structural factors that drive hyperbolic public messaging in the startup world.
Yes! All my real world interactions with Chris Granger have been very positive, he doesn’t come off as arrogant at all. I admit that I and others were initially put off by the hyperbolic messaging, but I see the value in it now.
That's standard startup speak, though. How else are you going to get investors and early adopters excited? Everyone is competing for everyone else's fleeting attention this day and age.
That's exactly the problem: building a programming language doesn't fit - I think - into the startup framework unless you're prepared to be in stealth-mode for a decade. Even the biggest tech companies have problems with making their languages mainstream - for every success-story like Go or Swift we have some kind of Dart or Dylan that simply didn't work out, despite companies investing massive resources into them.
> How else are you going to get investors and early adopters excited?
With a programming language? It's easy: have a good, battle-tested codebase which implements your killer features efficiently and correctly, supports non-primary features just as well, and has a huge collection of written (open source) software/examples in that language.
That's why I think PLs are not easy (or maybe even impossible) to make into a startup. LightTable had a much higher chance to work out, IMHO, because it was well-defined, was not without precedent and some other similar projects proved to be viable businesses (for a time).
EDIT: I forgot, the Red[1] lang team is trying to do something similar, i.e. write a language and be a profitable company at the same time; we'll see if they manage, but then again, they are looking to "just" displace REBOL instead of "revolutionizing the way you think about programming" (again).
I've always admired Eve but thought it was a longshot. Had it worked it would've changed the future of computing.
I'm also working on making programming easier but instead of taking a moonshot approach -- where you build the language, the IDE, and the tools -- we're making existing tools and languages easier to use.
I hope Chris doesn't give up, there's a lot more to do.
I don't think there is a better possible illustration of losing focus than that.
You start with the goal of making it easier for people to deal with real-world problems through unobtrusive programming, citing Excel as one of your key inspirations; you build a unified browser- and server-side runtime by applying (and, if I understand some of the dev blogs correctly, perhaps significantly advancing) cutting-edge research on rules engines / production systems / databases; you design and implement a simple, expressive, beginner-friendly language around the semantics of that runtime; and your demo of how all of that comes together is... using an IDE property-editor-style interface to manipulate bouncing balls?
I really want to like this, but it's really not much more impressive than this, once you take into account the fact that the author clearly memorized the exact demo steps: https://www.youtube.com/watch?v=UIZO1TKPlzY
As someone making something in the same general space as Eve, very sad to see this happen. Thanks for taking a shot and proving that there are other ways of thinking about programming!
Just curious what's the general space that Eve is in? Programming environments for non-programmers?
I applaud the ambitious efforts of both Eve and Light Table -- it's definitely worth trying, and the negative result is valuable too. But part of me wonders if they were tackling too big a problem, or at least MARKETING a (potential) solution to too big a problem.
IIRC, I believe they figured out through talking to users (which they did, which is good!) that most people want to do stuff with data. For example, teachers might want to do some simple analysis of homework assignments and students. And there are many tasks that fall into the category of what you would do with "mail merge" in Word or a macro added to an existing spreadsheet.
I wonder if they would have had more success by nailing a narrow use case and expanding outward. The marketing always felt too "general". I feel like a lot of non-programmers want to get their jobs done more than they want to learn programming. You probably have to trick them into programming. Even 10 or 20 hours of up front "learning" is too much.
FWIW, I've worked with technical artists in games (e.g. writing MEL scripts in Maya), and statisticians doing big data (e.g. writing R), so I have thought a lot about non-programmers programming. And I've also looked at and modified their code.
There is probably a more polite way to ask this, but I think it has to be asked. How many people in the world have the IQ to write a program of say 100 lines that does something useful? If you have a job as a technical artist, a statistician, or an actuary, I think you fall in that category. (On the other hand, it's also true that not everyone with those jobs can write a program that works, let alone that can be maintained by their coworkers.)
But there are plenty of people who can barely use computers. I remember that my former boss (a very empathetic man) was recruited by a neighbor to teach his son programming. The son played a lot of games, and was interested in making games. And my boss's realization was: "this person needs to learn how to use a keyboard and mouse first".
I think with the rise of phones and tablets, you might see basic skills like text editing going down the drain. Contrary opinion: if you don't have the patience to learn how to use a text editor to say write HTML files, you don't have the patience to learn programming OF ANY KIND, even visual programming. (Incidentally, there is a similar inspiration behind Raspberry Pi -- basic skills like knowing how hardware works had gone downhill for Cambridge students since the 80's.)
Anyway, this was a random bunch of thoughts, from someone also working on a programming language. But I'm working on a language for experts, not for beginners. (It's a better Unix shell, which both developers and non-developer professionals -- like sysadmins, data scientists, technical artists -- use.)
The point is that I'm wondering what space Eve was really in. When I look at their website, I see
A moonshot to make programming accessible to everyone.
Let me conclude with some tactical advice. If you want to take on a problem as big as the ones I've discussed, don't make a direct frontal attack on it. Don't say, for example, that you're going to replace email. If you do that you raise too many expectations. Your employees and investors will constantly be asking "are we there yet?" and you'll have an army of haters waiting to see you fail. Just say you're building todo-list software. That sounds harmless.
So basically I think it would have been better to just tell everyone "we're making a better mail merge" or "we're making better Excel macros", or even just a better SPREADSHEET, which is itself an enormous problem, but actually smaller than "programming for beginners".
Then you'd be helping them get their jobs done without all the programming mumbo jumbo. But the whole time you could be trying to generalize it into a way of programming in other domains, for programming "in general". Just like Python can be applied to many domains.
-----
Also, even if you SUCCEED at making a better language/IDE that subsumes what people do with mail merge, excel macros, and every spreadsheet in the world, you HAVE NOT succeeded at overhauling programming! Programming is a very big field now!
There are / will be languages for distributed computing with various consistency guarantees, languages for computing with privacy, with security, languages for quantum computing, languages for machine learning. Those will not be touched by something like Eve or Light Table.
That's one of the reasons I'm interested in shell -- because in many contexts, it's the lowest common denominator that connects all these different domains.
> How many people in the world have the IQ to write a program of say 100 lines that does something useful?
Have the IQ necessary? Probably 90-95%.
Have the training? Significantly lower. The training required consists of being taught both a language/medium to express thought in and a structured, cause-and-effect method of thinking.
There's this incredibly pervasive view in a lot of areas that programming requires some amazing IQ. It requires about the same level of IQ as reading and writing. But just like reading and writing, it requires training, and the younger you start, the easier it is. If someone approaches it later in life, then having already learned, or being familiar with, transferable skills and ways of thinking helps immensely.
This is actually the thesis behind Eve. Humans are incredible problem solvers, and we did it for millennia without computers. Computers are great tools for problem solving, but for many problems they are obtuse not because they are inherently hard, but because we have made them unnecessarily so. Take away all that ancillary complexity, and you're left with a tool and the essence of a problem. Most reasonably intelligent people should be able to take it from there.
Even assuming that programming is no harder than writing (which I highly doubt), I'm doubtful that 90% of people can write 100 lines of effective prose, even after 16 years of schooling. For what it's worth, I'm not quite sure I could.
How many of them want or need to use Macros/VB properly?
What people know and don't know is often strongly impacted by that, sometimes more than the underlying difficulty.
By "general space" I meant "rethinking programming to a certain extent".
Our particular version is at http://ellenandpaulsnewstartup.com. A good way to think about it is "build instagram in an afternoon", though our internal guiding light is lowering accidental complexity for backend apps.
Our goal is to make it easier for existing programmers to code (which will have the deliberate side-effect of enabling non-programmers and programmer-adjacent roles to code too).
I've always wanted to make a programming language, but I just tell everyone it's a shell, since that is more understandable and less "threatening" [1]:
The thinking is that if I fail to make a new programming language, which is the 99% case, then at least I can make a "better bash", which I know people will use if it works.
It was on the front page last week, although this is the second time it has become a flame war between sys admins and developers:
Anyway, I think the shell has a lot of potential for beginners because it's the first "language" you'll encounter when booting up a PC. My first "language" was actually MS-DOS batch files!!! (Although note that the project isn't prioritizing beginners over experts.)
The shell is just stuck with decades of legacy, to the point where people actively avoid it (as shown by the HN thread). But the only way to "kill bash" is to reimplement it, which I have almost done! So now that I have a clean architecture / prototype, the plan is to speed it up, and evolve it into a different language without the legacy.
There will also be lots of extensions, like a built-in awk paradigm of streaming, and a built-in make paradigm of incremental and parallel computation, etc. There are a lot of good concepts in Unix which are not even available in Python, but they're hidden behind these crappy '70s-style macro languages. It's interesting that shell, awk, and make were developed around the time when people were first figuring out how to write parsers! Historically, awk and yacc were almost co-developed.
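To make those two paradigms concrete, here is what they look like with today's separate tools (plain POSIX shell, awk, and make -- this is only a sketch of the concepts being referenced, not Oil syntax):

```shell
# The "awk paradigm": a declarative, per-record transformation over a stream.
# Sum the second column of whitespace-separated records:
printf 'alice 3\nbob 5\n' \
  | awk '{ total += $2 } END { print total }'    # prints 8

# The "make paradigm": incremental, dependency-driven recomputation.
# out.txt is rebuilt only when in.txt is newer than it:
printf 'out.txt: in.txt\n\ttr a-z A-Z < in.txt > out.txt\n' > Makefile.demo
echo hello > in.txt
make -f Makefile.demo out.txt    # first run: builds out.txt ("HELLO")
make -f Makefile.demo out.txt    # second run: up to date, recomputes nothing
```

The point is that each of these is its own mini-language today; a shell that offered streaming and incremental rebuilds as first-class constructs would fold them into one.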
When I was learning about Linux I used to find bash cool; then I moved to Windows for a long time (for game development), and when I came back I couldn't hate it more. It needs to be deprecated in favor of something better.
I mentioned I would write a post called "Python Is Not an Acceptable Shell". I think it's also clear that Perl isn't an acceptable shell, simply because shell still exists. And awk and sed still exist -- in fact I believe it's common these days to use shell/awk/sed but NOT Perl, which means that Perl fell short of its mission. (I certainly prefer shell/awk/sed to Perl.)
There is overlap between Oil and Perl, which is ironic because I've written probably 100x-1000x the Python code that I have Perl code. But I would say I'm tackling some of the same problems, but not using the same solutions.
An obvious thing is that Oil will not be nearly as line-noisy as Perl, which is the thing most often complained about (rightly or not).
Perl also doesn't have some obvious shell constructs like pipelines. I'm pretty sure you just "shell out" for such things, which isn't a good solution.
I've been looking at Perl 6 too, and there is overlap there too. But I honestly think a new shell has more of a chance of being adopted widely than Perl 6. There is apparently still an unresolved schism between Perl 5 and 6:
As far as I can tell, Perl 6 is even further from a shell than Perl 5 is. Python 3 is also less suited for shell-like tasks than Python 2 is (due to its Unicode conventions).
As a tool that touches the big problem indirectly, I'd want to have a good tool for data cleanup ("data wrangling") that felt more WYSIWYG.
Programmers have their sed and their grep and their regular expressions APIs in Python, but that doesn't help someone who doesn't know programming languages or how to build a precise expression to catch data, and prefer point-and-click and copy&paste.
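For a concrete illustration of the gap: here is a hypothetical cleanup task (normalizing slash-separated dates to dashes) that is a one-liner for a programmer and completely opaque to a point-and-click user:

```shell
# Rewrite dates like 2018/04/03 as 2018-04-03 using an extended regex.
# (sed -E is supported by both GNU and BSD sed; older GNU seds spell it -r.)
echo 'submitted 2018/04/03' \
  | sed -E 's#([0-9]{4})/([0-9]{2})/([0-9]{2})#\1-\2-\3#'
# prints: submitted 2018-04-03
```

Everything here -- the capture groups, the backreferences, the delimiter choice -- is exactly the kind of "precise expression" that a WYSIWYG data-wrangling tool would need to hide.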
I'm aware of OpenRefine, but it's somewhat hard to feed it data and create stable cleanup workflows.
Yes I think that is a good use case. It has the benefit of being concrete and useful, yet there is less notion of "time", which is a difficult part of programming.
There are a bunch of academic papers about "example-based data cleaning", but I don't know much about them. I think the idea is that you provide examples and the system tries to deduce a data cleaning function. Of course, if you do this for the user, then they might learn fewer programming concepts, but be able to get the job done more quickly. So it's a tradeoff.
I like the approach of the (experimental) Lapis editor, which allows you to select several examples and it infers a selection rule written in an English-like language.
And it works on plain text, not just tables.
You can tweak the rule, and then simultaneously edit all the records selected with it.
Programming languages for non-programmers need to be embedded in a vessel.
Excel is probably the quintessential vessel. When people are trying to solve a need they have, they will invest the effort to figure out the tools, including learning enough programming to solve their problems.
The Eve team emphatically, from day one, had this stance.
It is perhaps indicative of a key strategic error that the course of the public work on Eve did not involve as clear a picture of the target vessel as of the programming language, especially given the churn on the PL implementation that dominated the last phases of the public effort on Eve.
I'm very sorry to see this. We need investment in programming languages and environments – and right now there is pretty much zero of it. No, new programming concepts and tools will not pay off immediately, but in the long term they can be incredible efficiency multipliers, empowering pros to do more work in less time and enabling non-pros to program in the first place. As long as no one is interested in funding this kind of research & development, we stay where we are today.
There's huge investment in programming languages and environments. It wouldn't surprise me if there's more than there has ever been at any one time.
And it's entirely healthy that some fail. That's a primary means of learning. I would anticipate that some of the ideas from Eve will find their way into other environments, as did ideas from LightTable before it.
Languages: Swift, Go, Rust, Julia, Haskell, JS, C++ etc etc
Dev envs: VSCode, Electron, more build systems than you can shake a stick at, VMs, containers, playgrounds etc
Non-textual programming. For general purpose programming, I can't think of any. There are some dead ends that will probably remain so. For niche programming, there are loads.
And to my point, Lisp, Smalltalk and Plan 9 are all good examples of failed experiments that folk took ideas from and used in subsequent projects.
Now, you may not like the investment and innovation. It may not be what you had in mind. Each to their own. But there's plenty of investment and no small amount of innovation.
No, it's not about personal preference. It is about what is (im)possible when the only context considered is the status quo. The technologies you list are innovative but, in the end, just iterations of inventions from the 1970s. The '70s were special because what happened then was the result of increased spending on science and research in the '60s, after the Sputnik crisis [1]. Today's R&D is not focused on what could be possible in 10, 20, or 30 years but in 1, 2, or 3 quarters, since the companies funding the research need to be focused on their bottom line. The Eve project was special in that it received funding not tied to short-term goals – but only in an amount that, in the end, wasn't enough. Similar things have happened elsewhere, and this bars the way to more meaningful inventions [2].
[1] The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal. Mitchell Waldrop.
This. There is little funding for the kind of research that will lead us out of the tarpit (and less with the demise of HARC). My hat is off to Eve's investors for being unusually far-thinking for SV. But ultimately this is the responsibility of our research institutions, which unfortunately have other priorities.
Kind of saw this coming after the third reimplementation in yet another language. And why Rust? 2 years ago they decided to use Rust to implement a new language (IIRC Rust wasn’t even 1.0 at that time). That’s a huge amount of technical risk.
I wanted to be excited about Eve, but it was always too light on details and carried too much risk to ever succeed, and after several pivots it failed.
This story would probably have been a bit different if only it had been a bit less ambitious and a lot more pragmatic.
As an Angel investor on the Eve project even though I lost my investment, I think that Chris, Robert and the whole team deserve recognition as they changed the whole industry, and now everyone is following them. I am proud to have had the chance to be involved with them!
Huh? What industry are you talking about? The software development industry? Their influence was not even a rounding error. The let's-rethink-programming-completely industry? No idea, but that's not really an industry.
They influenced Swift and a whole bunch of other interactive IDEs with the research they did. Don't use an iPhone or Android if you don't want to use these new languages.
The only quote I can find about swift and Light Table is this:
> The Xcode Playgrounds feature and REPL were a personal passion of mine, to make programming more interactive and approachable. The Xcode and LLDB teams have done a phenomenal job turning crazy ideas into something truly great. Playgrounds were heavily influenced by Bret Victor's ideas, by Light Table and by many other interactive systems.
No worries, I understand and have gratitude towards the folks working on it as well. But I don't agree with that they changed the entire industry.
The second link you posted mentions nothing of Swift. The first one has the following quote about Swift:
> The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
Which seems to agree with the previous quote I posted. Light Table inspired Playgrounds, not Swift.
yeah, good point. Sorry I said Swift, but I meant Playgrounds. I guess I see Playgrounds as the interactive IDE part of Swift, like Visual Studio is for C++, so I don't really separate the two in my mind. Anyway, we can agree to disagree on the other point, I just like the discussion :)
Kickstarter uses “Swift Playgrounds for iterative development and styling. Most major screens in the app get a corresponding playground where we can see a wide variety of devices, languages and data in real time.” (The Kickstarter iOS app is open-source [0]; see their playgrounds [1].)
Here’s [2] Brandon Williams’ talk Finding Happiness in Functional Programming (starts at “Playground-Driven Development,” but I suggest watching the whole talk), [3] the Swift Talk episode “Playground-Driven Development” (Brandon Williams and Chris Eidhof), and [4] the Swift Talk episode “Test-Driven Reactive Programming” (Lisa Luo and Chris) wherein they build a small but complete PoC all within Playgrounds.
There are many more examples, such as Chris Lattner’s own Dynamic Member Lookup proposal [5] Python Interop playground [6] and Reactive Swift’s getting started playgrounds [7].
Playgrounds are very much entangled with Swift’s core (value proposition).
Thanks a lot for the comment, gives me a new view of Swift. Didn't understand Playground was so embedded with the language and/or vice-versa. Very interesting.
>> And also, Swift hardly "changed the whole industry", as far as I know.
> Yeah your right. Swift didn’t change anything
This is the second time in this comment thread that you have strongly misrepresented something said by someone else. For the sake of a rational discussion, please refrain from such behavior in the future.
Oh, sad. I always get very excited about the stuff Chris does, it’s right on the interesting corner of the leading edge. I hope whatever comes next is a combination of his ideas which finds fertile ground.
They made some really cool tools. The tools did not end up being a financial success or mindblowingly popular like a mainstream programming language, but they were still relatively popular. In that a lot of people at least know what it is. Most projects, no one even knows they exist.
Don't confuse popularity with merit in general, or not becoming massively popular with not being successful.
This is very sad. I’ve been looking for something like Eve to develop difficult user interfaces in and was hoping it would hit 1.0. Wonder if there is any framework trying to do similar things in the JavaScript ecosystem, or targeting that ecosystem with code generation.
I really admire the way that Chris and his team were open about their ideas and their journey. They inspired me and I'm sure many others through this. Thank you Eve.
That's a little sad. I used LightTable for a little while, and thought it was very cool. Was looking forward to Eve, but it was beginning to seem like vaporware, so I guess I can't say I'm surprised by this announcement.
Bret Victor's group is still going (they got their own funding); I believe John Maloney (of Morphic/Etoys fame) is also still going with his own funding. The rest of us are all gone...
None of us really know what programming is, and we don't have institutions at present that will allow people to explore the problem thoroughly. These guys deserved a better culture to work in.
> we don't have institutions at present that will allow people to explore the problem thoroughly
What about academia?
Academia is not a great environment for creating tools that people will actually use, which Eve was definitely trying to be; but "exploring a problem thoroughly" is exactly what academia is for.
Bret Victor has been writing a lot recently about why the current academia can only churn out incremental improvements and not “fundamental research”.
The problem with both startups and academia is the need to meet constant short term goals. Startups have limited runway and often investors who want to see a return. In academia, most funding comes from small National Science Foundation grants which are usually only enough to cover a fistful of months of research and require publishing a paper with novel findings.
Many of the big breakthroughs of the mid-to-late 20th century came from long term fundamental research that was financed either through large public spending (ARPA, NASA) or large monolithic corporations who thought “meh, this’ll probably benefit us in the long term and we’re pretty flush with cash” (Xerox PARC and Bell Labs).
Unless we can find ways to fund long term research (which btw will be high-risk, you have to be OK with getting no return for years or possibly ever) we will not be able to tackle large fundamental problems.
I agree in principle, but the academy is not what it was in the 50s, 60s, and 70s when most computing breakthroughs took place. The whole funding model has changed. Short term thinking is the way of the world now.
This is the same man, Chris Granger, that did a Kickstarter for Light Table, a so-called "next generation IDE," with promotional materials touting as new and revolutionary features that had been standard in IntelliJ, Visual Studio, and even Eclipse for years:
This crosses into personal attack, which isn't ok at any time on Hacker News, let alone kicking someone when they're down. Please read https://news.ycombinator.com/newsguidelines.html and don't post like this again.
Perhaps, but it's pretty easy to make judgements from the sideline about what people should or shouldn't do.
I was the PM on Visual Studio. I can assure you the things I showed in Light Table weren't there. Nor were they in Eclipse, as I studied that as well. You can pick and choose any number of things from our work over the years and say "Hey, but this looks like that" - I'm sure it does, but the question is: does it work like that? Does it enable you the way the things we showed did?
> Light Table was also ultimately abandoned before completion.
Unfortunately even after paying ourselves only just enough to live in the area (around 40% of what my peers were paid), we needed to find a way to eat. No software is ever done, but we had more than 40,000 people using it and Light Table was a stated influence on tools at Apple, Google, and Microsoft. We did our best. Was there more to do? Of course, there always is, but at some point we had to make the hard decision to leave LT in the hands of the community. I'm curious what direction you think we should've gone instead?
> Perhaps Granger should rein in his ambitions somewhat
Maybe, but at the same time, we had people willing to let us try. You paid literally nothing for access to our work, nor to the effects our research had on others, so I'm not sure why there's this much negativity here. We need people testing the fringes because where we are is so far from where we could be. Lots of things did work in Eve and there were cases where we were so much more efficient it hardly felt like we were "programming" at all anymore. We've shared everything we've done and we'll continue telling more people about it in the hope that others can benefit from what we've learned.
And yet you're the one acting hurt, while I'm the one having to shut down the project. That's very different from the HN that rallied us to do our Kickstarter in the first place - the one that encouraged innovation and trying to do crazy things in the off chance that they work. There's so much more to do and I sincerely hope HN doesn't become so cynical and demeaning that it's not worth sharing people's efforts here.
For whatever it's worth, I backed light table on kickstarter and as far as I'm concerned it was well worth the money. I got an interesting prototype editor which has had a big impact on the wider ecosystem (the swift playground comes particularly to mind). I've gotten to read your consistently interesting blog and watch you experiment with different pieces of the language design space.
I can't imagine how hard it must be to shut it down after all these years, but I know whatever comes next in PL and IDE design will owe your project a huge debt. Thank you for everything!
> Graduate school is a great place for "people testing the fringes."
Testing, yes, but not exploring. Grad school gives you one experiment and then you have to start publishing. So if you have something you want to try out, with a direct implementation strategy and a clear set of possible outcomes, then you can do that at grad school.
If you want to try many things and iterate, grad school will not work well. You will be expected to publish digestible “learnings”, and so you will end up skewing your work towards ideas that are likely to produce compelling presentations.
There’s no good place to experiment, in either the corporate or academic world. Your best bet is to move between many domains, trying small ideas in context while also delivering value, and only ever doing your real Hail Mary experiments at home on your own dime.
If you don’t care if you have anyone in your lab, and you don’t have any expenses, you’re pretty free as a tenured faculty.
But if you want students,
you need to find grants for them. And if you need materials/travel/services/etc for your research that’ll require grants too.
I'm happy for all the work you've put out, and I agree we need to try, experiment, and innovate more, and so I found Eve the more interesting experiment. That said, LightTable had more potential for success and impact. I still dream of a more polished and full-featured LightTable editor: a Jupyter notebook on steroids, usable for generalized programming. ProtoREPL in Atom has picked up some of it, but overall there's just not the manpower behind any of these to really flesh them out, and that's sad.
You are being a bit harsh. These were ambitious undertakings with high levels of risk going into them, which, I bet, were well known. I've struggled in this field for 10+ years now and there are lots of dark alleyways that end in walls. Then everyone constantly tells you this has already been done when, no, it really hasn't (at best, the technology is there but so poorly designed that it isn't useful).
If you rein in your ambition, well, isn't that why innovation in the programming-experience field is so stagnant (basically stuck at the Smalltalk level) in the first place? We should be free from disdain to take risks, possibly lose, with a chance of hitting it big.
As a side note, and as someone else working in this space, you do yourself a disservice by telling yourself that work in this space has been stagnant since Smalltalk.
It’s not that programming ergonomics are stagnant, it’s that all of the gains have been made by professionals. And the tooling is nearly impossible to leverage for a beginner environment because all of the assumptions of a pro developer are built in to their design.
The innovations are there: Heroku, Git, CSS, Markdown... these are all triumphs of programming ergonomics. They're just all inevitably co-opted by professional "Foundations" and amended to the maximum level a pro developer can handle. JavaScript is almost useless for beginners now because the tooling is so complex, but full-time front-end devs can crank out HTTP packets like nobody's business.
So you will need to watch those developments and lean on their ideas and some of the low level tooling if you ever hope to build a beginner environment. But you can’t use the tools themselves.
Still, if you start with Smalltalk you will fail. If Chris Granger and Bret Victor started there and failed, you will fail too, because those guys are rockstars.
You need to take the ideas being tested in the pro tools and use them, without adopting the pro implementations or even the interfaces. It's not easy.
None of those are interesting experiences beyond the command line. They represent exactly the stagnant thinking that we are fighting against. Especially Git: it could have come out in the '70s with the interface it relies on. It makes me sad that people see this as progress.
It doesn’t take a rockstar to make progress here, and anyways, I’ve been relatively successful on the academic side (e.g. just got a 10 year influential paper award for my 2007 live programming paper). I don’t think Bret Victor failed so much as lost interest, and he accomplished a great deal in getting people interested in this area again. Chris hasn’t really failed either, he is still young. And let’s not forget all the ex-HARCers...
Also, not all of us see this as a make-programming-more-accessible problem; e.g. that's not my thing, I really want to improve programming en masse. But we all agree the existing way is a dead end and we need to experiment with radically different approaches.
Interesting experiences beyond the command line are game development tools; each iteration tries to make the barrier between artists/content creators and programmers thinner.
Indeed. Also interesting is a look at games themselves, especially those that show a long trajectory from beginner to experienced player with significantly different interaction patterns between the two groups.
It would seem that practically no programmer's tool has ever received the attention to interface detail that is common in the best games.
It took ideas from the 70s, dropped the interesting parts, and was hailed as a revolutionary approach to marking up documents. I.e., the past 30 years of computing have been about narrowing the interface between programmer and computer to the equivalent of a straw (everything as text!) and then trying to build an entire system around that.
+1 - Watching from the sidelines I admire the work both from you and Chris. It is incredibly hard to move away from the current local maximum because of network effects.
Which right now is mostly going to the deep learning train :(. There are some academics working in this area right now, Ravi Chugh, Phillip Guo, to name a couple.
Isn't generalized AI an even more ambitious long-term goal, though? Why program anything? Just tell the computer what you want and it'll make it for you.
That's funny, since Engelbart's work in the mother of all demos was seen as irrelevant by much of the CS community at the time because they thought generalized AI was just around the corner! After 2 or 3 AI winters, they changed their minds.
Generalized AI is like practical fusion, always 20 years away. I mean, it will come eventually, but until then...there is still value in making humans better.
I know, and I rewrote my post several times before submitting it, trying to minimize the harshness. But the pattern with Granger is pretty obvious, isn't it? Grandiose ambitions, over-commitment, and a resulting failure to ship.
There have been plenty of people out there with grand visions who tried and failed the first few times. Then either they give up and tone down their visions, or they keep trying and perhaps eventually succeed.
LightTable and Eve are still very influential and inspiring even if they aren't successful. I am personally glad they existed, as they provide experience and lessons for future efforts.
> There have been plenty of people out there with grand visions who tried and failed the first few times.
Most of them are con men, selling something they know will never work, especially on kickstarter and the like.
This was nothing unique, it's something people have been trying to do since computers were invented and it has failed every single time. COBOL was conceived with the idea that people in other disciplines would be able to do their own programming.
When there is every reason to think you will fail and you ask people for money to try then you start to look a lot like those con men.
If LightTable's features "had been standard in IntelliJ, Visual Studio, and even Eclipse for years", how is that evidence that Granger et al were being overly ambitious? Wouldn't it suggest exactly the opposite?
Vice-versa, if LightTable was overly ambitious, wouldn't that mean it tried to do things that were (too far) beyond the scope of existing technology?
My opinion is almost the opposite: I find Granger's ambitious ideas and presentations stimulating, and I like that they're out there to inspire people to imagine more and experiment more aggressively.
I wish his audiences were a bit less credulous though. Experiments like Light Table and Eve are just that: interesting and potentially useful explorations of design space, but as unlikely as anything else to radically change the discipline of programming overnight.
Can you describe the right-sized ambitions and effective delegation practices that have served you well in your career, perhaps with examples of the resulting successes?
How was Light Table a failed kickstarter? It got funded and Chris delivered exactly what he said he would. You can go and use Light Table today. The reason he stopped working on it was the realization that his ultimate vision for Light Table is something exactly like Eve, and would be impossible with languages like JS, Python, and Clojure.
Since this is your third such comment, it seems that you created this account for personal attack. That's obviously a bannable offense, so I've banned the account. Please don't create HN accounts to break the site guidelines with. We like HN users who like Smalltalk, but not being an asshole is more important.
He raised a seed funding for a programming language, that itself deserves credit. I think with your attitude, many innovators would have quit from their first or second try? Get a life buddy.
Stupid idea. They had a nice platform for dealing with data, then they pivoted into another nice platform for dealing with data, then again. But in the end they wanted to create a visual programming language for non-tech people or something like that. That's too much ambition.