
Interestingly, not much about ML. Surprising for Lisp which, if I understand correctly, has roots in AI...



The AI of the first Winter had nothing to do with ML, rather expert systems and symbolic processing.

Python is starting to look like the Lisp of the second Winter.

https://norvig.com/python-lisp.html


> The AI of the first Winter had nothing to do with ML, rather expert systems and symbolic processing

Between 1988 and 1992 I worked for a UK company participating in a multinational project to use Common Lisp to build an expert-system-building tool. After creating the thing, we worked with clients (internal and external) to try to solve real customer problems. Our conclusions matched those of others, and contributed to the AI winter:

* the rule-based expert systems were extremely brittle in their reasoning

* once you got beyond toy problems with small rule sets, you needed some programming skills in addition to the domain expertise that you were supposedly encoding

* you sometimes spotted the actual algorithms that could be coded in a conventional language rather than rules + inference engine.
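
To make that last point concrete, here is a minimal sketch of the "rules + inference engine" shape being described: a handful of facts, some if-then rules, and a forward-chaining loop that keeps firing rules until nothing new can be derived. (Python, purely illustrative, with made-up facts and rule names; the real systems were far larger Common Lisp programs.)

    # Toy forward-chaining rule engine (illustrative only).
    # Facts are strings; each rule is (name, premises, conclusion).
    RULES = [
        ("r1", {"link-down", "no-route"}, "network-fault"),
        ("r2", {"network-fault", "backup-idle"}, "switch-to-backup"),
    ]

    def forward_chain(facts, rules):
        # Fire any rule whose premises are all known facts, until nothing changes.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"link-down", "no-route", "backup-idle"}, RULES))
    # -> also derives "network-fault" and "switch-to-backup"

The brittleness in the first bullet falls straight out of this structure: if the incoming facts don't exactly match the premise vocabulary, nothing fires at all.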

We eventually abandoned the AI side and kept going with the underlying Lisp, and started to get real leverage in the company, rapidly becoming a prototyping / risk reduction group who delivered usable functionality.

[Edit] We were using Lisp processors embedded in Macintosh hardware, with outstanding (for the time) IDEs and thanks to the Mac interface, we could create some really slick apps for the end users. One of our Lisp systems that got rave reviews internally was a drag-and-drop network modelling tool that replaced a frightening mass of Fortran and data entry spreadsheets. No AI/ML at all, but it really improved the throughput of our network modelling group. As we were a comms company, this got positive reaction from senior management, offsetting the non-return on investment in the rule system.


Are you referring to the NuBus boards mentioned here[0], the TI MicroExplorer or the Symbolics MacIvory? Those look interesting!

https://en.m.wikipedia.org/wiki/NuBus


It was the TI MicroExplorer. We also used Symbolics machines. By the time I left we had switched to conventional Macs running Procyon Common Lisp [0], which had a stunningly good native IDE.

[0] http://www.edm2.com/index.php/Procyon_Common_Lisp


I don't think Python has the same issues. It was very popular as a scripting language long before it became good at numerical and data science work. It has always been free/open source and never required special hardware to run. Even if ML tanks, the impact to Python would be minimal.


Are we not heading for a 3rd AI Winter? The first being in the 1970s and the second in the 1980s and 1990s (which I experienced)?


I don't think there was a post-70s winter. There was just not enough 'there' to over-hype; it was toy problems that did not even try to masquerade as real solutions other than in sci-fi and popular culture. Luminaries in the field definitely made a name by pumping out speculative paper after speculative paper, but IMHO there was more of an ember waiting to spark than there was a fire consuming all of its fuel.

By the late 80s and early 90s you had venture-backed companies, big institutional efforts, grifters who had honed their pitch in academic tenure-track positions before moving to richer waters, and the first real claims being made about just-around-the-corner deliverables that would change everything. Maybe I am jaded from having experienced that same winter, but from what I recall the prior decades were more consumed with people making broad claims to establish intellectual primacy than with claims about what could actually be delivered.


The term showed up in the 80s, but there was an earlier (70s) failure in AI. There were a lot of grand ideas and promises, people really did anticipate AI moving much faster in the 60s and early 70s than it did. Since a lot of ideas (both regarding AI and CS in general) were in their infancy, the limits of computers (fundamental limits) weren't yet fully recognized, but also the hardware was itself creating limits (non-fundamental to the field) that weren't escaped until the 80s and 90s. See chess AIs of the 90s finally "solving" the problem, which would've been technically conceivable (how to do it) in the 70s but totally unrealizable (unless, maybe, you hooked every computer of the time together).


I doubt it's going to happen this time. The current crop of AI research is producing a lot of real results in all kinds of fields: voice transcription, recommender systems, translation, object detection -- all cases where neural networks have gone from research to widespread commercial deployment.

There might be some kind of contraction when people realize they're not going to get HAL-9000, or that you can't use deep learning to replace every white-collar employee. But the results this time are much more "real".

I don't think we will get another AI winter where the entire world completely gives up on the research area.


By 1980's and 1990's, are you referring to the Japanese 5th Generation project?


No, there was a general failure of AI due to overhype in the 1980s, hitting hard in the early 1990s, with some marking a big chunk of the hype as starting after the (non-hype) papers about Digital's XCON.

Some of it was due to non-delivery of hyped predictions, and some of it was due to the sudden disappearance of the military funding that had helped spread the hype.

The 5th generation project was part of the hype, but, at least in the West, a lesser part of the winter.

The original AI Winter was, iirc, related to a UK report in the 1960s/1970s.

The second AI winter killed a lot of advancement in computing in general, compounded by new waves of programmers coming from very limited environments who were never exposed to more advanced, capable techniques; by the time they got to play with them, the AI winter was in full swing and those techniques had been dismissed as "toys".

While expert systems in naive form were definitely not the answer to many generic problems, they are often part of the answer, and the number of times I hit cases of "this could be done as a rule system and be clearer", or even "Why can't I depend on features of 1980s XCON in 2020? It would have saved me so much money!", is simply depressing.

(N.b. I suspect a significant part of why many modern shops don't even look toward features like XCON - which ensured correct configuration in computer orders - is that the norm became that the customer pays for problems like missing necessary pieces.)
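
As a rough illustration of what "ensured correct configuration in computer orders" means in practice, a configuration checker is just a pile of small constraints over an order. (Toy Python sketch with hypothetical parts and limits; the real XCON was a very large OPS5 rule base at Digital, nothing like this small.)

    # Toy configuration-checking rules (hypothetical parts and limits).
    ORDER = {"cpu": "vax-780", "memory_boards": 2,
             "disk_controllers": 1, "disks": 3}

    RULES = [
        ("each disk controller drives at most 2 disks",
         lambda o: o["disks"] <= 2 * o["disk_controllers"]),
        ("an order needs at least one memory board",
         lambda o: o["memory_boards"] >= 1),
    ]

    def check_order(order, rules):
        # Return the description of every rule the order violates.
        return [desc for desc, ok in rules if not ok(order)]

    print(check_order(ORDER, RULES))
    # -> ['each disk controller drives at most 2 disks']

The appeal is that the constraints stay readable to the domain expert, which is exactly the "this could be done as a rule system and be clearer" point above.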


Interesting. Thanks for the detailed answer.


The focus of "AI" has shifted over time from symbolic processing (something lisps star at) to neural network machine learning, which requires more brute-force power.


I think the issue isn't "brute force power" (which Python doesn't really have compared to CL if you look at the language itself), but rather the quality and completeness of numerical routines and GPU support. Matlab was very popular for early ML because it had the former.


Python does the actual heavy lifting with C or Fortran, so I'd say brute force still applies quite a bit.
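
A quick way to see that (a minimal numpy sketch; exact timings depend on your machine and which BLAS numpy is linked against): the same matrix multiply done by numpy, which hands the work to compiled BLAS, versus interpreted Python loops.

    import time
    import numpy as np

    n = 200
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.perf_counter()
    c_blas = a @ b        # numpy dispatches this to compiled BLAS
    t1 = time.perf_counter()

    # The same multiply written as interpreted Python loops.
    c_loop = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
              for i in range(n)]
    t2 = time.perf_counter()

    print(f"numpy/BLAS: {t1 - t0:.4f}s  pure Python loops: {t2 - t1:.2f}s")

The gap is usually a few orders of magnitude, which is the brute force in question.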


Same for the Lisp libraries, which rely on CUDA, OpenBLAS, etc.


I would say AI's roots and research are in Lisp, as 99% of all the early research work was done in Lisp.

Lisp was the language that people like Richard Stallman or John McCarthy (the inventor of Lisp) used at the MIT AI laboratory:

https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artif...

Everybody used Lisp there, and young people who learned from the masters learned Lisp too.

But that was AI 1.0. Then came the AI winter, and the 2.0 spring arrived with GPUs that were programmed in C dialects and gave incredible levels of raw power.

So, as high-level access to low-level C code, Python was picked by most people.


It would be great if Python had a PEP to automatically compile down to a binary executable file, and resolve all dependencies for it.



