most groundbreaking inventions / theories are "discovered" rather than invented. do you feel the same way about lisp?
another one: lisp and c are both (rightly?) considered peaks of language design, yet they are predicated on widely different underlying philosophies: one models the abstract nature of computation, the other the abstract nature of a machine.
now, given the current state of development, would you consider a middle ground to be the most fertile territory for the next big language?
What do you mean by "big language?" Importance (a subjective value) or most widely used (overall or for specific domains) or something else?
Lisp is a functional language; C is an imperative language.
Wouldn't a language "in the middle" risk being a compromise, effectively diminishing or negating the advantages of each?
Lisp is not a functional language except in the trivial sense that it supports HOFs. Common Lisp is not even functional in the sense of Scheme, where purely-functional programming is not enforced but seems to be encouraged (disclaimer: IANAS). Common Lisp is a pragmatic, multiparadigm language in which functional, imperative and other styles can and should be used as appropriate. In a very real sense Lisp is that language "in the middle."
"LISP has a partially justified reputation of being more based on theory than most computer languages, presumably stemming from its functional form, its use of lambda notation and basing the interpreter on a universal function."
- John McCarthy, LISP - notes on its past and future (Page 1)
Proceedings of the 1980 ACM Conference on LISP and Functional Programming, Stanford University, California, United States, 1980.
My guess would be through some sort of computer club / group. I got to meet Ken Thompson because he was visiting the university I worked at, and the Unix users group on campus convinced him to stop by. Join these sorts of groups, get on mailing lists, ask around.
Luck, mainly. I'm taking an AI class at Stanford, and it turns out that the professor knows McCarthy. The class is philosophical, not technical, so I don't know how all of these technical questions are going to be received, but I can't not ask them.
There are technical AI classes here that are required for a CS major. This one is an elective called "Can Machines Think? Can Machines Feel?". We read famous AI papers (including McCarthy's "Ascribing Mental Qualities to Machines") and talk about what's possible and what's not.
Does Dr. McCarthy consider himself politically conservative, libertarian, or something else? Do any current presidential candidates or other politicians come close to reflecting his views?
Lisp was originally meant to be a language for AI research, but as we all know, no progress has been made in this field since Lisp came out. I'm curious what JMC thinks about this. Does he think, for example, that there is simply no connection between Lisp and intelligence? Or maybe it's too early to talk about this?
> but as we all know, no progress has been made in this field since Lisp came out.
Although I totally disagree with this, I think it would be a great question for McCarthy.
Also, if I were to meet with McCarthy, I would ask him about his fascinating opus on the sustainability of human development. (Summary: (a) nuclear power will be the future; (b) with huge amounts of energy, humanity can do anything.)
Nonsense. There has been plenty of progress in AI research. The explanation usually given is that once a solution to an "AI" problem has been implemented, it is no longer considered an "AI" problem: e.g. voice recognition, facial recognition, compilers, chess-master slayers, etc.
By that logic, since humans can perform simple arithmetic operations, are calculators the solution to yet another "AI problem"? The same goes for voice/face recognition systems (which, in fact, have never given satisfactory results, so you can't even call them "solved").
> "Lisp was meant to be a language for AI research originally": false premise

That "false premise" claim is itself a false premise. From McCarthy's original paper:
> A programming system called LISP (for LISt Processor) has been developed for the IBM 704 computer by the Artificial Intelligence group at M.I.T. The system was designed to facilitate experiments with a proposed system called the Advice Taker, whereby a machine could be instructed to handle declarative as well as imperative sentences and could exhibit "common sense" in carrying out its instructions.
Saying that nothing has happened in AI since seems a bit...exaggerated. But I don't know much of anything about the field - anyone else care to comment?
I'd say that the algorithms are key in AI, not necessarily the language. You can program neural networks, genetic algorithms, search in Blub.
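To make the "algorithms, not language" point concrete, here is a minimal sketch of one of the algorithms mentioned above, a genetic algorithm, solving the toy OneMax problem (evolve a bitstring toward all ones). All names and parameters here are made up for illustration; nothing in it depends on any particular language's features.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits; the GA tries to maximize this."""
    return sum(bits)

def mutate(bits, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    """Single-point crossover of two parent bitstrings."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(length=32, pop_size=50, generations=100):
    # Random initial population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection; best always survives
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=one_max)

random.seed(0)  # fixed seed so the run is reproducible
best = evolve()
```

The same few lines translate almost mechanically into Lisp, C, or Blub: the interesting part is the selection/crossover/mutation loop, not the syntax around it.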
The early AI programs tended to solve toy problems. No one had thought much then about the implications of the curse of dimensionality: that each entity you add to the problem (say, throw another object into your object-manipulating robot's environment) increases the problem space exponentially (or even factorially).
But you're right, there has been a ton of progress in AI. The initial bias toward ontologies, expert systems, and top-down algorithms has given way to bottom-up systems that are data-driven rather than abstraction-driven.
One major example is using neural networks, SVMs, RBFs to discover implicit features in a data set rather than depending on an expert to code up those features explicitly. Experts don't scale, but data will always be with us. Thus we've seen increasing interest in information retrieval as opposed to ontological knowledge engineering.
But a lot is going on in the field even today. I found this talk very interesting:
Well, we have vehicles that drive themselves across deserts, but not machines that will argue with you over whether or not Will Ferrell is the greatest comedian ever. I say that we are making progress, though not as much as some had hoped.
My view may sound rather radical, but I think not everything that's called "A.I." has to do with intelligence. In other words, if progress is reported by an AI researcher, it doesn't mean anything.
There are many possible forms of intelligence, and Turing machines probably represent just one of them. If we are talking, however, about intelligent survival machines like ourselves, then we are not even close to understanding how they work.
> as we all know, no progress has been made in [AI research] since Lisp came out
You may have some unorthodox views on what constitutes "AI research" or how much progress has been made, but prefixing your comment with "as we all know..." is obnoxious. The consensus view is certainly not that no progress at all has been made in AI since 1958.
Care to name anything we use in everyday life that came out of AI research in its 50 years of existence? OK, except CAPTCHA maybe (just kidding).
honestly, i think hard AI (as propounded in various SF genres) will always be "the next 20 years away". a Borg-like future with man-machine symbiosis seems more likely, i.e. using machines to _augment_ human capacities rather than supplant them.
did you understand the lambda calculus, or was your whole awesome language creation called LISP just a fluke?
in terms of AI, what will the next big steps be? what kinds of models will we move towards?
what do you think future programming languages will look like? let's say in 20 years, and then in 50 years?
will there be more single-paradigm languages, or will most languages be multiparadigm?
The first LISP interpreters were indeed flawed; see the 1995 footnote in McCarthy's famous paper. This is not uncommon when inventing something new. Do you blame Edison for not using tungsten in his first light bulb?