
"I haven't read them but I'm familiar with the area of discussion." "What we are doing now is a thundering herd attack on the global mind-space of algorithms+data."

Nice way to put it. OK, you really do need to read those references, since most programming is supposed to suck if it's done by humans. It used to bother me when I was younger, but I accept it as inevitable. Much like evolution itself producing tons of waste on its way to overall dominance of most of the search space. Once you grasp the basic principles driving it, you must then work within them to squeeze as much good engineering as possible into those constraints. It's the only way to have high impact or to shift segments of the herd in better directions. Hell, it's a little like CLP for people & development processes. ;)

I'm including the original essay, a great commentary tying it into historical proof, and one by ex-high-assurance guy Steve Lipner at Microsoft:

http://www.dreamsongs.com/RiseOfWorseIsBetter.html

http://yosefk.com/blog/what-worse-is-better-vs-the-right-thi...

https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-e...

" The Prolog version is less than a page of code and half of that is direct translations of the puzzle hints to CLP constraints (the other half just sets up the variables and such.)"

It was beautifully concise compared to the others. Of course, they were trying to re-invent the parts, like the execution strategy, that are hidden in Prolog. Your point seems to be that nobody taught them this or they didn't learn enough. That comes from a combo of the education system training people for the workforce and the popularity of imperative languages in FOSS. The environment is the real problem. Changing that leads us right back to the above essays. Two good examples are OCaml and Clojure. One gives people escape hatches back to imperative style on their way to gradually learning functional programming; it's done better in uptake than most FP languages. The other made some changes to LISP while dropping it on top of one of the evolutionarily strongest ecosystems. It got uptake no other LISP had up to that point, and a subset of its users also started learning other LISPs.
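To make the "hints become constraints" point concrete, here's a toy sketch in Python rather than Prolog, with made-up hints for a tiny zebra-style puzzle: each hint is a predicate over a candidate assignment, and the solver is brute-force search over permutations. Real CLP prunes variable domains instead of enumerating, so this only conveys the declarative flavor, not the efficiency.

```python
from itertools import permutations

# A toy zebra-style puzzle: three houses (left to right), each assigned
# a unique color and a unique pet. Each hint below is stated directly
# as a predicate over the assignment, roughly the flavor of stating
# CLP constraints in Prolog.

def solve():
    for colors in permutations(["red", "green", "blue"]):
        for pets in permutations(["cat", "dog", "zebra"]):
            # Hint 1: the red house keeps the dog.
            if pets[colors.index("red")] != "dog":
                continue
            # Hint 2: the zebra lives immediately right of the green house.
            if pets.index("zebra") != colors.index("green") + 1:
                continue
            # Hint 3: the cat is not in the leftmost house.
            if pets.index("cat") == 0:
                continue
            return dict(zip(colors, pets))
    return None

print(solve())  # {'red': 'dog', 'green': 'cat', 'blue': 'zebra'}
```

Half the code is the hints, the other half sets up the search, which mirrors the 50/50 split described for the Prolog version.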

"They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster."

" The machines are vast and fast enough now. Why is everybody..."

It would've been. Now we get to the point where you find that you may be caught in the same problem you're accusing them of. I can't remember Prolog or most programming now due to my head injury. Yet I remember quite a bit about the market & what people did with it back when what you suggested was tried on a large scale by smart people. That was the AI boom, where they coded in LISP, Prolog, Poplog, OPS5, etc. I read all that, built expert systems with some of it, and tried to stretch it in new ways. I confirmed for myself that it was very difficult to apply it to all kinds of problems where other models allow concise expression of the problem or dramatically higher efficiency. We collectively needed that development pace or efficiency to be competitive. The Japanese even threw piles of money and brains at Prolog-specific hardware in their Fifth Generation project to bootstrap the goal you're talking about. All of that failed miserably. The AI Winter finished most of those companies off, with a few pivoting and surviving.

So, in case you're wondering what we learned, it was that you don't want to write all your code in Prolog, even, in some cases, code doing logical search. The default at the bottom of the stack should not be Prolog. You want to use the most powerful language you can that supports DSL's & FFI's. You then embed things like Prolog in it, to use where they're ideally suited to the problem. Anything that's easier to handle a different way is done differently. LISP and REBOL were the main proponents of this approach, with Allegro CL still bragging about their Prolog implementation. "sklogic" added Standard ML to his LISP for coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog. Haskell has recently joined the fray, with a number of DSL's letting one mix the benefits of Haskell and low-level languages like C; Galois's Ivory language & Bluespec come to mind. If a tool such as SWI-Prolog is used, the typical case should be prototyping & verifying Prolog source that's then embedded in a better tool like those. There are times, like the Zebra puzzle, NLP, etc., where constraints allow it to be used standalone. There's also the possibility of doing things in reverse: extending Prolog with foreign primitives instead. More space to explore in R&D.

The point is, that was the hard-learned lesson of decades of failures to do everything in Prolog. It just didn't work. It was ideal for some things but slow, hard to write, and hard to maintain for others. The same was true for many models. Hence, a unified tool can express and integrate each of those models to let the builder use the best tool for each job. Alternatively, data formats, calling conventions, or protocols are standardized to integrate separate programs using separate models. The high-assurance field recently did something similar for verification in the DeepSpec program that led to the CertiKOS OS: a bunch of DSL's are used. Prior efforts tried to build & verify it all in one tool like FOL or HOL, but the work was miserable.

" I've maintained for years that anyone who can solve a sudoku puzzle has the intelligence to learn to program."

I've never thought about it, especially in light of Prolog. Very interesting. Now you've got me wanting to drop Prolog on some Sudoku fan sites to see what happens. It'd have to have syntactic sugar, libraries for common things, and a great tutorial so the start is painless. I'll hold off for now but keep it in my mental periphery in case I see someone messing with Sudoku.

"P.S. I'm still working on replying to your kind and excellent email."

Cool. :) It also reminds me I still need to take black and yellow highlighters to that book. I'll probably take it to work with me to mess with on my lunch break.




    It used to bother me when I was younger, but I accept it as inevitable. Much like evolution itself producing tons of waste on its way to overall dominance of most of the search space. Once you grasp the basic principles driving it, you must then work within them to squeeze as much good engineering as possible into those constraints. It's the only way to have high impact or to shift segments of the herd in better directions. Hell, it's a little like CLP for people & development processes. ;)
Reflecting on that calms me down a little. Evolution has no purpose, so it cannot be inefficient. I think my problem may well be in unrealistic expectations. ;-)

    I'm including the original essay, a great commentary tying it into historical proof, and one by ex-high-assurance guy Steve Lipner at Microsoft:

    http://www.dreamsongs.com/RiseOfWorseIsBetter.html

    http://yosefk.com/blog/what-worse-is-better-vs-the-right-thi...

    https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-e...
I'll read them ASAP. I'm changing jobs at the moment so I'll either have little to no time or too much.

    ...re-invent the parts, like the execution strategy, that are hidden in Prolog. Your point seems to be that nobody taught them this or they didn't learn enough. That comes from a combo of the education system training people for the workforce and the popularity of imperative languages in FOSS. The environment is the real problem. Changing that leads us right back to the above essays. Two good examples are OCaml and Clojure. One gives people escape hatches back to imperative style on their way to gradually learning functional programming; it's done better in uptake than most FP languages. The other made some changes to LISP while dropping it on top of one of the evolutionarily strongest ecosystems. It got uptake no other LISP had up to that point, and a subset of its users also started learning other LISPs.
Part of it is education, part of it is environment, and part of it is just the state of the industry: what counts as "professional" education and behavior is kinda grotesque compared to most other groups of people who call themselves "engineers". I'm working with a guy who has never heard of Alan Kay. What's worse is that he doesn't care. He's not ashamed of his ignorance. Yet he has zero qualms about pulling down six figures as a hotshot developer.

When I finally learned LISP I got mad. I didn't even learn it: I read the TOC of pg's book. That was all it took. My experience and brain cells were so primed that I "got" LISP just from that table of contents. And for about twenty minutes I was really pissed off at all of my fellow computer geeks en masse. How much time and energy, how much sweat, blood, and tears shed? We could have just been using LISP the whole time!

I really, really think it's time we collectively turn our attention from chasing our brain-tails and focus on the real issue: how to map human intention to automation in the most efficient manner. If we can just get out of our own way, I think this physical "human condition" is mostly licked. All our problems are psychological now.

But, uh, I rant...

    "They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster."

    " The machines are vast and fast enough now. Why is everybody..."

    It would've been. Now we get to the point where you find that you may be caught in the same problem you're accusing them of. I can't remember Prolog or most programming now due to my head injury. Yet I remember quite a bit about the market & what people did with it back when what you suggested was tried on a large scale by smart people. That was the AI boom, where they coded in LISP, Prolog, Poplog, OPS5, etc. I read all that, built expert systems with some of it, and tried to stretch it in new ways. I confirmed for myself that it was very difficult to apply it to all kinds of problems where other models allow concise expression of the problem or dramatically higher efficiency. We collectively needed that development pace or efficiency to be competitive. The Japanese even threw piles of money and brains at Prolog-specific hardware in their Fifth Generation project to bootstrap the goal you're talking about. All of that failed miserably. The AI Winter finished most of those companies off, with a few pivoting and surviving.

    So, in case you're wondering what we learned, it was that you don't want to write all your code in Prolog, even, in some cases, code doing logical search. The default at the bottom of the stack should not be Prolog. You want to use the most powerful language you can that supports DSL's & FFI's. You then embed things like Prolog in it, to use where they're ideally suited to the problem. Anything that's easier to handle a different way is done differently. LISP and REBOL were the main proponents of this approach, with Allegro CL still bragging about their Prolog implementation. "sklogic" added Standard ML to his LISP for coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog. Haskell has recently joined the fray, with a number of DSL's letting one mix the benefits of Haskell and low-level languages like C; Galois's Ivory language & Bluespec come to mind. If a tool such as SWI-Prolog is used, the typical case should be prototyping & verifying Prolog source that's then embedded in a better tool like those. There are times, like the Zebra puzzle, NLP, etc., where constraints allow it to be used standalone. There's also the possibility of doing things in reverse: extending Prolog with foreign primitives instead. More space to explore in R&D.

    The point is, that was the hard-learned lesson of decades of failures to do everything in Prolog. It just didn't work. It was ideal for some things but slow, hard to write, and hard to maintain for others. The same was true for many models. Hence, a unified tool can express and integrate each of those models to let the builder use the best tool for each job. Alternatively, data formats, calling conventions, or protocols are standardized to integrate separate programs using separate models. The high-assurance field recently did something similar for verification in the DeepSpec program that led to the CertiKOS OS: a bunch of DSL's are used. Prior efforts tried to build & verify it all in one tool like FOL or HOL, but the work was miserable.
Yeah, I get it, I do. I'm old enough to know about things like "Fifth Generation" computers and the AI Winter, and I agree with the pragmatic issues. I still have what I guess amounts to faith that there is a better way. Personally, I think I'm onto something with a system based on George Spencer-Brown's "Laws of Form" and Manfred von Thun's "Joy" notation, implementing something similar to Hamilton's HOS but without the deficiencies. I have no idea if I'm a crackpot or not here, but I think I see a glimmer.

At the very least, we today have the benefit of hindsight, if we'll avail ourselves, eh?

    " I've maintained for years that anyone who can solve a sudoku puzzle has the intelligence to learn to program."

    I've never thought about it, especially in light of Prolog. Very interesting. Now you've got me wanting to drop Prolog on some Sudoku fan sites to see what happens. It'd have to have syntactic sugar, libraries for common things, and a great tutorial so the start is painless. I'll hold off for now but keep it in my mental periphery in case I see someone messing with Sudoku.
To me, the essential piece was learning the logical unification algorithm by walking through mrocklin's port of Kanren to Python. Once I understood how resolution worked, the next time I was figuring out some puzzle (in programming, as it happens), I was startled and pleased to recognize the resolution/unification process as my mind performed it to solve the puzzle.
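For anyone curious, the kernel of unification is small enough to sketch from memory. This is a minimal hand-rolled version in Python, not mrocklin's actual code (the `Var`/`walk`/`unify` names are just conventional kanren-style choices), and it omits the occurs check a real implementation would want:

```python
class Var:
    """A logic variable; only identity matters, not a name."""
    pass

def walk(term, subst):
    # Chase variable bindings in the substitution until we reach
    # a non-variable or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var) and a is b:
        return subst                      # same unbound variable
    if isinstance(a, Var):
        return {**subst, a: b}            # bind variable a
    if isinstance(b, Var):
        return {**subst, b: a}            # bind variable b
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):            # unify compound terms pairwise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None      # constants must match exactly

# Unifying (x, 2) with (1, y) should bind x to 1 and y to 2.
x, y = Var(), Var()
s = unify((x, 2), (1, y))
print(walk(x, s), walk(y, s))  # 1 2
```

In kanren proper, goals compose these substitutions lazily across a search tree; this just shows the kernel everything else is built on.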

I'm not sure that folks could go directly from Sudoku to Prolog, although I would wager that anyone good at Sudoku would be able to learn Prolog under some conducive conditions. In my experience the limiting factor is interest. I've told one friend of mine several times now that "the only difference between you and a 'real programmer' at this point is number of lines of code written!" but he has other priorities or something...

----

edit: the quoting came out weird, but I'm going to leave it. (And now I know how to do that quoting.)



