Marvin Minsky's Homepage (media.mit.edu)
69 points by jessup on Aug 16, 2015 | hide | past | favorite | 51 comments



An interesting quote from his biography page next to his Perceptrons text:

> ... Many textbooks wrongly state that these limits apply only to networks with one or two layers, but it appears that those authors did not read or understand our book! For it is easy to show that virtually all our conclusions also apply to feedforward networks of any depth (with smaller, but still-exponential rates of coefficient-growth). Therefore, the popular rumor is wrong: that Back-Propagation remedies this, because no matter how fast such a machine can learn, it can't find solutions that don't exist. Another sign that technical standards in that field are too weak: I've seen no publications at all that report any patterns that order- or diameter-limited networks fail to learn, although such counterexamples are easy to make!


Is Old Man Minsky the only one who still thinks that machines should do AI instead of machine learning? He certainly seemed to believe that in 2007. Who is still on his side?

http://snarkmarket.com/blog/snarkives/societyculture/old_man...


Just visit any mainstream AI conference like IJCAI. You will see many things in AI out of reach of current ML. ML is just one track.

http://ijcai-15.org

Also, even in things like question-answering, pure ML is not the state-of-the-art. E.g., IBM Watson had a bunch of different algorithms.

http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isn...

We will eventually get there with a pure learning system, but not yet. Till then, we need less FUD. :)


It seems everyone took my comment to hint that Minsky was wrong. I did not mean to indicate that. I wanted to know who was still doing non-ML intelligent machine stuff, since ML is all I ever hear about in HN.


Ah, sorry, my bad too :)

HN is an echo chamber when it comes to certain topics. IJCAI, AI Magazine, the AGI conferences etc. would be the best places to look at cutting-edge scientific work on intelligent machines that includes non-ML work. The research reported in these places is not always flashy like what the HN crowd loves, but it represents unsolved problems being solved in a variety of ways.

The Berkeley course on AI on EdX is also a good option for understanding what is happening in the field. A lot of $$ goes into non-ML stuff. Don't get me wrong: ML is important, but right now it's not the only game in town. It would be nice to have pure learning machines, but we are quite far from that (despite all the hype and genuine exciting developments in recent years).


> HN is an echo chamber when it comes to certain topics.

There's a way to change that: educate the community by submitting meaningful stories on new topics. Those are the best submissions and we're always on the lookout for them.


In the field of machine translation, I'd say GF/MOLTO ( http://grammaticalframework.org/ ) is pretty cutting-edge non-ML AI (though the definition of what's AI keeps changing, I don't know if people would call this stuff part of AI any longer?).


Funny, GF is what made me reconsider my views! Amazing that a small team of non-ML academics outside the US can beat an army of ML researchers in companies like Google, MSFT etc.


There are still others working on artificial general intelligence, but it's definitely more marginal, and half the time I find someone doing something interesting it's some work to figure out whether they are crazy or brilliant :) E.g. Ben Goertzel: https://en.wikipedia.org/wiki/Ben_Goertzel


http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDrea...

It's hard to tell. But the thing that seemed most concerning was that at one point, they suffer some big setback and change things around. Except he's pleased that a large amount of the code is still useful, and they're doing maintenance work on it. I have no real basis, but my instinct says that if dealing with e.g. socket code is even remotely a concern at all, you're probably not actually solving real AI problems.

Actually, the bigger thing is that it's just highly improbable they were so close to creating AI and for lack of a few dozen megabucks they aren't able to do so. It's such a tiny amount of money to supposedly bring on the singularity.

Especially when you see he was selling a Market Predictor product. It's like those scams where they're going to take your money and buy gold in the Caribbean and bring it back so ya get 2x back. If you actually had market predicting software, I dunno, you could ... put your own money in? Even more recently, Peter Thiel was funding MIRI right? So if anyone close thought this was actually in reach and just needed a bit more Java code to be written, it seems rather probable they'd be able to find _some_ funding. Shit, if you really believed it'd bring about a positive GAI then you'd probably work on it in the open, find some way of completing it over a decade or two (barring severe procrastination and motivational issues).

None of this adds up. I'm not sure what he was doing at SIAI/MIRI, but I've no real clue wtf they actually do anyways.

Edit: http://wp.goertzel.org/important-research-avenues-on-my-mind...

OK so, "important research" avenue. Apart from making AGI so we can have robots (wtf?). And growing brains in a vat. We should apparently just use Google's "databases of text" to dev a search engine that "really understands stuff". As if Google hasn't though of this "avenue". Oh, and cold fusion and femtotech. I think my estimate of this guy actually making AI has dropped significantly. "Avenues of research" eh? You'd have better luck reading the Player's Handbook and picking your top 8 favourite spells you wish were real.


That the fashionable opinion has turned against his beliefs so quickly shouldn't be used as evidence he's wrong. If anything, it shows that the jury is still very much out.


Huh, I don't think he's wrong. I just wonder who is still working on AI as AI.


Douglas Hofstadter is a clear example. Here's a 2013 article about him: http://www.theatlantic.com/magazine/archive/2013/11/the-man-...


I appreciate the article. That was an interesting read. Might have to look at that book at some point.


Machine-learning is a kind of AI, right? Assuming you mean human-style general intelligence, it's more interesting to me since it helps us understand ourselves, and we have a working model to work from.


No, ML is basically stats when you have lots of computing power. Classical AI tackled very different kinds of problems. There used to be a "war" of ideas between connectionists (which kind of turned into ML when neural networks no longer tried to be remotely biological) and computationalists, whom Minsky would probably be closer to. He wants us to understand how minds work in order to model them better.

With ML you don't really try to understand. You throw a dumb and simple-minded algorithm at lots of data and get back good results. You don't try to model all of the complexities of an intelligent problem.

So, no, I would not say ML is a kind of AI.


Yes, it is AI. You could quickly go through the table of contents in the classic Russell and Norvig book "AI: A Modern Approach" to see what is considered AI. To be classified as AI, the mode of learning (statistical or not) is not as important as the notion of what an intelligent "agent" is supposed to do. There are quite a few definitions/statements of this notion there. One of them (again from the book) is that AI must "act humanly" - which Kurzweil describes as: "The art of creating machines that perform functions that require intelligence when performed by people". Note that the behavior of the AI is stressed, not how it produces this behavior. Machine Learning falls under this categorization of AI (Section 1.1 in the 3rd edition of the book).


You can disagree without calling him names.


That's a term of endearment he uses with his students.


[flagged]


[flagged]


"Please resist commenting about being downvoted. It never does any good, and it makes boring reading."

https://news.ycombinator.com/newsguidelines.html


I apologize. Downvoted comments are usually interesting, so I never ignore them, but because my eyesight is slowly failing I find the greyed-out comments very hard to read.


> I find the greyed out comments very hard to read.

We recently changed it so that if you click on a comment's timestamp to go to its page, it won't be greyed out anymore. Still an extra hop, but easier than it used to be.

(And thanks for responding so politely to dredmorbius' reminding you about the guideline.)


Thank you.

So long as you're considering styling changes: http://stylebot.me/styles/2945


That crosses into the kind of look-and-feel change that we're unlikely to make to HN, partly because no two users agree about what would improve it.


What? People online disagree about styling? That doesn't happen, does it? ;-)

Breaking that into a set of discrete changes, some of which I should update ... ah, here's what I'm driving presently: http://pastebin.com/KhdhzY6b

1. Fonts styled in user default size preferences and specified in rem units. This addresses issues with scaling across displays with different dot pitch / pixel density. "html { font-size: medium; }"

2. Increased contrast between foreground and background. HN's margin / main body colors are pretty much precisely the inverse of what they ought to be.

3. Most of the rest of the changes are niggling. My own preferences, if you will (the stylesheet's primary consumer is: me). And it's no skin off my nose what HN uses -- I've solved my problems.
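For concreteness, the first two changes might look something like this in a user stylesheet (the selectors and color values here are illustrative guesses, not HN's actual markup or palette):

```css
/* 1. Size text from the user's browser preference, in rem units,
   so it scales sanely across displays of differing pixel density. */
html { font-size: medium; }
body { font-size: 1rem; }

/* 2. Increase foreground/background contrast: light content area,
   darker surrounding margin (roughly the inverse of HN's defaults).
   Selectors and colors are hypothetical. */
body         { background-color: #777777; }  /* page margin */
.main-column { background-color: #ffffff;    /* content area */
               color: #111111; }
```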

But ... if I may, your response is "any change would annoy someone, so we can't change anything". I'm finding that ... curious coming from a tech/startup site.

I do appreciate HN's not hopping on the latest trend. I'm pretty underwhelmed by most online Web design (it's the problem, not the solution). But some principles borrowed from Readability / Instapaper / Pocket strike me as generally sane.

And there's always A/B testing.


> if I may, your response is "any change would annoy someone, so we can't change anything". I'm finding that ... curious coming from a tech/startup site

HN isn't a startup. We're not after rapid growth. Indeed, rapid growth would kill the things we care about here.

I do understand why my comment sounded like your description, but that was an artifact of haste on my part rather than a true picture of what we think.


And I didn't mean to imply that HN/YC is a startup, but rather a company that is steeped in the startup ethos. Part of that's "growth", yes. But the other part is "build a better mousetrap".

I'm actually something of a fan of Berkshire Hathaway's homepage: http://www.berkshirehathaway.com/

Yeah, it uses explicit font sizing, but that's trivially zoomable, and the contrast works. These days I'd probably suggest they swap the tables for a flat structure with CSS columns, in a responsive design based on display width. Preferably in rems, though those can have problems necessitating fallback methods:

https://ello.co/dredmorbius/post/JifX_Y90GmFyKvVwgD3qKQ https://d324imu86q1bqn.cloudfront.net/uploads/asset/attachme...
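As a sketch, that table-free version might flow a flat list of links into responsive columns (the class name is made up; the px line is the fallback for browsers without rem support):

```css
/* Hypothetical flat structure: one list of links, flowed into
   as many 15em-wide columns as the viewport can fit. */
.link-list {
  font-size: 16px;     /* fallback where rem isn't supported */
  font-size: 1rem;
  column-width: 15em;  /* browser picks the column count */
  column-gap: 2rem;
}
```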

Anyhow: I've made my pitch. You've seen it. I've got something that works for me. I think it might improve HN, but that's your call. Thanks for your time, and I appreciate pretty much all you do, particularly the editorial management/moderation on HN. S/N here has stayed quite good over the years.


Depending on your browser / platform, you may be able to use a browser extension to modify site CSS. I use Stylish for this (Firefox/Chrome).

Or: copy/paste text into a text editor or terminal window.


Huh, so many downvotes. Because I called him "Old Man Minsky"? That's a term of endearment.

http://donbot.com/DesignBot/Bibliography/Bib02MinskyDiscover...

https://books.google.ca/books?id=tNZLF8gtx-EC&pg=PA82&lpg=PA...



Anyone have opinions on his "The Emotion Machine"? I'd read "The Society of Mind" as a teenager and it was hugely influential on me—but I read a few pages of Emotion Machine last year, it felt very different, and I ended up not continuing.


It was designed to be more continuous, and flowy. I believe Marvin, during one lecture, mentioned that a younger audience preferred it.

I don't compare the two. They are different presentations of an idea. I find reading them to be a great way to think deeper, and ask better questions about, the ideas abstracted.

Whenever I read TSoM and EM, I always feel like I missed so much -- this feeling seems to only increase read after read.


Why would you put your relatives' homepages and email addresses on your homepage? It strikes me as strange, but I'm sure there's a sensible reason.


You wouldn't understand because you didn't experience the web when it was new and friendly. Before we littered it with cat pictures, we had links to family and friends pages. We had our email on it too before the spam wars.


Exactly. Also there were no Facebook- or Twitter-type services. If someone got an account some place to create a website (like, say, a university professor), that was rather special, and they would also use it to host their family's pictures, maybe have a section for their wife's cake-decorating hobby, and so on.


I think this was even true in Facebook's early days — remember how everything was completely public at first (and by "at first" I mean when it was already launched globally, just before it became a hype)? I remember being utterly surprised at seeing the phone numbers of all the girls I knew out in the clear. So I guess we were already no longer naive about the web.


Ahhhh! That's the thing. That makes total sense, thanks!

It was a matter of age. I was just a kid. No wife with a hobby or digitizer for photos of a family I didn't have.

As a kid I was mainly interested in music, porn, games, and weird stuff.


> You wouldn't understand because you didn't experience the web when it was new and friendly.

Yes. Yes I did. I bathed in the firehose when it was just dribbles. I was first online in 1993 when I was still in high school. I remember finding porn on people's personal pages. I remember the nascent scribbles of some of my favorite tech journalists. I remember looking at my URL history file of 3 megs, and now I know most of those places are gone. I certainly experienced the web when it was fresh.

However I didn't experience it in an academic or professional setting. I think that's the delta there, er, the change. Maybe you don't know what a delta is ;)


Homepages can be useful as a handy filing page for information you want to reference (and allow others to access), particularly in a world where Web search wasn't particularly effective and the overall size of the Web was limited.

I still maintain local-only pages with similar types of information.

The Web's changed a bit, and grown some, over the past 25 years. Minsky's been around since well before then.


> Homepages as a handy filing page for information you want to reference,

Yes, the literal etymology of "homepage". It was not just the page you wanted to see when you logged on. It was not just the page you wanted people to see first from you. It was where you did all those things online that were important to you. That's why old ones look kinda like lists. Well, that's also a discussion about design, and the evolution of symbols.


The minimalist HTML style reminds me of Stallman's site

https://stallman.org/


Pretty much everyone in academia does this, because they know how to make functional, timeless websites that are compatible with all devices and browsers in the present, past, and future, all while maintaining their busy schedule. (Or perhaps they just don't know beyond basic HTML.)


The videos of Minsky's lectures on Society of Mind are on YouTube:

https://www.youtube.com/watch?v=-pb3z2w9gDg&list=PLUl4u3cNGP...


Not to complain, but I find it surprising that the web pages for such prominent academics/people are often so ugly. Why not pay some web designer to spruce it up? Or maybe have the department/school use a nice template? Seems worth it to me.


This already is a nice template. Plain text that responds well to zooming by the user; default colors that are easy to read and make links obvious; a few important images that are large enough to see, but don't take up half the screen like some sites' banners. If you prefer white-on-black text and have a user stylesheet for it (which I do), the site responds well to that too, without any nested <div>s or background-image:s complicating things (unlike some). I don't think that HTML5 web fonts or a whitespace-heavy designer template would aid in conveying any of the brief and well-organized information on the page.

See also: http://justinjackson.ca/words.html and http://motherfuckingwebsite.com.

(The only thing I would add to either of those is this bit of CSS: body{max-width:900px;margin:0 auto;} to make it easier to read on wide screens.)


Similarly, Oleg Kiselyov's repository of wonders:

http://web.archive.org/web/20080303222623/http://okmij.org/f...

He added a ~sidebar~ since, now it's all ugly. /s


I think it's great because it exemplifies the spirit of the Web, like back in the Geocities days. He probably wrote the site himself and really has nothing to sell or market. His name, the domain, his work, and his CV are all that's necessary.

It's all about the text.


That's a fair point.

http://justinjackson.ca/words.html is one of my favorite articles.


I was surprised too, but glad he didn't see the need for the latest Bootstrap template. It's just his homepage. I imagine he will be surprised that so many people are looking at it right now.

I hope he keeps it the same. A few years ago I was asked to put up a website exactly like The Drudge Report. I was surprised it was just HTML. I don't even think they bothered with CSS? It must have changed?


Super super common for professor sites, particularly at CSAIL. They value the functional aspect more I suppose.


<BLINK>Agreed!!!</BLINK> Marvin Minsky should spruce up his home page with AIML [1] (Artificial Intelligence Marketing Language).

[1] http://www.art.net/~hopkins/Don/text/aiml.html



