Sebastian Thrun Has Left His Role as Google VP and Fellow (techcrunch.com)
115 points by BIackSwan on Sept 23, 2014 | 60 comments



Thrun's role as Google VP has been largely nominal for several years now.

Those close to Thrun have made no secret of the fact that he's focused on Udacity, and thus, whether he would like to work at Google or not, he can only do so much there.


Ironically, if he has left to concentrate on Udacity, my own belief is that he is a poor teacher. Of course it depends on which pedagogical teat you are sucking from, but my experience of the Udacity courses Thrun has delivered is poor. Many of his video segments seem to involve him slavishly following his 'script', occasionally hissing out encouraging phrases such as 'Isn't this great!' He hasn't sequenced a series of logical teaching moments in which the concepts flow nicely; it's as if he's decided his first draft is good enough. Just my opinion, of course.


Udacity has done what any good start-up should: launched a solid MVP and iterated on it, delivering free, in-demand courses in a short amount of time. No, the materials and scripts have not always been perfect, but they are still more valuable than many college courses I've taken. If he is in fact leaving to focus his energy on Udacity, isn't that a positive move with regard to your critiques?


Yes, you make a good point. My opinion - and that's all that it is - is that his (many) abilities lie in the Google [x] direction, not as a poster boy for MOOCs.


I've recorded myself and many others teaching: there are only terrible teachers, and people who survive the humiliation of watching themselves try more than twenty times. I would never let anyone watch their first five takes.


But who are we to say where he should apply his abilities? He may be interested in something new and that is where his next best contributions lie.


Isaac Newton famously had to give three lectures to students at Cambridge in order to maintain his status as a professor. His first was so dense and impenetrable that the final two were given to utterly empty rooms. Apparently that did not alter his teaching style at all.

One does not have to be a good teacher to have something worth learning from.

That said, I did one of his first courses and found it easy to follow, and better in the bite-sized format than traditional lectures.


Weird. My opinion is that he is a fantastic teacher. I did his Stanford AI course online and it was by far the best learning experience I have had.


One doesn't have to be a good teacher to make a good, free, online university. But one does need at least some content to get started ;)


Is Google X anything more than a PR-heavy version of Xerox PARC, or of any other R&D lab that tech companies have had?


My former boss (CTO of Panasonic) once told me that the only way to do real R&D is to have a monopoly whose margins let you hide the cost of cutting-edge R&D. Everyone else was doing just product development.

Google X is amazing in that it is about the only one left doing true blue-sky R&D. Everyone else is just doing product development.


> Google X is amazing in that it is about the only one left doing true blue-sky R&D. Everyone else is just doing product development.

That is quite wrong, really. There are many industrial research labs pursuing 'blue-sky' research in pretty much every sector, including computer science, EE, chemistry, and pharmacology. Just to name a few: Mitsubishi Electric Research Labs (MERL), IBM Almaden/New York/Zurich, Microsoft Research, traditional pharma companies, traditional chemicals/materials companies, and GE Global Research all have 'blue-sky' agendas to varying extents.


IBM absolutely does not pursue bluesky research anymore. (I was part of IBM Watson Research)

Basically all research there is tied to some sanely foreseeable bottom-line item.


Just because there are research labs, it doesn't mean that they are doing truly free blue-sky research. Many of the labs you mention have a very antiquated IPR focus, especially Mitsubishi and IBM. And as an example, Almaden still feels closer to the lair of a James Bond villain in the '60s than to a free-spirited R&D lab.


Re: "And as an example, Almaden still feels closer to the lair of a a James Bond enemy in the 60s than a free spirited R&D lab."

It does look that way. You drive in the front gate, and go along a winding road through pastures for about half a mile before reaching the big glass buildings on a hilltop overlooking a huge park. The view from the cafeteria is spectacular. That's what people thought research centers should be like in the 1960s.

Today, it's a shadow of what it once was. I happened to be there the day IBM exited the disk drive business. Major layoffs followed. It's sad.

IBM employees have won five Nobel Prizes. The last one was in 1987.


If IBM's Watson does not count as "free blue-sky" AI research, I don't know what else counts.

http://youtu.be/P18EdAKuC1U?t=2m20s

https://www.google.com/search?client=safari&rls=en&q=an+acco...

Funnily, Google autocompletes my phrase "an account of the principalities of wallachia and moldavia inspired this author's famous novel" but is completely clueless about the answer.

As the other poster said, Google seems to be winning the advertising game (after all, that is how they make their money).


IBM's Watson is about the level of research that Google or Microsoft do in their regular research labs - i.e., it definitely has a longer-term scope, and it's reasonable to expect that investments in this area will pay off in the next 10-15 years rather than the 2-3 years of normal product development.

It is not blue-sky research, though: it's pretty clear what the value proposition is, who the (potential) customers are, etc. IBM just put the publicity upfront with their "winning Jeopardy!" grand challenge to put themselves on the map in a space that is currently dominated by small boutique companies and a very limited number of bigger players such as Thomson Reuters and (partly) Nuance.

Which means that IBM will have an easier time selling to existing enterprise customers who want to get in on this "big data" thing using text analytics (and, frankly, it's also good advertising for their existing data analytics business).


"It is not blue-sky research,"

As someone who works in AI/machine learning for a living, let me tell you: Watson is the most extreme blue-sky research out there. It happens to have a customer base that intersects with IBM's current base. Why should that be a diminishing factor?

Google's blue-sky research is also ultimately for customers. IBM is simply good at selling its research as products. No company does anything for truly altruistic reasons; if they did, Google would be spending billions on particle accelerators.

http://www.research.ibm.com/haifa/info/news_ibm_help_cern.ht...


> Watson is the most extreme blue-sky research out there

Do you mean it is the most blue-sky research being done in the field of AI by large companies? Even if it were the closest thing to blue-sky research being done in machine learning by large tech companies, that doesn't automatically make it blue-sky research.

> It happens to have a customer base that intersects with IBM's current base

The definition of blue-sky research is "research without any clear path to a product". It isn't diminishing the importance of research to say that it is blue-sky (in fact, many people would argue the opposite). Watson was already being touted for an actual product launch by the time it was on Jeopardy.


What would an example be from, say, Google?


I don't know that Google has much blue-sky research itself. I guess maybe something that Kurzweil is up to; have they released anything about him?

I guess probably anything Hinton was hired for, which is more fundamental neural-network research; maybe the cat-recognizing neural network they trained on YouTube videos?


If nothing else, at least IBM's radio-astronomy effort should count as 'blue-sky' research, literally and figuratively speaking:

https://www-03.ibm.com/press/us/en/pressrelease/37361.wss


Also, just because there are no research labs doesn't mean employees aren't allowed to do some blue-sky research.


Microsoft killed their Microsoft Research Silicon Valley lab last week. It's not at all clear at the moment what the future of blue-sky research is at Microsoft.


> only way to do real R&D is to have a monopoly whose margins let you hide the cost of cutting-edge R&D

This idea is from Joseph Schumpeter, the same guy who coined the phrase "creative destruction."


What about Microsoft Research?



According to your link, there were 75 people at that lab. Microsoft Research has over 800 people:

http://research.microsoft.com/apps/catalog/default.aspx?t=pe...


That headline is confusingly worded, but Microsoft only laid off 50-75 people in one MSR location. That was far from the entire division: http://research.microsoft.com/en-us/labs/default.aspx


That's not all. There are other parts being shut down too - Robotics, apparently: https://twitter.com/AshleyFen/status/513392391467048960


Still, there were probably proportionally fewer layoffs in MSR than in the rest of the company during these cuts. The insinuation was that Microsoft is shuttering Research entirely, which is very far from true.


I think the insinuation is that Microsoft Research is beholden to business drivers, and therefore not truly doing 'free, blue-sky' research.

That doesn't seem unfair, given the layoffs.


I read the comment "what about them?", along with the poorly headlined link, as insinuating that Microsoft Research was being shut down. If someone thought that Silicon Valley was their main lab, or didn't know either way, that headline could easily give the mistaken impression that Microsoft was canning the whole endeavor. My apologies to rasz_pl if I mistook his/her meaning.

For what it's worth, MSR historically hasn't been required to find commercial applications for its work. If anything, it's usually disappointing how much really cool stuff they come up with that never gets productized in an accessible way.


Well, that's the second time, then! I was working with the Robotics team when they got seriously downsized a few years ago.

(This might be a different robotics team, who knows!)


> Everyone else is just doing product development.

I believe IBM still does quite a bit of fundamental research as well.


I heard an old-timer describe it well ... true research needs to be altruistic. Today, I think this can only be done through government sponsorship. In places like Canada, my impression is that even the stewards who dole out government funding have forgotten what true research means: they dole out money for commercializing innovation, which by definition is not research. From what I've heard from other scientists, the US is doing a better job; they are focusing their research dollars on 'promising' areas. This isn't as blue-sky, but at least it is research.


That would be the point of government investment in research, though - to drive commercial advantage for the country. Everyone in government has an agenda; there's no free money. That leaves insanely wealthy companies or individuals with an altruistic bent to drive free research.


Oracle's Labs division does a mix of fundamental research and very long-term product development. One way to tell how much fundamental research a company is doing is to see how much partnering they do with universities and DARPA, although not all companies advertise much about these partnerships.


I recently had to sit through a presentation by an X-er that included a very smug Google X video splicing in an annoying amount of Apollo- and Shuttle-era space exploration footage to pump up what is essentially floating cell towers (Loon). I found it quite disrespectful to the Apollo engineers, and grasping on Google PR's part, to try to equate what they're doing with landing on the moon. It went quite beyond the bounds of what is reasonable with a straight face, for my tastes.


Google Brain apparently came out of there, so I'd say definitely not. The other projects have always been longer term, and it seems silly to ask why self-driving cars aren't done yet when it was never expected for them to be done yet.


You say that like it is a bad thing.


Well, at least three of its products are being used in the real world (Glass, Loon, self-driving cars), and at least one is actually being sold (Glass), so yes.


So you are saying it shares PARC's affinity for inventing things that were both difficult to manufacture and not economically viable? (Only half joking there)

I think it interesting that Sebastian left; he's a smart guy, and I think he understood what he could do wielding Google X as a tool. That he left means he figured he could get more of what he wanted done outside the confines of Google X than within it. That makes for a really interesting thought exercise. And it brings us back to PARC, which discovered that the people who left PARC had a bigger impact outside the lab than in it. The reasons for that were many, but mostly they came down to Xerox not actually being a 'research' company so much as a 'copier' company. What does that say about Google X?


I think you typed "yes" where you meant "no." Tech company R&D gave us things like the transistor and lasers (both from AT&T) and Mandelbrot fractals and hard drives and the entire concept of a relational database (all from IBM).

PARC specifically gave us Ethernet, which I think is safe to describe as "being used in the real world."

Google has a long way to go; so far, all of its real contributions have been in software, and its hardware R&D has seen very little "real world" deployment. Otherwise it's very much just a well-marketed implementation of traditional corporate R&D.


And fibre optics from Martlesham, and before that Colossus from Dollis Hill.


Glass is a botched PR stunt, not world-changing research. So this underscores the GP's point.


Sergey Brin, in my view, represents part of the problem here: too powerful to be told that he's not delivering much.



Probably not, but isn't that enough these days, when so few companies do any sort of long-term blue-sky R&D?


Do you mean PARC circa 1970s?


I wonder what he'll do next. Udacity isn't a research problem; it's a marketing and quality control problem. "Massively Online" courses have turned out to be something of a dud, anyway.


The people I've seen calling MOOCs a dud seem to have expected completion rates somewhere near those of traditional courses, which is laughable when you consider the differences in structure, demographics, and buy-in. Even with the high dropout rates, I think they end up teaching more students than the comparable class session at a university.
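
For scale: Thrun's original Stanford AI course reportedly drew around 160,000 sign-ups, so even a 5% completion rate there would mean roughly 8,000 finishers - an order of magnitude more than even the largest physical lecture hall holds.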

I took a grad class to completion on Coursera, and I think it was great - I learned quite a lot, it was free, and the quality of instruction was excellent, rivaling that of my non-lab university courses.

That said, how to effectively teach an online course is very much an unsolved problem. It's not nearly as cut and dried as you suggest. As of now, they've mostly just moved the lectures to online video and put up some quizzes and forums. It's promising, but it's still a naive implementation.


Is how to effectively teach an online course the problem to focus on?

It's interesting to note that if MOOCs turn out to be a dud, yet people are learning effectively from MOOCs, then it's evidence that people pay for a college degree, not for education. In other words, prestige seems to be the issue, not education.


Well, I'd say that if lots of people end up getting good educations from MOOCs, then that's a success, commercial results be damned (as long as it's at least sustainable).

If people could get a provably good education from a MOOC (to the same extent that a traditional degree+GPA proves that), then it might say something about the prestige question. It needs to become a better signal than it is, though.


> yet people are learning effectively from MOOCs

Statistically speaking, they aren't; that's the problem. Completion rates with Coursera are ~5%. Some argue that many people don't _want_ to finish the courses, but that treats the product as something fixed. For any other kind of business, is "people sign up but then don't value it enough to continue" an excuse for poor retention?


Retention is not a goal. For example, if you are just trying to make money, then you don't really care if all your content is consumed as long as people still pay their subscriptions or whatever.

Of course, the reason completion rates are so bad for MOOCs is free access. If you force people to filter themselves out unless they are willing to pay a lot, you will see more "normal" completion rates, expressed as a percentage of those who paid.

It would be perverse to argue, however, that MOOCs should be pay-only for this reason. It's a confusion about the meaning of completion rate as a performance metric for MOOCs.


There are plenty of free in-person classes in the world (I'm sure you can sign up for a dance class, for example), and an in-person class that had a 5% completion rate would be a disaster.

I'm not sure what you're arguing. You think people are learning well from MOOCs in their current form? You might note Thrun's own thoughts on the subject: http://www.fastcompany.com/3021473/udacity-sebastian-thrun-u...

"We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product,"


> You think people are learning well from MOOCs in their current form?

Yes. I am, for example.

Online classes aren't physical classes. Physical classes don't have 5% retention rates because people have to show up for them, so they're more likely to go next time. Being in a group makes people want to continue being in it. But with online classes, there's no such hook, and so people skip out.

Also, it seems mistaken for retention stats to include everyone who originally signed up. They should measure the completion rate for people who made it halfway through the course. I'd bet they'd see a much higher rate, because if someone hasn't left by the halfway point, they're unlikely to leave just because they suddenly feel like it. And if that's true, it's evidence that people in that situation take MOOCs as seriously as their normal classes.
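
To make that concrete, here's a minimal sketch in Python of the metric I mean, using made-up progress records (the numbers are invented purely for illustration):

    # Illustrative only: each record is a student's fraction of the course
    # completed, from 0.0 (signed up, never started) to 1.0 (finished).
    students = [
        {"id": 1, "progress": 0.0},
        {"id": 2, "progress": 0.1},
        {"id": 3, "progress": 0.6},
        {"id": 4, "progress": 1.0},
        {"id": 5, "progress": 1.0},
    ]

    def completion_rate(records, min_progress=0.0):
        # Fraction who finished, among those who got at least min_progress in.
        cohort = [s for s in records if s["progress"] >= min_progress]
        if not cohort:
            return 0.0
        return sum(s["progress"] >= 1.0 for s in cohort) / len(cohort)

    print(completion_rate(students))                    # naive: 2/5 = 0.4
    print(completion_rate(students, min_progress=0.5))  # halfway cohort: 2/3

With real data I'd expect the gap to be even starker, since presumably most of the drop-off happens in the first week or two.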


Yeah, for the one I completed, I was completely locked in by about halfway through, due to the time I'd already invested.


They're at 5% because people like me sign up for every course that sounds vaguely interesting, well before it starts, and then when it comes time to actually do the work, I almost always drop it because I'm preoccupied. I treat signing up like setting an alert to let me know when the course is going to start, at which point I make the actual decision.

There's also a more structural problem with synchronous learning: if a real-world problem like my job conflicts with a self-imposed homework deadline, the deadline isn't going to win. But that seems solvable by offering async options for those it fits better.



