IBM is not doing "cognitive computing" with Watson (rogerschank.com)
481 points by Jerry2 on May 23, 2016 | 173 comments



If you want to figure out what Watson can do and bypass all the marketing hype, you can just try all the services available at http://ibm.com/watsondevelopercloud.

I won't argue that the PR often goes too far, and that's a big debate we have internally (I work for Watson). But it's a pity that most of the negative opinions expressed here come from people who haven't even bothered to try any of the services we put out there or read any of the scientific papers that have been published by our team.


I have tried it -- but for me, at least, it reinforced the negative opinion that others have expressed here. Not because many of the services offered aren't good -- they are -- but because you can't help but note the enormous chasm between what I can actually do with Watson today and what you see in IBM ads.

To me, the resentment is completely understandable. When Watson goes on Jeopardy -- an incredible technical and PR achievement -- the implication to the rest of the world is that ".... and now you can use it too!" The ads, too, convey that idea. But that isn't the case. Instead, I can access APIs for language translation, sentiment analysis, object recognition, etc...

You guys are selling Watson as a new paradigm of AI never before seen by man -- that I can plug my business and my app into this and do things never-before conceived in the history of computing. The service you let me access is basically a bunch of algorithms for which there are countless alternatives -- even in places like Algorithmia. There's a huge disconnect.


You will indeed be able to do Jeopardy too, and hopefully much more (we found that answering factoid questions isn't the most critical function so we haven't prioritized it, but it's coming).

Many of the services that are on the platform can do things that were inconceivable just 5 years ago (like having a general visual classifier handle earth imagery with high enough accuracy to be useful http://www.huffingtonpost.com/entry/ibm-watson-california-wa...). Yes, there are alternatives - which is the sign of a healthy market - but we keep improving ours to stay state-of-the-art and to introduce higher level functions.


>we found that answering factoid questions isn't the most critical function so we haven't prioritized it, but it's coming

This should be priority number one. As others ITT have been saying, your marketing and PR departments have been writing checks that the product can't cash.

When people see that Watson can't do what it does on TV, you lose a sale, and that potential customer is now going to be skeptical of IBM's future marketing claims.

It doesn't matter if "answering factoid questions" doesn't have much real-world value — it's first of all what you're marketing the product as, so it should have been done and omigod amazing yesterday, and second of all the reason why your very smart marketing and PR people are pushing that angle is because it's a demonstration that is immediately and intuitively impressive to technical and non-technical people alike.

Getting a computer to "answer factoid questions" viscerally screams "major breakthrough!" in a way that "general visual classification of earth imagery" simply doesn't. It wows people at the sales meeting instead of boring them.

IBM needs to make sales. You need to build up a userbase toot sweet before Google et al start moving into this sector in a major way, as indeed they already are. There's a limited pool of enterprise customers, they don't change vendors often, and you need to get as many as you can as fast as you can, before Google/Amazon/whoever does, or you're done.


> toot sweet

"Tout de suite" has never before sounded so appetizing!


And I thought toot sweet referred to cocaine[1], which might explain some marketing...

[1] https://books.google.com/books?id=FFMADAAAQBAJ&pg=PA80&lpg=P...



Tu cherches la petite bête ! ("You're splitting hairs!")

It's the common spelling when used in English; I didn't want to seem pretentious. https://en.wiktionary.org/wiki/tout_de_suite#Alternative_for...


I'm beginning to believe that answering factoid questions is still too expensive, from a computational point of view, to put in an API.


I'm beginning to believe IBM is a sinking battleship. Considering the job losses and the - insane - direction set by the board, it's not obvious IBM will still be around five years from now.

Besides, a really good AI should literally be able to sell itself, surely?


Let's not make any grandstand statements - IBM may or may not be a sinking battleship (I'm inclined to think they are), but not being around in 5 years is a pretty specious prediction. They'll still be around, and still be peddling malformed solutions. They may be significantly different in leadership, presentation, and execution, but they'll still be around.


I'd love to see how IBM themselves have used Watson to improve their own business.

This, as a potential Watson customer, would give me a much better idea of Watson's potential and real-world results. Plus it would establish more confidence in IBM.

A lesson from my high school economics teacher that has always stuck with me is that you have to be your own product's biggest believer.


No. That's not the issue. We are actually able to shrink-wrap it to the point that it has a reasonable footprint (a few servers). The problem with factoids is twofold: (1) it's always a small part of a broader use case (how many queries do you do on Google that are answered by the factoid pipeline?), and (2) it has a lot of domain dependency. It's fine to do it like Jeopardy for a general domain, but our customers want domain specificity and that's non-trivial.


I agree. I think this is exactly why it isn't available as an API. They probably had a supercomputer's worth of hardware backing Watson, which they couldn't provide for every single API query. Although I still don't know why they didn't provide it and price it accordingly -- $1 / question or whatever it would require. Or shoot -- price it at cost as a loss leader. Even if they had a person filtering and approving each Jeopardy answer, they could price it to allow that.


That's all great to hear, and I don't mean to belittle the very real achievements behind Watson, but it seems to miss the core complaint here: Watson can't do any of what the advertising shows.

This isn't a matter of PR overhyping things a bit. Watson has (impressive) capabilities, and IBM's ads show Watson doing things, but the two circles essentially don't overlap. IBM is directly marketing Watson to the general public with promises that are entirely unsupported by the actual product. That's not just questionable business advertising, that's a level of inaccuracy that undermines AI work and understanding across the entire field.


>> Watson can't do any of what the advertising shows.

Google, Facebook and MS also can't (try using Google Translate in any language combination other than English and some other Romance or Germanic language). Why is that a problem for IBM specifically, and why will IBM's overpromising bring about the next AI winter, like Schank says?


I'm not sure I've been seeing the same ads you have.

Granted, Facebook Graph Search was horribly oversold, and Cortana is in the same "smart enough to be stupid" bucket as Siri and Alexa and every other vaguely-feminine chatbot on the market. Google Translate is incredibly fragile, and will get me from English to French but not from French to Arabic, or even from French to Quebecois.

But all of that is roughly consistent with their advertising. Obscenely selective demos have a long history in software, and it seems standard (though not good) to assume that you're seeing the best-case uses of whatever product is being advertised. Car ads don't work any differently.

Watson's advertising seems to take things to a whole different level. There's that Bob Dylan ad. They've got a North Face ad where Watson recommends an appropriate jacket for the Pacific Northwest in early spring. And so on. The commercial product Watson can't do those things. They aren't just showing off fragile best-case features, they're making stuff up, or hardcoding specific answers, or offering the North Face a different Watson than what you can get on their developer site.

That bothers me. It bothers me when Google claims Translate can live-interpret foreign street signs, too, but even that's a less wild claim than IBM is making. Promising people a product you simply haven't made isn't kosher, and that's especially true when you're selling AI kool-aid, in a field that's already rife with hysterical (and baseless) warnings and promises.


>> Promising people a product you simply haven't made isn't kosher

Agreed - but I'm saying that's exactly what I get from ads for the products you mention: Siri, Alexa et al.

Case in point: that real-time Skype translation ad that made the rounds a while ago.

In any case, I don't pay that much attention to what is advertised. But I do pay attention to two things: on the one hand, (scientific) papers published by the various teams and on the other hand, the announcements those teams make in the technology press.

I've read most of the Watson papers and quite a bit of the NLP work from Google, and followed the assorted tech press announcements. I have to say that my impression is that Google is brazenly overhyping their stuff; they constantly try to fudge things as much as they can.

Their dependency parser beats previous efforts on the Brown corpus? It's the best natural language parser ever! Their image recognition beats ImageNet? It's showing superhuman ability! And so on. IBM's advertisement is relatively tame in comparison.

All the other major players in this are equally unscrupulous and there's no point in singling anyone out.


This is an interesting take - I hadn't really considered it, but on reflection Google's more academic claims are hyped up in a way that IBM doesn't seem especially guilty of. Some of their image recognition breakthroughs have been truly remarkable, but I agree the claims are a bit out of line.

I think I was probably put on edge by the Watson ads not just because of their scope (the Skype thing is similarly inaccurate) but because they conform to a very specific pattern of AI over-promising that I've grown to hate.

To a lot of nontechnical (or even just non-CS) people, strong AI is synonymous with Samantha from Her. We've been seeing people mistake existing tech for that since the 60s, when people were taking ELIZA seriously. Cue Cleverbot, Siri, and most obnoxiously Eugene Goostman (you don't get to claim you passed the Turing Test by convincing 30% of English-speaking observers that you're a Ukrainian child!)

So I look at the Watson ads and see exactly this pattern at work, but worse than ever. Most people know by now that Siri and Alexa are just voice assistants, and as good as the ads look no one is really claiming otherwise. The Watson ads, by contrast, throw around "cognitive computing" to imply real understanding, and then air scripted exchanges that are simply false, rather than selective. Worse, they're doing mass-market ads for a developer-only product, so viewers won't even try it out and see that it doesn't do what's being shown.

It may not be the worst on an overall-inaccuracy level, but I get twitchy when I hear people respond to discussions of strong AI with "Oh, basically what that Watson thing does?" And I do hear that, somewhat regularly. Crappy product hype is annoying, but I really wish we could stop telling people we've built real intelligence when we haven't.


Have used the google translate app with a camera to translate Japanese menus, museum cards, etc. It wasn't perfect, but considering there are three separate 'alphabets' for it to handle it was pretty amazing.


Produce an independently reviewed citation that you are state of the art, or even close to it, in any of this functionality.


Sure here are two (you may have to follow some links to go to the actual publications):

Speech to text: https://developer.ibm.com/watson/blog/2016/04/28/recent-adva...

Emotion: https://developer.ibm.com/watson/blog/2016/02/29/another-ste...


In my experience, speech to text has been a huge disappointment. I grant you that in the country where I live (Argentina) we speak a different... "dialect" of Spanish than what you've probably used to train your engine. Even then, it's entirely useless for my country, whereas Google has zero problems.


Very fair point. Our STT service, even though it is based on a technology with the very best results on public benchmarks, is not as robust to accents at the moment - for the exact reason you mention, we didn't train it on all the dialects. But that will come with usage in the field. Google has a head start on this, but with the usage of our services growing quickly we will be able to catch up.


For what it's worth, I tried the translation demo[1] on some Wikipedia text in Italian and French, along with Google Translate on the same text, and Watson seemed nearly on par but consistently slightly worse. (I also tried Bing, which seemed more even with Google.) I guess this is to be expected, given the relative lack of importance to IBM, but I was vaguely hoping that Watson could somehow afford to spend more CPU time on it or something (after all, their main business is selling access; Google and Bing also do that but mainly exist as a free tool for consumers) and get a better result. Oh well.


I think this may boil down to dedicated intelligence. Google Translate is an entire project devoted solely to efficient, human-useful text translation, where Watson is a more general-purpose language processor. Certainly Watson must have dedicated translation logic, but I would be surprised if there are any narrow domains where Watson beats strong single-purpose competitors.


We do have a team dedicated to Machine Translation (the same team that actually introduced the statistical approach - the "IBM Models" - in the 90s) but that team has been focused more on non-European languages. If you search the literature you will find that the IBM team is leading for languages like Arabic and Chinese but trailing for European languages. We are working on that.


This is a really interesting response, thank you (and thanks for answering so many questions in this thread).

That result makes a lot of sense - Google has been working with the huge corpus of English-European translations (e.g. UN reports) and getting impressive results largely because there's so much data available with such direct equivalences.

Just based on personal experimentation, they're much less capable with longer "translation distances" like phonogram<-->logogram and Indo-European<-->Sino-Tibetan. I didn't know IBM was leading on those long-distance translations, that's cool to hear.


Coincidentally I spent the last couple of hours playing with Watson's services (I was looking for a decent Speech-to-Text API for a toy project).

Marketing aside, I must say that BlueMix's user interface is the worst. thing. ever. Buggy, extremely slow, fragmented, with cryptic error messages[1]. It took me 45+ minutes to reactivate my account (the old one was kindly deleted due to inactivity) and create the TTS/STT service endpoints.

Funny; after all the investment to develop Watson, you'd think the UI would be the easiest part to get right.

On the positive side, the TTS and STT APIs are simply a pleasure to work with. The documentation is excellent, accuracy is pretty good, and the demos are spot on. Plus you have support for streaming audio through WebSocket for STT (which is a must for my project), and a few voices to choose from for TTS.

[1] http://imgur.com/3s4KUPv


We are very aware of the Bluemix usability issues and are working with that team to address them. Did you try the new Bluemix by any chance?

Thanks for the kind comments on our APIs. We are really trying hard to make them usable.


I tested the new UI yesterday, and initially thought it was equally bad: too slow to be usable, couldn't figure out where things were, etc.

But I just tested again today, and the slowness is gone, so maybe it was an isolated incident. I was able to play with it a bit more, add services, create a few containers, etc. Definitely an improvement versus the classic interface.

I'm still getting used to the logical grouping and how to access your services (I'm coming from years of AWS / Google Cloud), but the ability to go back and forth quickly helps a lot.

Minor nitpick: any reason for API icon to be pink instead of blue, like all others? I keep looking at it as if it were a different state (e.g., "activated").


Can someone point me to description of Watson's sentence realization API? I guess it could be called the semantics-to-text part. Realizers like FUF/SURGE, KPML/Nigel, and RealPro are quite complex to use. SIMPLENLG and OpenCCG are simpler, but the results I saw were not impressive. The demos of Watson are quite good, so I'm really interested to see how it performs on operations more complex than delivering factoids.


Good lord, are the IBM websites bad... It's taken me almost a full day just to successfully download some DB2 or Sametime SDKs: by the time I'd drilled down through the labyrinth of their non-Google-indexed download site, I had to generate an account, wait for that to actually be created, then snake my way back down to the download I wanted in the first place.


I just wanted to say thank you for making these tools easily accessible and mostly free to try.

I work in academic research and learned long ago to ignore any PR related to AI/ML products. The ability to easily play with and test the abilities of Watson services is definitely helpful and enjoyable for us.


I've tried Watson; I didn't find it particularly useful, but I did think that the visualisation and interface was superb.

Most of my playing with it was focussed on the 'Explore' parts, not the 'Predict'/'Assemble'/'Social Media' functions. I was disappointed in the level of documentation for some stuff (you have to look quite hard to find a definition of 'summary' - on my data, it just sums the column, for example).

I was also disappointed that you can't fit distributions or access the distributions that Watson fits, and at the lack of statistics (you can't get at skewness, kurtosis and so on).


I tried their voice recognition. It was actually objectively the worst one of the 5 or 6 I tried (the best was Google by a long way, although I didn't try Bing and I couldn't try Baidu because you need a Chinese phone number).


You should reach out to us or ask questions on the forum. Our experience is that people experiencing very bad accuracy may not have passed the right parameters (e.g., you need to make sure to use the right audio model). We are aware of the usability issue and are trying to address it.

In general, Watson's STT is slightly better than the competition for long audio (like phone conversations), slightly worse for short utterances (like search queries). That's due to a bias in the training set.
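
For illustration only - a rough sketch; the endpoint, credentials, and model identifier below are assumptions from memory of the 2016-era REST API and may have changed - picking the audio model explicitly looks roughly like this:

    import requests

    # Illustrative only: endpoint, credentials, and model name are assumptions
    # and may differ from the current Watson Speech to Text API.
    URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"

    with open("sample.wav", "rb") as audio:
        resp = requests.post(
            URL,
            auth=("username", "password"),          # service credentials
            headers={"Content-Type": "audio/wav"},
            # Pick the model matching the speaker's language and dialect;
            # running Argentine Spanish audio through a US English model
            # would explain very poor accuracy.
            params={"model": "es-ES_BroadbandModel"},
            data=audio,
        )

    print(resp.json())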


Did you try Dragon speech-to-text or similar? I haven't paid for recognition services, but I'd love to know how the dedicated ones compare to Google's offering (which I find usable, but inconvenient).


>> I work for Watson

Hi. Do you happen to do any work with its parts that are written in Prolog?


Meh.

I'm no fan of Watson-the-marketing-term, but this sounds like the bitter remarks of a symbolic AI defender who is so sure that their way of doing AI is the only way that anything else is fraud.

Watson-the-Jeopardy-winner did "cognition" (which he implies means following chains of logical reasoning) as well as any other system that has been built.

See for example "Structured data and inference in DeepQA"[1] or "Fact-based question decomposition in DeepQA"[2].

It's true that the Watson image analysis services don't use this. I'm guessing that is because those techniques don't actually work very well in that domain.

[1] http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F...

[2] http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6177...


Weird, I thought he was outraged because "cognition" was being co-opted by IBM marketing and its actual abilities were being misrepresented.

He wasn't saying "Watson is crap", he was saying "Watson isn't doing cognition, stop saying it does"


Right, but I think he has co-opted the term "cognition" to mean only what he thinks it means. I don't believe that is true, and even if it does then Watson has done that as well as any other system that has been built.


But he talks a lot about analysing data in context (which it seems is shorthand for "prior information that is relevant to the current input") and it's not unfair to say that we're still a long way from having machines that can make the sorts of connections between different texts that the author describes.


It's true that we (as in everyone working on these problems - not just IBM) don't have this working.

Two points, though:

1) This is probably the most active area of research in machine learning/AI at the moment. The same type of progress that was made in image understanding 2 years ago using deep learning techniques is now being seen in NLP. See for example[1].

2) It isn't at all clear that "making connections" of the type he talks about is needed to generate the types of insights that he claims. For example, the work StitchFix is doing combining analogy reasoning on words with other non-textual data[2] indicates to me that the current state of the art absolutely could "understand" that Bob Dylan wrote about themes that were associated with anti-war sentiment.

[1] "Teaching Machines to Read and Comprehend" http://arxiv.org/abs/1506.03340

[2] http://multithreaded.stitchfix.com/blog/2015/03/11/word-is-w..., https://github.com/stitchfix/context2vec,
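
As a rough illustration of that kind of analogy reasoning - a minimal numpy sketch over made-up toy vectors, not how StitchFix or Watson actually do it:

    import numpy as np

    # Made-up 4-d "embeddings"; real word2vec/context2vec vectors are learned
    # from large corpora and have hundreds of dimensions.
    emb = {
        "dylan":   np.array([0.9, 0.1, 0.7, 0.2]),
        "war":     np.array([0.1, 0.9, 0.1, 0.8]),
        "protest": np.array([0.3, 0.8, 0.6, 0.7]),
        "love":    np.array([0.8, 0.2, 0.9, 0.1]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Analogy-style query: which theme vector lies closest to dylan + war?
    query = emb["dylan"] + emb["war"]
    best = max(["protest", "love"], key=lambda w: cosine(query, emb[w]))
    print(best)  # with these toy vectors: "protest"

With real learned embeddings, the same nearest-neighbour arithmetic is what would surface an association like Dylan ~ anti-war.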


I agree that this isn't a particularly special accusation, and that existing algorithms probably could get "anti-war" out of Dylan lyrics (especially with some meta content; most people learn about Dylan from sources stretching beyond his lyrics).

I still think there's a legitimate complaint here, though: IBM isn't doing unusually badly, but they're promising an exceptional amount. Even if Watson is a good product, and symbolic understanding isn't needed for good AI, they're still making claims they can't deliver on.


This article mirrors some huge frustrations I've had in recent years as a long-time lover and pusher of machine learning. I spoke to a Watson booth employee for a few minutes at a machine learning conference a couple years ago, and almost right away had similar feelings. I don't like the term 'fraud' here though, 'insanely oversold' seems more appropriate. I looked more into Watson, and realized it's really just a large number of traditional (and not very innovative) machine learning algorithms wrapped into a platform with a huge marketing budget.

>AI winter is coming soon.

Perhaps; if so, Watson is certainly evidence of this. It frustrates me to no end that machine learning has so much potential but is often lost in a sea of noise and buzzwords (as much as I love deep learning, I'm almost tempted to lump that in here too given its outsized media coverage). Machine learning is in its infancy of impact, but the overselling by mediocre enterprise companies and an ignorant press may shoot its credibility for years to come.


Machine learning is doing fine, please don't worry about that!

My wife was trying to find an old photo tonight and all I had to do was suggest that she go to google photos and search based on description. Same thing with Facebook recognizing images.

I am getting into using deep learning for NLP, and that looks promising. Google's parser is interesting but I want to see how quickly they can get many layer networks to do things like anaphora resolution (match up pronouns with previous noun phrases).


I agree machine learning isn't going anywhere - and yes the big companies are going to solve increasingly complex problems using increasingly complex methods using their data and resources.

The trend I'm seeing is medium-sized companies (think tens of millions to low billions in market cap) not solving simple, highly impactful problems with potentially simple machine learning techniques. A basic regression model that was state of the art 75 years ago can often add millions of dollars in immediate value if applied to the right problems. Instead, these companies often prefer to do what they know best, which is hire people to perform repeatable-but-not-easily-scriptable tasks.
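
For concreteness, here's a minimal sketch of that kind of decades-old technique - ordinary least squares on made-up numbers:

    import numpy as np

    # Made-up data: monthly ad spend ($k) vs. revenue ($k).
    spend   = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    revenue = np.array([120.0, 190.0, 270.0, 330.0, 410.0])

    # Ordinary least squares: fit revenue ~ a * spend + b.
    X = np.column_stack([spend, np.ones_like(spend)])
    (a, b), *_ = np.linalg.lstsq(X, revenue, rcond=None)

    print(f"slope={a:.2f}, intercept={b:.2f}")
    print("predicted revenue at $60k spend:", a * 60 + b)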

The moment machine learning-illiterate executives start getting burned (one CEO to the other: "oh yeah, we spent X dollars on <insert hyped machine learning platform here> and it was useless"), is the moment that the hype can start destroying value.


> The moment machine learning-illiterate executives start getting burned (one CEO to the other: "oh yeah, we spent X dollars on <insert hyped machine learning platform here> and it was useless"), is the moment that the hype can start destroying value.

This is a perfect description of how AI winters come to be. However, I think the upcoming winter will be more like an AI fall. Some technologies, overhyped and oversold, will fall out of favor and people will be skeptical. However, companies like Google, Facebook, Microsoft, and Apple who are using AI internally and can quantify improvements will continue to do so. The hype cycle is definitely heading toward the slowly rising plateau rather than another crash. There is critical mass. Maybe this is what people thought in previous AI hype cycles (we are definitely in one), but in previous cycles we didn't have near-human image recognition essentially solved.


Better parsers are interesting, but I feel like the parsers we have already are 'good enough.'

The biggest hole in NLP is going from a parsed sentence to 'meaning.' Once we get that right, and are able to feed meaning back into the parser, then all our parsing problems will be fixed imo.


Let me suggest that the meaning will require working effectively with concepts. That is, humans, who actually have real intelligence, can read words and identify and work with the underlying concepts, but apparently so far no one knows how to program a computer to do this.


The parsers available are good enough for English. Sadly, that's absolutely not true for other languages.


As a French speaker with a strong accent, I find Siri probably understands only half of what I say, and I have to make a lot of effort.


So you are saying, all we need is true AI to fix our parsing problems? Well then, all is clear and easy now ...

(How else would the computer understand "meaning" without being strong AI?)


Sort of, more like, "our parsing is good enough for our use cases, we'd make more progress if we focused instead on getting closer to true AI"

We don't even need true understanding of meaning, even a lousy approximation would be an improvement at this point (or I should say, a slightly less lousy approximation compared to what we have now).


> a large number of traditional (and not very innovative) machine learning algorithms wrapped into a platform with a huge marketing budget.

Maybe that's the path to intelligence -- a bunch of ML and reasoning algorithms we already know about just working together and fed with a lot more data...

Once you take apart any AI thing (or any complicated thing in computing) you discover it is just a neural network, just SVD, just NLP + an expert system, etc. At least that is how I felt about it when I studied it in college. The veil of mystery was lifted and it was just algorithms + math.


You know, all the AI research DARPA paid for, pre-Winter, paid for itself many times over in planning the logistics for Gulf War 1. But the funding dried up anyway. Perception is important.


What do you mean by this? Do you have any references? I'd love to read more about how DARPA used AI for the gulf war.




I spent a lot of time in the 1980s experimenting with Roger Schank's and Chris Riesbeck's Conceptual Dependency Theory, a theory that is not much thought of anymore but that at the time I thought was a good notation for encoding knowledge, like case-based reasoning.

Having helped a friend use Watson a year ago, I sort of agree with Schank's opinion in this article. IBM Watson is sound technology, but I think that it is hyped in the wrong direction. And overhyped. This seems like a case of business-driven rather than science-driven descriptions of IBM Watson. Kudos to the development team, but perhaps not to the marketers.

Really off topic, but as much as I love the advances in many layer neural networks, I am sorry to not also see tons of resources aimed at what we used to call 'symbolic AI.'


Back in the 80's, there were two AI camps: syntax and semantics. My favorite for syntax was Terry Winograd. For semantics it was Roger Schank. Back then, I was trying to map AI onto biology. Syntax was easier; you could easily map the edges to a McCulloch and Pitts neural model. Semantic nets were harder; the edges were more a way of modeling symbolic relationships. So I couldn't wrap my head around Schank. Wish I had; it felt like I was missing the point.


There is fortunately a good amount of money being spread across different AI techniques still. Not from every funding source, but the traditional ones are still following the usual model of hedging their bets somewhat conservatively. It's mostly industry, and closely industry-aligned nonprofits like OpenAI that are going 100% all-in on nothing but deep networks. But those kinds of groups have always been a bit short-term driven and susceptible to hype; in the '80s they were putting all their money into expert systems.

If you look at which AI projects the National Science Foundation is funding (or in Europe, the Horizon 2020 program), it's a lot more diverse. Even just in machine learning they're not putting all their eggs into the deep neural nets basket, with considerable funding going to the other major areas of ML (e.g. Bayesian methods). Symbolic methods have a decent amount of funding too, including some explicitly "cognitive systems" grants. Some other symbolic techniques are still funded but not as much by "AI" bodies, e.g. the logic-based branch of AI now gets a decent amount of its funding from the software engineering community, because verification is a big application of solver / theorem-prover techniques.

Attending this past year's AAAI in Phoenix was kind of funny in that respect. Maybe 95% of the consultants and recruiters there were solely interested in hiring people to tweak deep networks. But the scientific side of the conference didn't look quite like that.


Do you still think it has merit? I would be curious to know what parts of it you think are worth adopting (or have been) into modern theories.

Personally I'd love to see more research go into forms of symbolic AI that can deal with uncertainty, probability, partiality etc. The Research VP of Cycorp told me he wanted to go in that direction with Cyc, but couldn't find anyone with a promising and rigorous proposal. I think those sorts of considerations often lead either to pure theory with no thought to implementation, or the worst kind of adhoc "fuzzy logic" (scare quotes because fuzzy logic is a proper mathematical discipline).


I almost want to up vote the article on principle.

I try to go to various "machine learning" and "AI" meetups around my city.

The most frustrating, but relevant, lesson I've learnt is to just stay away from everything IBM/Watson.

I can summarise every single bloody presentation they give because it's the following:

"Now, when we say cognitive computing, we don't mean any of that sci-fi and AI stuff, now here's a marketer and marketing materials that will explicitly imply that we're talking about that sci-fi and AI stuff for the next 59 minutes. There will be no technical information."


> we don't mean any of that sci-fi and AI stuff

No one does.

Even with AlphaGo the hype was insane, but was mostly caused by people confusing weak AI with strong AI with Arnold Schwarzenegger. Any material that intentionally plays on this confusion is arguably false advertising and fraudulent.

The truth is, it's still impossible for anyone to talk about strong AI, because we have hardly been able to define what it even is. We just know we have it, and it's still unlike anything anyone is working on that has made it to the public. The people who write the papers absolutely know this. It's a common goal, but we have yet to engineer anything that remotely resembles strong AI.

For the most part, we're either still busy figuring out small but hard problems, or hacking resemblance and avoiding the important problems altogether.


I wouldn't even say that it's a common goal.

If I'm trying to solve problem X and writing a paper about a new, better AI/ML approach to do it - this is most likely not even in the direction of strong AI. In fact, the new and improved solution is quite likely to be more specialized to the problem and thus even further from a general AI.

If I could magically build a strong general AI then it most likely wouldn't be an improved or even a good solution to the problem I'm looking at - it would be a horribly inefficient overkill, something comparable to using a human brain to perform arithmetic calculations - it can do it, but less accurately and more wastefully than a simple calculator.


It takes years of training before humans can demonstrate human level intelligence. Unless the first strong AI is super human it's not going to look like a strong AI for several years.


It takes years of training before humans can be trusted with pants. Really, we should lower our expectations of AIs by three or four orders of magnitude. If they can walk in less than a year of trying, then they win.

http://www.gocomics.com/monty


Link requires login.


It doesn't for me. Here's a direct link to the image if they don't block hotlinking http://assets.amuniversal.com/6c1332e0fdec01335e11005056a954...


Yes, humans are also very hyped organisms. Let's dispel the notion that a defining trait of humans is producing or processing highly complex language patterns.


Years of training? Do untrained 3 year olds not demonstrate human level intelligence?


To what extent does this suggest to you that when we first develop strong AI, we won't know it?


Engineering requires understanding. If we're randomly mixing potions then we may get a result without any understanding, but there's a name for that, and it's called magic.

Strong AI will most certainly appear magical, but any technology with this level of sophistication will require focused, intentional, intelligent effort. The person who accomplishes it will know exactly what they were doing.

This mystical emergence of intelligence is also somewhat metaphysical and unscientific. There are no ghosts in shells, and intelligence doesn't just rise from bare metal. If we were to ask if someone could invent bitcoin and not know it, it'd be a joke. Yet, with AI, there are those who still intuitively argue for some form of emergence. But across the board, these claims are made with little or no understanding of what underlies physical intelligence.


If anyone has evidence of a scientific counterargument, please share. The bean sprout AI myth needs busting.


Excellent point. It seems to me that there is a very good chance that by judging it by our own ab/de/inductive metrics we could miss its blossoming. Your own two cents?


It's a brand, not a technology. IBM will pitch it for everything from help desks to curing cancer.


That's 100% typical IBM. Overpromise, underdeliver, and sell their crap at exorbitant prices.

But it seems they can't make sales on golf courses anymore and are trying to appeal to the technical staff for sales (though there might be one or two managers at these meetups who buy their BS).


No first-hand experience, but everything I've heard seems consistent with what you said. Several engineers I've spoken to at companies using Watson/IBM had very few positive things to say and suggested that any usage was primarily for marketing reasons.


It's the latest product being pitched by a massively efficient sales force deeply ingrained in companies who are no longer purchasing big iron.

Does anyone really think the next Big Idea will come from a bunch of stiff suits in Armonk, New York?

They are great commercializers, consultants and support engineers. They haven't been technical innovators for many decades.


I too was frustrated by a recent IBM presentation at my uni - there was no technical content.

However, I don't think an AI winter is coming, as the author asserts (towards the end). Applied AI research is driven by orgs like DeepMind and OpenAI, as well as academics. This powerful symbiosis of industry and academia will, in my opinion, continue to yield technological breakthroughs in the coming years and decades.


I think we'll continue to see interesting insights and advances, but there's real reason for concern about the public response to AI hype.

We seem to be oscillating wildly between three poles: Hawking's "AI will doom us all", IBM's "strong, friendly AI is right around the corner", and the skeptic's "strong AI is a pipe dream, or centuries away!" All three of those voices interfere with the possibility of sharing a useful, accurate view of what AI is and what it can be.

As bad as science news and public knowledge of science generally are, the situation is much worse on strong AI, and I think it's right to lay some of the blame for that with IBM and similar hype artists.


You must go to very, very dull meetups :( Lots of fun, cool things are happening, and the hype has a good side to it as well in generating interest in the field, but the buzzword and click-bait marketing is out of control, agreed.


Thinking of "Watson" more as a catchall term for machine learning research at IBM is more useful than thinking of it as a unified platform (as the marketers try to sell it). This includes research efforts in speech recognition, NLP, and reinforcement learning as well as fun stuff like the "Watson chef". The underlying technology is almost completely different, but it still falls under the Watson umbrella.

In general, every company (startups and big co. alike) seems to be hyping their "AI" capabilities out the wazoo, but three years ago saying those two letters together was a death sentence. I don't know if this is a good or bad thing (hype is bad, but general interest is useful for the visibility of the field) but it is definitely a sea change compared to the last 5-10 years.

I am extremely skeptical of most claims these days, and am a bit worried about AI Winter 2.0 due to hype around largely mundane technologies. There are exciting things happening in the space, but these things are rarely hyped to the extent the more mundane results with corporate backing are.


For those who are unfamiliar with the author of this article: Roger Schank is one of the early pioneers of AI research:

https://en.wikipedia.org/wiki/Roger_Schank


don't mean for this to come across as snarky but what has he done lately? is his best work from the 1980's?


That may be the problem. He sounds bitter.


It does, unfortunately, have a bit of an "Old man yells at cloud" tone to it. And I don't see any solid evidence for his arguments beyond appeal-to-authority in that essay.


He may be a bit of a curmudgeon - his blog is called Education Outrage, after all: http://educationoutrage.blogspot.com/

But he's done and continues to do a lot of good work. He helped found the Learning Sciences (an offshoot of cognitive science, AI, psychology, etc.) in the late 80s and early 90s. He continued and continues to do work in education, including founding experiential schools and writing books on education: http://www.amazon.com/s/?url=search-alias%3Daps&field-keywor...


Patrick Winston taught my first AI course around 1974. Things have come a long way, but back then I was flabbergasted to see a program perform calculus integration. It seemed to be a task that took a certain amount of insight and problem solving. Professor Winston then proceeded to break down the program for us, and to my surprise it was easy to understand and wasn't very complex.

I'll always remember his comments at that point that AI is mostly simple algorithms working against some database of knowledge.

I'm not sure I would still make that claim today; kernel-based support vector machines aren't all that straightforward, and many of the cutting-edge machine learning and AI programs are far from easy to understand. Still, there is a feeling of disappointment when the curtain is pulled aside and the great Oz is revealed to be nothing that magical.


I'm wondering why, as a company, IBM doesn't seem to be doing well in its core business, yet somehow wants us to believe it is at the forefront of the latest darling new technology in ML research and cognitive computing.

If they are unable to attract talent and innovate in their core business, how are they supposedly pursuing sophisticated AI? And the biggest question is: why?

What other products or innovations have come out of IBM Research? What is their overall reputation, and why should we believe them? Why don't they release Watson to the world, like Microsoft did with their twitter bot?

If I were a recent grad or even mid level in my career and wanted to work on as interesting projects as I could, I wouldn't be going to IBM. My first priority would be access to interesting and varied datasets, such as what can be obtained at Facebook, Google, Amazon, or another such company. A close second would be any of the players in the hardware ML industry such as Nvidia.

I don't understand what's so special about Watson; it all seems like marketing BS to me, from a company in its death throes.


"What other products or innovations have come out of IBM Research? What is their overall reputation, and why should we believe them?"

1986 Nobel Prize in Physics for the Scanning Tunneling Microscope

1987 Nobel Prize in Physics for High-temperature superconductors


...30 years ago. I'd suggest joining a research team that has done something important since before you were born.


87 is 29 years ago.


IBM had been milking the cash cow with their CICS OLTP system for 40 years.

Their blockchain project with Hyperledger is also full of baloney. There are plenty of groups innovating on that front, but IBM is far from it. But they're very good at marketing it.

Watson, Hyperledger, quantum computing... whatever. If it came from IBM, it isn't worth a dime.


> What other products or innovations have come out of IBM Research?

Are you asking just about recent developments in AI and ML, or in general?


IBM spends billions of dollars on fairly fundamental technology R&D; they've built one of their strongest business models on licensing the patents they develop out to others, whether it's silicon improvements, disk density increases, or now, cognitive computing.


What actually-useful patents are you referring to? Genuinely curious.


For the sake of sanity: IBM was the company with the most patents awarded last year. Is it that difficult to admit that they are doing well enough?

If it were a household name like Google or Apple, would that help?


To be fair "most number of patents" doesn't mean they were good. They have an army of people whose sole responsibility is patenting anything they can in the hopes of enforcing that IP later.


>> What other products or innovations have come out of IBM Research?

Also - the IBM Models, as mentioned above. That's a bunch of NLP models that have been very influential and still are.

If you studied NLP now, for instance, you'd hear about them when learning about NER or WSD etc.


>> What other products or innovations have come out of IBM Research?

Deep Blue beating Garry Kasparov?


> IBM doesn't seem to be doing well in its core business,

$80B / year in revenue is not Apple's $200B, but I'm not sure I'd say it is "not doing well" either.


IBM's revenue has been declining for years, but analytics and cloud is their growth segment. The question is when the legacy stuff has become small enough to stop impacting overall results too much.


"analytics and cloud is their growth segment". Have been laying off employees in the segment as well as others.


Funny that he was Chief Learning Officer at Trump University. Doesn't diminish my feelings for him, but interesting.


Well, I suppose if anyone can recognize a fraud when he sees one, it's a former executive officer of Trump "University."


What if Watson analyzed 800 million pages of Dylan critiques and analysis, instead of 800 million pages of lyrics? I bet you could get to the anti-establishment theme. Maybe Watson was just given the wrong set of input data (garbage in, garbage out).


The themes it produced were not inaccurate when it comes to Dylan. The vast majority (and his most beloved works) are not protest songs. Pretty much everything he did after Bringing It All Back Home is not a protest song. Like A Rolling Stone is definitely not a protest song. In fact, most of his work IS about relationships in some form.

I would disregard what the author has to say about Dylan even though it seems to be author's primary example. Dylan wrote a ton of songs and encompassed a couple different personas through his career. He's not one thing.


Would Watson discover this, though? Even if you marked all the lyrics with a year, would it be able to make that sort of inference? I don't think so, I doubt it is able to form anything like a concept of time, or person, or a person changing over time, especially not from an input of song lyrics.


Why not? It is very easy, for example:

"if (songtexts[1980].contains("love") && songtexts[2014].contains("war")) { print "Dylan changed his focus from love songs to protesting against war" }"


It's entertaining that the argument centers around Dylan, given the huge controversy over Dylan 'going electric'.


If you're going to go that far, why not just tell Watson what the theme is directly?

Obviously the whole point is for it to figure it out for itself, not to be told.


I don't think that's going very far. Humans who figure this out also have access to context. You wouldn't know a song is protesting against a war unless you know about the war it's protesting against. 800 million pages might seem overkill but it pales in comparison to the amount of information humans (sub)consciously use to reach these conclusions. Think about the amount of information required to adequately describe the concept of a protest-song to a machine.


"time passes and love fades" makes for a better commercial than the anti-establishment stuff.


What is with the poor grammar and spelling errors?

"Recently they ran an ad featuring Bob Dylan which made laugh, or would have, if had made not me so angry."

"Ask anyone from that era about who Bob Dylan was and no one will tell you his main them was love fades."

"Dog’s don’s but Watson isn't as smart as a dog either."

etc


The spelling errors confuse Watson.


I sure wish some mainstream journalists would look into the whole "Watson" marketing campaign and apply some fact checking to it.

I like AI projects as much as the next HN reader, but compared to efforts by the other players in this space (Google, Apple, Tesla, Amazon, etc) whenever I hear a new marketing push about IBM Watson project my "BS detector" goes into the red zone.

(That said, it would be awesome if I'm wrong and IBM really is making some genuine advances...)


Talking about over-hyped marketing, it's ridiculous to include Tesla with the other companies in your list.


Actually I'm also a bit of a Tesla skeptic, but at least they are shipping actual practical AI algos in their cars and don't walk around intimating they're going to cure cancer. http://www.cio.com/article/2397103/healthcare/can-watson--ib...


Tesla buys that tech from some outside vendor.


Mobileye: http://www.mobileye.com

"It is used by nearly two dozen automakers, including Audi, BMW, General Motors, Ford, and Tesla Motors." -- http://fortune.com/2015/12/17/tesla-mobileye/


Reminds me of a similar article covering the work of Douglas Hofstadter, the author of GEB :

The Man Who Would Teach Machines to Think

http://www.theatlantic.com/magazine/archive/2013/11/the-man-...


Apparently Watson believes Mr Schank has 'mixed' feelings towards it and IBM. Go to http://www.alchemyapi.com/products/demo/alchemylanguage, feed it the article link and see what happens :)


AI winter is coming soon.

AI winter will come if it isn't able to latch onto strong business cases. For the past few years, we've seen a slow uptake of low-grade AI, for example, Siri. As long as those sorts of things continue, the winter won't have a chance to set in.

It's true there is a lot of baseless hype around AI, but there's baseless hype around every new technology (probably around everything that catches people's attention). That said, if someone predicted that Watson is going to die, I would believe them, because it doesn't seem to have gotten much business traction at all.


ML as part of AI drives major industries already. Like web search and internet ads.


I think it's more accurate to say linear solvers drive internet ads. Which could arguably be characterized as machine learning, I guess.


Mostly logistic regression, and sometimes deep learning. Both are part of ML.
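
For a sense of how simple the core of that is, a toy logistic-regression click-prediction sketch (features and data entirely made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Made-up features per ad impression:
    # [relevance score, ad's historical CTR, position on the page]
    X = np.array([
        [0.9, 0.10, 1],
        [0.2, 0.01, 3],
        [0.7, 0.08, 2],
        [0.1, 0.02, 4],
        [0.8, 0.12, 1],
        [0.3, 0.03, 3],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the impression was clicked

    model = LogisticRegression().fit(X, y)

    # Predicted click probability for a new impression.
    print(model.predict_proba([[0.6, 0.05, 2]])[0, 1])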


The ~1990 AI Winter came about because DARPA money dried up after the (Pyrrhic?) failure of the Japanese 5th Generation Project and the end of the Cold War. After those things, not enough people could afford $xx+K LISP Machines anymore ;)

As long as there's a continual source of money there won't be another one - so basically as long as companies like Google and Facebook are profitable, and preferably money to be made elsewhere, things should be good.


No, the AI Winter came because expert systems did not do much.

I did a Masters at Stanford CS in 1985, and met most of the big names of that era. Stanford CS was dominated by the expert systems crowd, headed by Feigenbaum, and the logicians, headed by McCarthy. AI was being taught almost as philosophy. (Exam question: "Does a rock have intention?") You could graduate without ever seeing an expert system run, let alone writing one. Very little actually worked. But the faculty was claiming Strong AI Real Soon Now. From expert systems, which are just rules you write and feed to a simple inference engine. You get out pretty much what you put in. It's just another way to program.

Feigenbaum was running around, testifying before Congress that the US would become an agrarian nation unless Congress funded a big national AI lab headed by him. Really. There were a number of AI startups, all of which failed. There was a fad for buying Symbolics 3600 LISP machines, a single user refrigerator sized box with a custom CPU.

None of this delivered. That's why there was an AI winter.


In 1985 much of the money going into Symbolics came from DARPA funding - direct or indirect. Strategic Defense Initiative, Strategic Computing, etc. Up to then almost all machines were sold into government projects.

That was a peak year for Symbolics.

In 1986 a Symbolics 3620 was about the size of a large tower PC.

http://bitsavers.trailing-edge.com/pdf/symbolics/brochures/3...


AI winter is coming soon

It would appear so, IBM hype aside. From chatbots to image recognition and playing Go, the media is having a field day with the AI theme.

If this hype feeds Investor and Consumer expectations, the next round of AI startups are doomed to underperform.


I agree that there's a lot of AI hype and I suspect that we won't see all that much come out of it.

At the same time, there's a bit of a cop-out that goes on when we privilege our own cognitive processes over those of an AI just because our own minds are, more or less, a black-box.

I think he does a lot of that in this article. At the end of the day "human intuition" is just a filler until we figure out what's really going on.


The most interesting statement is the last one. The author seems to think we are about to enter another AI winter.

Seems odd given AlphaGo and the recent success of deep learning.


Because when you start confusing pattern recognition and neural net training with "intelligence" and "learning" and "consciousness", everybody who doesn't know enough about the technology will get the wrong expectations. It's even worse because even people working on AI have these wrong expectations, so yes, a winter is coming. Besides that, the AI we have today is used all wrong: to spy on people even more and to consolidate central services and the big players.


Yeah, it seems pretty unlikely to me that there is an AI winter coming, given that we now have programs that look at a photograph and say "Woman wearing a hat, sitting on a bar stool and drinking wine" when 5 years ago such capabilities were unfathomable. The kind of capabilities that are currently being demonstrated have wide-reaching applications and will take a decade to filter into the rest of the economy, even if you pessimistically assume that all research from now on reaches a complete standstill.


> given that we now have programs that look at a photograph and say "Woman wearing a hat, sitting on a bar stool and drinking wine"

That may be; but then there's also this: http://arxiv.org/abs/1412.1897


So? Any generalizing learning algorithm necessarily accepts a large number of inputs (including some unintended ones) for each possible output.

Humans are no more robust against this kind of attack than any other system - consider stage magic, confidence scams, NLP, optical illusions, etc.

If you want a truly infallible system, then no, AI will never provide that. What you want is a magic deity.


Well, personally I think you could perform those exact same attacks against the human brain, but in the case of NNs it's much easier to do simply because computer algos allow for exact repeatable experiments.


and also programs that do the reverse: http://arxiv.org/abs/1605.05396


The skyscraper building frenzy always reaches its peak right before a recession. So a quick explanation could be that once a conceptual breakthrough happens ("deep learning") it gets applied better and better, and then suddenly winter. So you'll see new peaks conquered (so from the MNIST records from a few years back now to AlphaGo and everything TensorFlow et al. can do) but then the concept peaks, and back to the blackboard again! (Or wait for Moore's maybe-law for another already sort of okay theoretical concept to become implementable/applicable.)


Well, I would argue that the previous AI efforts basically had zero practical utility - I can't think of any piece of software I was using in the early nineties that contained any "AI code" from the previous 20 years of research (maybe the A* algorithm in some games I played, and maybe some fuzzy search capability in word processor dictionaries).

This wave of AI from 2005 on, however, is affecting every facet of our lives significantly.

(Of course, the early AI research efforts led to IMMENSE amounts of non-AI innovation in programming languages, database design, operating systems, etc., so it may have been one of the most successful "failures" in history.)


I'd say PageRank is pretty much AI, and anything running on distributed map-reduce implementations is some form of crude AI. [Except the word count! There must always be at least one instance of that running somewhere.] (Netflix, Amazon and anyone who was doing precomputed recommendations before 2005.)
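
A minimal sketch of the power-iteration idea behind PageRank, on a tiny made-up link graph:

    import numpy as np

    # Tiny made-up web graph: adjacency[i][j] = 1 if page i links to page j.
    adjacency = np.array([
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 0],
    ], dtype=float)

    # Column-stochastic transition matrix: a "random surfer" follows an
    # outgoing link uniformly at random.
    M = (adjacency / adjacency.sum(axis=1, keepdims=True)).T

    damping, n = 0.85, adjacency.shape[0]
    rank = np.full(n, 1.0 / n)

    # Power iteration: repeatedly apply the random-surfer update.
    for _ in range(100):
        rank = (1 - damping) / n + damping * M @ rank

    print(rank / rank.sum())  # the most-linked-to page gets the highest score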

Also, the ridiculously simple "recency algorithm" used by Mozilla to show things in the awesomebar (the merged search and location bar) probably saved more man-hours than the Netflix recommendations, but I completely agree that it doesn't feel like AI at all. Just as the wheel doesn't seem like technology compared to the loom; and sometimes the inverse is true, as the James Webb Space Telescope doesn't feel like a greater technological achievement than going to the Moon, but it probably is.


>Suppose I told you that I heard a friend was buying a lot of sleeping pills and I was worried. Would Watson say I hear you are thinking about suicide? Would Watson suggest we hurry over and talk to our friend about their problems? Of course not.

Although the author may be right overall, this paragraph certainly assumes a lot, and is probably wrong. Computer systems have been able to make such correlations for some time now.


I loved this article. It's not just the term "AI"; I've seen startups abuse the terms "machine learning" and "big data" to such an extent that it literally makes me cringe when I hear them.

How many times have you seen a TechCrunch article where the writer parrots the buzzwords the founder has thrown at them, such as "x uses machine learning to sync your contacts with the cloud"?


Usually when you call someone a liar you present plausible proof. No proof was presented. Instead all we got was an opinion of Watson's cognitive abilities from a former Trump University employee.


I get his cynicism from an idealist engineer's perspective. It's a problem for anyone whose applicable knowledge meets a marketing/branding agency. Watson was a new tool that could play Jeopardy, and IBM needed a way to sell the heck out of it. Branding Watson as AI is the act of an increasingly desperate corporation.

While true AI is a decade or two off, with each AI winter, an increasing number of human jobs are displaced. This next wave promises to be devastating to human productivity and a boon for machine productivity. The effects are real even if the intelligence isn't. When true AI is birthed, it won't need to be marketed. All that will be left are a few trillionaires, and food lines for the rest of us.

About the future: "Wealth will be based on how many robots you own and control."


His argument is that Watson supposedly doesn't have an opinion on ISIS. While I don't know if that is even true, it seems like a very weak argument. Even if it could only "think" in a very limited domain, it could still be useful.

The author himself mentions that today's 20-somethings have never heard of Bob Dylan, yet uses Watson's alleged ignorance of Dylan to dismiss it. And 20-somethings are thinking entities.

Mostly it sounds like sour grapes, because his 1984 book didn't receive the recognition he thinks it deserves.


>> AI winter is coming soon.

Not really. There's a lot more private funding for AI nowadays and a lot of research is happening in the industry, rather than in academia.

Machine learning is not cognitive computing, as Roger Schank puts it, but it's championed by most of the large tech corps (count them: Google, Microsoft, Facebook, IBM, Baidu). Those folks have the money to keep the spring going for a long, long time, much longer than last time.

Just being indignant is not going to achieve anything. Roger Schank and those of us who think he's more or less right in spirit (if not in tone) have a very simple way to prove his point: make it all work. Show why Good Old-Fashioned AI is better than machine learning for achieving the goals it set itself back in the early days.

But we've not been able to do that. That's the fault of the people, like Roger Schank, who started various parts of the original AI project and failed to take it to completion. Again, and again, and again.

Google, IBM and the rest will do what they need to do to keep the money coming in, and they'll fund a lot of research that way. The rest of us can suck it up, or come up with something that works better. No one's stopping us.


Two points come to mind when reading the article and the comments made here. One, humans have personalities based on dualistic and conflicting emotions. Only humans can love and hate the same individual at the same time. AI is focused on mimicking some behaviors based on stimuli, but behavior is not personality in action. Personality is much more complex; some Psychology 101 dispels all confusion around this. Two, the challenges around language recognition can't be solved by programming unless one solves meaning. Linguists have struggled to define meaning for decades; the best definitions explain that meaning fluctuates culturally, historically, politically, and to some extent by individual.


*People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.*

Am I the only shmuck who thought Siri came from Apple, not Google?


Yeah to me it seems the author is angry and confused. Not sure why the article is upvoted that much. I guess people just liked the clickbaity title.


The author used to be a big name in AI and learning but now sounds like someone complaining about going to and from school uphill both ways in the snow.


I asked Google: "does siri use google".

Top result: "Currently, Siri defaults to the user's preferred Safari search engine, which is set by default to Google but can be changed to Yahoo or Bing. But with the debut of iOS 7, Siri search will default to Bing instead."


Really dislike the IBM meetups; they're full of marketing speak and of people who have no idea what a load of BS the "sponsor"-heavy presentations are. Meetup should ban groups like this.


They are not doing "cognitive computing" no matter how many times they say they are

Maybe not, but "cloud computing" is getting a little pedestrian. New meaningless names used to market things we've been doing all along are a good way to gin up some interest.


Well, this makes me feel a little better that I could never make sense of IBM's Watson advertising. Even after checking out their site, I couldn't figure out what it was for, much less what was under the hood.


I've been impressed with Watson/Bluemix, and I think IBM is on an interesting track. Marketing is not always effective at conveying the particular and stunning work that engineers, such as those at IBM, accomplish.


I couldn't concentrate on what the article was saying because there were so many grammatical errors; I kept catching myself re-reading sentences, replacing phrasing or inserting missing words.


I have always been confused too. Their approach is old-school HMM/GMM and hard bulldozing. It's not state of the art anymore.


What do you expect from somebody who manages a unit called "branded content and global creative"?


I was wondering how long it would take before someone actually called IBM out on this.


Business Area Limited UK is seeking to expand its investments into innovative computer software projects to turn over about 78 million USD in medical devices, computer development and biotechnologies, if IBM is accepting external investment portfolios.


Microsoft SQL Server added various machine learning primitives to their SQL dialect. So not only can you query and summarize past data; now you can select from the future as well. Bayesian, NN, clustering, the same old flower matching demo, it's all in there. If you can jam enough numbers in you can certainly handwave that you're getting insight out. https://msdn.microsoft.com/en-us/library/ms175595.aspx

Of course basic data mining is certainly not where the latest research is, but it covers many cases I see on HN or talked about in big data pitch decks. Regardless, it all seems a lot less fancy when you can get the job done issuing SQL commands that wouldn't confuse anyone who learned SQL in 1978. The whole thing is oversold and now largely commodified, to boot.
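
For what it's worth, the same "flower matching demo" is a handful of lines of scikit-learn too, no SQL Server required. Rough sketch (scikit-learn here rather than the DMX syntax from the docs linked above):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    # The classic iris data: the "flower matching" part of every data-mining demo.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = DecisionTreeClassifier().fit(X_train, y_train)
    print("classifier accuracy:", clf.score(X_test, y_test))

    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])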

If and when this stuff starts to show real results you'll certainly feel it. The first wave of successful connect-the-dot bots will open up so many discoveries that opportunities for human labor will swell. But it's not chess, Jeopardy and a way to mine medical records. That's all obvious corporate bullshit.


A pity? IBM is DIRECTLY responsible for me not doing so any more. I've just given up on it because you (IBM) waste my time and insult my intelligence.

And I sought you guys out and your company pissed in my face!

If every time you invite people to your party you serve nothing but turd sandwiches, don't bitch about how we didn't give you credit for the croquembouche you've got in the fridge out the back...


This comment breaks the HN guidelines. Please (re)-read them and only post comments that are civil and substantive from now on.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html

We detached this comment from https://news.ycombinator.com/item?id=11751662 and marked it off-topic.


But I saw it working in a Bruce Willis movie. I think he was a hitman, lol. Which is exactly my point. Computers will never be any smarter than the EXACT commands we as their 'creators' give them. In essence they will never be anything but an extension, a tool, of ourselves. But to say they can think? Not a chance in hell.


> Computers will never be any smarter than the EXACT commands we as their 'creators' give them.

You never tried debugging, did you? (I.e., computers can do surprising things, even when they do exactly as we tell them.)

Also, look at AlphaGo (or any modern chess engine): these programs play better than their programmers could.


When human beings can be defined by the rules and limitations of a board game like chess, I will agree. That will, of course, never happen.


The entire point of AI is to establish human-like intelligence via the same general mechanisms we use to learn. Our limitations are uniquely human and don't apply here.

"Thinking" in the short-term means problem solving, whether through analysis and understanding. Machines are capable of that now, and do so much faster than do humans.

The parameters of learning will be bound - at least for now - by human input. But not the boundaries of learning.


I agree wholeheartedly with you, except on the definition of learning. Learning implies knowledge, and most of what a computer would see as knowledge is garbage to a human being. As well, what exactly a single individual would consider to be truly usable knowledge would not even be teachable to a computer, as for the most part it cannot be truly explained by said individual beyond the individual's own personal viewpoint. If we start trying to marginalize individualism, I would think we lose what it means to be human.



