
Anyway, none of the companies that pretend to be doing AI are actually doing AI. "AI" nowadays is pure branding bullshit.



I heard someone say that A.I. is just what we call technology that doesn't work yet. Once it works, we give it a specific name, like "natural speech recognition".


However, if a robot from scifi were to walk out of the lab, like Data or Ava from Ex Machina, or we had access to HAL or Samantha from Her, we wouldn't just give it a specific technical name. We would consider those to be genuine AIs, in that they exhibit human-level cognitive abilities in the generalized sense.

It's true that in Her, Samantha was just an OS at the start, kind of like how the holographic doctor was just a hologram at the beginning of Voyager, but as both stories progress, it becomes clear they are more than that. By the end of Her, Samantha and the other OSes have clearly surpassed human intelligence.

Those are fictional examples, but they illustrate what we would consider to be genuine artificial intelligence and not just NLP or ML. The reason people always downplay current AI is because it's always limited and narrow, and not on the level of human general intelligence, like fictional AIs are.


I like that definition too (I know it from Seth Godin). It's honest, in the sense that we just don't know yet how to do that stuff, instead of labeling every single line of code as AI.


I think a reason for this is that in the early days of computing and AI research, strong AI / artificial general intelligence (AI possessing equivalent cognitive abilities to humans) was considered both to be within reach, and the most obvious solution to many problem domains. We now realise that things such as computer vision and natural language translation can be approximated with solutions falling far short of strong AI.


Personally I define AI as software that you "train" rather than "program". In the sense that neural nets and other ML tools function as black boxes rather than explicit logic.

By that definition, AI is a real thing—it's built on top of programming that uses compilers and languages and ones and zeros—but it's different and it's valuable.

To say it's all bullshit, I feel, is to cut yourself off from new skills. Kind of like "compilers are all bullshit—it's opcodes at the bottom anyway."
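
To make the distinction concrete, roughly (using scikit-learn purely as an illustration, nothing specific to any product):

  # Programmed: the logic is explicit and human-readable.
  def is_spam_programmed(subject):
      return "free money" in subject.lower()

  # Trained: the logic lives in learned weights, a black box.
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  subjects = ["free money now", "meeting at 3pm",
              "claim your free money", "lunch tomorrow?"]
  labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

  model = make_pipeline(CountVectorizer(), LogisticRegression())
  model.fit(subjects, labels)  # "train" rather than "program"
  print(model.predict(["free money inside"]))  # -> [1]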


AI carries a set of connotations in popular imagination that a) don't comport with the actual capabilities of what we term 'AI' in the computer science world, and b) are being exploited by marketing teams at IBM and plenty of other companies to sell technologies that aren't particularly new or interesting. The kernel of truth in 'AI is bullshit' is really that the discourse around AI is bullshit, which I think is a pretty fair assessment, and this is coming from someone whose work gets labeled as AI on a regular basis.


>I define AI as software that you "train" rather than "program".

I like this definition. It covers things that are AI but not ML, like DSS / rules engines. I've built two fairly sophisticated DSSes before but haven't messed with ML much. It seems interesting, but I haven't had the time.

https://en.wikipedia.org/wiki/Decision_support_system
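
For anyone who hasn't seen one: at its core a rules engine is just explicit condition/action pairs evaluated against facts, with no training involved. A toy sketch (the rules and thresholds here are made up):

  # Toy rules engine: ordered (condition, recommendation) pairs.
  RULES = [
      (lambda f: f["credit_score"] < 580, "decline"),
      (lambda f: f["debt_to_income"] > 0.45, "manual review"),
      (lambda f: f["credit_score"] >= 740, "approve"),
  ]

  def decide(facts, default="manual review"):
      # First matching rule wins; every decision is auditable.
      for condition, recommendation in RULES:
          if condition(facts):
              return recommendation
      return default

  print(decide({"credit_score": 760, "debt_to_income": 0.20}))  # approve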

ELIZA was the first AI program I came into contact with, on a Commodore. It was written in the 1960s.

https://en.wikipedia.org/wiki/ELIZA

AI is a very broad subject, and ML is just one particular (promising) technique for achieving it.


Not sure what you mean by that. Is there some industry standard around the term "Artificial Intelligence"? I agree that it's become a bit of a buzzword, but I'm not sure that it's being misused.


When I hear AI I usually imagine deep learning, but many companies using the term don't specify.


But that's more "machine learning" which always seemed less "sexy" than AI -- basically just regression but better, not magical like AI.
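
Roughly, a lot of what gets branded as AI boils down to something like this (the numbers are made up, and scikit-learn is just my example):

  import numpy as np
  from sklearn.linear_model import LinearRegression

  # "AI-powered revenue forecasting", quite often:
  months = np.array([[1], [2], [3], [4], [5]])
  revenue = np.array([10.0, 11.9, 14.1, 16.0, 18.2])

  model = LinearRegression().fit(months, revenue)
  print(model.predict([[6]]))  # the fitted line, extrapolated one step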


I will say that when my company says "AI", they almost always just mean LSTM-based content generators - usually chatbots or "advisory"-style outputs - but the key idea is that it's generative and not just an evaluator.

I think that's probably the most helpful definition because your ML output has to go into some larger intelligence system (human or otherwise) to produce some decision / activity. So your choices are:

* Human

* Expert system with rules of interpretation that include ML output as input

* AI system which relies solely on inference and reinforcement / goal-seeking to produce output
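
Roughly, the generator/evaluator contrast looks like this (sketching with Keras, which is just my assumption - no framework was specified): both share an LSTM backbone, but the evaluator emits a single score while the generator emits a next-token distribution you can sample in a loop.

  from tensorflow import keras
  from tensorflow.keras import layers

  VOCAB = 5000  # vocabulary size, made up for the sketch

  # Evaluator: sequence in, single judgment out (e.g. a sentiment score).
  evaluator = keras.Sequential([
      layers.Embedding(VOCAB, 64),
      layers.LSTM(128),
      layers.Dense(1, activation="sigmoid"),
  ])

  # Generator: sequence in, distribution over the next token out.
  # Sample from it repeatedly and you get chatbot-style text generation.
  generator = keras.Sequential([
      layers.Embedding(VOCAB, 64),
      layers.LSTM(128),
      layers.Dense(VOCAB, activation="softmax"),
  ])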



