
The way IBM talks about it is complete BS. However, this round of AI is definitely better than the last one. Specifically, what's different this time around is that previously, expert-based systems and many machine learning techniques required that you specifically hand-code things like:

1. Parsing the input dataset and turning it into 'features'

2. Hand-coding the logic and rules for many different cases (expert systems)

Now, it has become easier to train a model such as a neural net where you can provide much 'rawer' data; similarly you just provide it a 'goal' in the form of a loss function which it tries to optimize over the dataset.
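
To make that concrete, here's a minimal sketch of that workflow (assuming PyTorch; the data, shapes, and hyperparameters below are made up purely for illustration). You hand the network 'raw' inputs and a loss function, rather than hand-coded features or rules:

    import torch
    import torch.nn as nn

    # Hypothetical 'raw' inputs: 1000 examples of 64 raw measurements each,
    # with integer class labels -- no hand-engineered features, no rules.
    X = torch.randn(1000, 64)
    y = torch.randint(0, 3, (1000,))

    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
    loss_fn = nn.CrossEntropyLoss()              # the 'goal'
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(10):
        opt.zero_grad()
        loss = loss_fn(model(X), y)              # how far we are from the goal
        loss.backward()                          # gradients say how to adjust the weights
        opt.step()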

By 'true' AI, I think most people mean 'how a human learns' - which is actually a very biased thing, since we humans have goals of things like the need to survive, etc. I do believe it would be possible to encode these as goals, although doing that properly and more generically seems a bit further off in the future.




One of the neat parts of expert systems was that you could get surprising and bizarre interactions between rules, leading to entertainingly nonsensical answers.

One of the neat parts of neural networks is that you have no idea what rules it's using, but it still manages to produce answers that are sometimes not entertainingly nonsensical.


The bigger problem is that they produce answers that make perfect sense most of the time, and then every now and then they fail spectacularly on input that is indistinguishable from the inputs that gave correct answers.


I think this also happens in the human brain quite a bit. There are a lot of times when you see something out of the corner of your eye that isn't there, or you duck because you think something is coming towards you, or you wave because you think you recognize someone.

I bet at a lower level, various systems in the mind fail constantly but we have enough redundant error correction to filter it out.


Well, as referenced in my comment above - you could spend the time to figure out how the prediction was made. Specifically, decoding the training process and which data points influenced the prediction - but it would be complicated and take a lot of your time. It might give you more clues as to why something like random noise was classified as a hot dog.
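
As a rough sketch of what that digging might look like in practice (this is only a proxy for influence, and model.embed is a made-up hook into whatever intermediate representation the network computes): find the training examples whose representations sit closest to the input in question.

    import torch

    def closest_training_examples(model, x, train_X, k=5):
        # 'model.embed' is hypothetical -- it stands in for pulling out the
        # network's internal representation of an input.
        with torch.no_grad():
            q = model.embed(x.unsqueeze(0))            # (1, d)
            emb = model.embed(train_X)                 # (n, d)
            dists = torch.cdist(q, emb).squeeze(0)     # distance to every training point
        return torch.topk(dists, k, largest=False).indices   # indices of the k closest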


Ha - the thing to think about here is: pretend you were asked why, say, you labeled a horse picture as a horse.

You might say something like "well, it has hooves, and a mane, and a long snout, and 4 legs, etc.". However, that sort of answer from a computer would probably be labeled as too generic, since it could describe any number of animals.

The issue is that, to you, a horse is some complicated 'average' of all the examples of horses you've seen in your lifetime, which go through a complicated mechanism for you to classify it - specifically, probabilities over the different features your eyes have extracted from seeing them.

Similarly, understanding why a neural network is doing something is very possible; however, it fails to mean much when the answer it can give is a list of, say, hundreds of vectors and weightings that contributed to its prediction.
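
To illustrate why that kind of answer doesn't mean much: a common interpretability trick is a gradient saliency map, sketched below (assuming a PyTorch image classifier called model; the function is illustrative, not any particular library's API). The 'explanation' you get back is just a grid of thousands of numbers.

    import torch

    def saliency(model, image, target_class):
        # image: (3, H, W) tensor; returns one 'importance' number per pixel.
        image = image.clone().requires_grad_(True)
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()
        return image.grad.abs().sum(dim=0)   # an (H, W) grid of weightings, not a reason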


It's funny that we are willing to disqualify an AI as an I simply because it comes to some startlingly nonsensical result, yet history is full of humans doing the same thing (from placing a plastic cup on a hot stovetop through to gas chambers).


Feature design is still the dominant part of machine learning work, though.


Actually, what makes us human is the lack of fixed goals. As McCarthy wrote circa 1958, one of the properties of human level intelligence is that “everything is improvable”. That means innate knowledge that nothing is ever good enough, and an uneasy tension between settling on an answer and the drive to keep going.


> Now, it has become easier to train a model such as a neural net where you can provide much 'rawer' data; similarly you just provide it a 'goal' in the form of a loss function which it tries to optimize over the dataset.

"just".

Where is the mention of all the decisions about network size, node topologies, regularization, optimization methods, etc.? And you still have to do your first step.

Deep learning is way more complex than just choosing a loss function.
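
A rough illustration of how many knobs there are besides the loss (the values below are arbitrary placeholders, not recommendations):

    import torch.nn as nn
    import torch.optim as optim

    # Every line here is a design decision on top of the loss function.
    model = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),       # network size / topology
        nn.Dropout(p=0.5),                   # regularization
        nn.Linear(128, 10),
    )
    optimizer = optim.Adam(                  # optimization method
        model.parameters(),
        lr=1e-3,                             # learning rate
        weight_decay=1e-4,                   # more regularization
    )
    loss_fn = nn.CrossEntropyLoss()          # ...and only then the loss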


And choosing a good loss function is not a small task either.


> since we humans have goals of things like the need to survive, etc.

What makes you think AI wouldn't have similar ideas about wanting to continue its existence?


Current AI doesn't have any ideas about existing. That's something we'd have to add in to future AI, which goes to the point of the article and the exaggerated claims about current AI.

Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.


> Current AI doesn't have any ideas about existing. That's something we'd have to add in to future AI ... isn't something that comes about from crunching large data sets

(note: I'm a roboticist who spends a lot of time thinking about long-term autonomy (how a robot can go for months and years without human intervention), so that gives you a sense of where I'm coming from. Since my original comment was more of a question, I'll expound a little more here.)

I wasn't really asking in the context of current AI, but 'true' AI as the parent was talking about. They drew a distinction between how humans learn and how a "true" AI might learn based on the goals of each, and I question the notion that a "true" AI wouldn't have a notion of survival.

I think you agree with me to some extent, based on your other comment:

> A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do.

I agree. I believe a robot would not only benefit from the idea of "self", but that it's fundamental to its design. We've built algorithms for a long time that operate in a very limited way. Robots break that mold in a number of ways, foremost being that they can move in their world, and even physically change it. Yet we're still insisting on building them the same way we build webapps or trading software.

There needs to be a notion of "inside" and "outside" just to implement things like collision avoidance. I think there will also need to be notions of "group" and "individual", as well as "self", "other", and "community", if we're going to call robots intelligent, as we see these notions in other species we regard as intelligent (whales, dolphins, chimps, etc.).

The question is, how do they get these concepts and what do they look like? I don't know. But I think they're essential, and I think they will be a barrier to AI if we keep treating them as optional rather than as innate parts of an intelligent creature.
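
For what it's worth, even the crudest version of that "inside"/"outside" distinction already shows up as code in collision avoidance. A toy sketch (the geometry and names are invented, purely to show the robot needs some model of its own extent):

    import math

    ROBOT_RADIUS = 0.3  # metres -- the robot's model of its own extent, its 'inside'

    def too_close(robot_xy, obstacle_xy, margin=0.1):
        # Anything within the robot's own radius plus a margin is a collision risk.
        dx = obstacle_xy[0] - robot_xy[0]
        dy = obstacle_xy[1] - robot_xy[1]
        return math.hypot(dx, dy) < ROBOT_RADIUS + margin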


I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology. But I do think it might eventually happen. And I don't think people are going to do a good job of preparing for it proactively. It's going to require a huge fiasco to motivate people to take that idea seriously.

So I agree that one day we may have to be careful how we treat AGI and maintain a system of ethics for the use of artificial, sensate, sapient beings. But that day is hella not today. Yet when it does come, it will probably still sneak up and surprise us. Like most black swans are wont to do.

Right now IBM is using people's confusion of AI with AGI to make sensationalist ads. But hey, most advertising around trending tech is sensationalist.


> I think we're huge technological leaps and bounds--I mean light-years--away from AGI so advanced it requires some form of computer psychology.

I think you're right about this. I think the current wave of anxiety about the capabilities of AI is fueled by non-practitioners who have no real idea how hard it's going to be.


What makes you think survival instinct has anything to do with intelligence or awareness?

If anything, I’d guess that the most primitive life forms on this planet have the strongest proclivities for survival.


I think it has to do with my conflation of intelligence and life. Life is another one of those things we have trouble defining, but I think you've hinted that one of the key characteristics of life is a desire to extend it.

Once we have a thing that seems to value its own self, then we can start talking about motivations for doing so, and I think that's one of the things that can elevate life to intelligence.


I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:

> Caring about one's existence (or even having the notion of a self) isn't something that comes about from crunching large data sets. ML isn't going to result in existential feelings or consciousness.

My opinion is that 'consciousness' by the human definition will be a spontaneous and emergent property once computers become complex enough.

(and also: No, IBM is not there yet, lol!)


> I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:

Consciousness is part of having an experience. Observations don't require consciousness, since they can be performed by measuring equipment, although one can get deep in the woods on whether an observation needs a mind to make it meaningful, and thus an observation, not simply a physical interaction.

> I am curious what led you here:

Being a self isn't about crunching data, it's about having a body and needing to be able to distinguish your body from the environment for survival and reproductive purposes.

An algorithm has no body and thus no reason to be conscious or self-aware. That it's even an algorithm and not just electricity is an interpretive act on our part (related to the deep semantic debate over what counts as an observation).

A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do. Saying it will emerge once the robot is sophisticated enough is the same as saying nobody knows how to make a machine conscious.



