I wish HN discouraged clickbait titles, even when the article itself uses that title. One
solution is to append the answer to the original title, separated by a "|", as used by:
https://www.reddit.com/r/savedyouaclick/
For example:
There is a blind spot in AI research | Autonomous systems are already ubiquitous, but there are no agreed methods to assess their effects
While I never post a comment on an article before reading it, I almost always read the comments before reading the article to avoid clickbait. A brief one-line summary that serves the same role as an abstract would eliminate the need to read the comments first.
It might even help eliminate clickbaity titles in technical articles altogether.
BuzzFeed/Upworthy had such an impact on the world of journalism that, after an eon of only ever using the original headline, Techmeme was forced to start rewriting headlines.
This is vastly overblown. People already vastly distrust algorithms. Psychologists have studied it and call it "algorithm aversion": when given a choice between a computer and a human, even when the computer makes much better predictions, people distrust the computer.
In almost every domain where there is data and a simple prediction task, even really crude statistical methods outperform "experts". This has been known for decades. Yet in almost every domain, algorithms are resisted, because people distrust them, or fear losing their jobs, or both.
But humans are vastly more biased. Unattractive people get sentences twice as long. People heavily discriminate based on political affiliation, not to mention race or gender. Judges hand down far harsher sentences when they are hungry. Interviews correlate negatively with job performance.
Humans are The Worst. Anywhere they can be replaced with an algorithm, they should be.
Why do humans distrust "algorithms"? Maybe they had past experiences where algorithms behaved worse than humans?
A recent example: Facebook replaced human-curated trending news with machine curation, and it started trending fake news [2].
Another example is the algorithms that try to help you during automated phone calls; people always try to get to a human because the speech-to-concept parsing/mapping is flawed, or because the systems aren't programmed to perform some specific tasks.
Another example is self-driving cars. Google cars have been involved in more accidents per mile than average humans[1].
In general, people's intuition is built through repeated encounters; that's why it is so great.
> Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.
Which completely negates the point you are trying to make.
In fairness though, it sounds like the point you and others are often making is this: humans are now considered dumb, biased and unreliable, so we need to invest in some kind of external policing system (AI) to run our world for us and make sure we're doing it right. Basically, establish reliance on something external to ourselves?
This is sad, because it sounds like we're losing faith in ourselves to evolve for the better and are hoping the machines can do a better job at self-improvement?
I'm genuinely curious about your point of view; sometimes I'm confused by the enthusiasm people have about this aspect of AI. Is it a form of distrust and dislike of society that makes us want to put faith in robots? A kind of adult angst?
I worry because we could be barking up the wrong tree if this is the case.
Humans are just not good at certain kinds of tasks. We can add numbers, but nowhere near as fast as a computer can. Similarly, we can see patterns in data, but not with the precision of a statistical model whose parameters have been optimally tuned with gradient descent and Bayesian inference. Humans will never be as good as statistical algorithms at certain tasks, and that's OK.
I see fear about algorithms everywhere. Previous articles insist that algorithms could be unfair or racist, and this article suggests things along those lines as well. The EU recently banned perhaps the majority of applications of machine learning anywhere they might be used to rank individuals. This fear is hugely setting back society and technological progress, and almost every one of these places will have to revert to human judgement, which by every measure is far worse and far less fair.
The algorithms themselves may not 'choose' to discriminate, but they certainly can be used in a way that causes discrimination due to an oversight on the part of the algorithm's designer, even if not intended.
See e.g.:
Fairness as a Program Property, Aws Albarghouthi et al., FATML 2016
http://pages.cs.wisc.edu/~aws/ (Note: I can't find the paper link, maybe the conference hasn't occurred yet, but Aws gave a pre-talk on this topic already.)
Part of the problem with algorithms is that they allow us to be sloppy in our assignment of responsibility. We think "the computer can't be biased", which is of course true, but ignore the fact that the human designer of an algorithm could have made a mistake. And because of the nature of computer programs, these mistakes can be arbitrarily subtle. The above paper (I'm recalling from the talk now) applies probabilistic reasoning to prove that certain kinds of programs are "fair" for a given population distribution and for a very limited set of language features (e.g. no loops). But static analysis is a very hard problem, and it is unlikely we'll ever see a solution that generalizes well to anything we'd recognize as a useful programming language.
Edit (finishing my line of thought): So certainly bias exists in either case. I'm not trying to claim that using algorithms increases bias. However, algorithms can cause the decision process to be opaque, and in that sense 'hide' the bias. Unfortunately, it seems that if we want to use algorithms in these settings, we'll need either rigorous models like the above that are amenable to static analysis, or else give up and return to where we were before.
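To make the kind of property involved a bit more concrete, here is a rough sketch (my own toy example, not from the paper; the group names, features, thresholds and population numbers are all made up). It samples applicants from an assumed population distribution and checks whether a seemingly neutral decision rule selects the two groups at similar rates, i.e. a crude dynamic approximation of the demographic-parity style property that such work tries to verify statically:

    import random

    def decide(applicant):
        # A seemingly neutral rule: approve if income plus a "neighborhood
        # score" clears a threshold. No group membership is used explicitly;
        # the bias enters through the designer's choice of features.
        return applicant["income"] + 2 * applicant["neighborhood_score"] > 100

    def sample_applicant(group):
        # Assumed population: both groups have the same income distribution,
        # but group "B" has lower neighborhood scores (e.g. due to historical
        # segregation). All numbers are hypothetical.
        income = random.gauss(80, 15)
        score = random.gauss(12, 3) if group == "A" else random.gauss(6, 3)
        return {"income": income, "neighborhood_score": score}

    def selection_rate(group, n=100000):
        return sum(decide(sample_applicant(group)) for _ in range(n)) / n

    rate_a, rate_b = selection_rate("A"), selection_rate("B")
    print("group A: %.2f%%, group B: %.2f%%" % (100 * rate_a, 100 * rate_b))
    # Demographic parity (one of several competing fairness definitions)
    # asks these rates to be close; the ratio below exposes the hidden skew.
    print("parity ratio: %.2f" % (min(rate_a, rate_b) / max(rate_a, rate_b)))

A sampling check like this only works for the one population distribution you assumed, which is exactly why the static, proof-based approach in the paper is so much harder (and so much more limited in the programs it can handle).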
Look at Tesla with its "autopilot" feature. It's not really a true autopilot, more just a driving assist, but people treat it as such. I think it's easy for people to fall into the trap of relying heavily on something that is shiny, new, and works well despite being imperfect -- even if it is explicitly stated to be.
Nanotech has a similar problem at hand. There are indications that nanoparticles could have serious health effects. Despite this, researchers are pushing ahead full steam with bringing nanotech to market. The money going to development far exceeds what's going to testing safety. In an AMA with a nano-materials researcher, I asked if he ever has concerns about the safety of what he is making. His response was along the lines of "Sure I do, but it's not my job to deal with that. I just get paid to develop the tech."
Tech development has always had a shoot first ask questions later approach.
>It's not really a true autopilot, more just a driving assist.
Then naming it "autopilot" is deceptive marketing at best. I love Elon as much as anyone else, but I find this marketing practice to be downright dangerous.
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
I think this is rooted in the fact that humans don't really do intelligence all that well themselves. Most of us raise kids in a manner that assumes children aren't independent thinking beings, we have a lot of social "rules" that fail to take into account actual independent thought, and then humans bring these big blind spots to AI work.
Until we overcome such issues in humans, they probably are not solvable in AI.
Children aren't independent from their parents, pretty much by definition. They do have individual thoughts, but if their thoughts are concerned with their dependence, are those thoughts really independent?
I have raised two children. They are now in their twenties. From the get go, I dealt with them as beings who did things for a reason, and that reason was generally assumed to be about dealing with their needs, not "doing" something to me. Many parents expect kids to "behave" and that definition of "behaving" is rooted very much in what adults see and think about the child, not what the child is experiencing. This is inherently problematic.
Children may be dependent in many ways on their parents, but once they are outside the womb, if the parent dies, the child does not automatically die. They are a separate being. They have separate experiences. Their reasons for doing things come from their first person experiences.
Then parents very often try to impose third person motivations -- people-pleasing expectations -- that frequently interfere with the child pursuing its own needs.
We need to get better at dealing with kids as separate entities if we want to have any hope of dealing with machines functioning independently.
Your remark just reinforces my opinion that people do this badly. You think dependence is a given and I am not even sure how to go forward with this conversation because of this stated assumption.
>You think dependence is a given and I am not even sure how to go forward with this conversation
Well, even you evidently are dependent. At least, if your telling me what you think is to some degree intended to solicit a response from me, then your conversation depends on my answer. Humans are social, which is pretty much by definition not independent. Sure, this is splitting hairs over the meaning of dependence, but I didn't get what that has to do with AI, anyway. Surely, the AI depends on its design, while the design constraints, the laws of nature if you will, are a given.
> but I didn't get what that has to do with AI, anyway.
Humans get a lot of feedback other than explicit algorithms as to how to act or behave or what to do. A lot of that is social feedback and a lot of the expectations are about what other people think, in essence.
If you want an individual AI system to be functional and "intelligent," we need to be able to write algorithms that work without that extra stuff. In order to effectively write those algorithms, we need to be able to think differently about this problem space than most people do.
Yes, conversation is inherently dependent on another party being involved. It isn't conversation if you just talk to yourself. Conversation has the capacity to add value.
I may have been unclear about how I am defining this, but it is not unclear to me and I don't see how "going down that path" automatically makes it unclear.
Humans are pretty bad about imposing third party points of view on behavior and reasoning. Until we get better at understanding reasoning from a first person point of view for humans, we are going to have trouble figuring out how to write effective algorithms for AI.
I guess you are contradicting yourself, when you say
>humans don't really do intelligence all that well themselves
>[(the definition of) intelligence] is not unclear to me
because knowing the definition of something and knowing that thing are largely the same, so you are saying you know intelligence, but humans in general don't.
In your previous reply you used the word "intelligence" in a manner which had assumptions in it (this is, after all, how humans communicate). "AI" uses the same word with an overlapping but different set of assumptions.
Not that your reply answers my actual question, but I would be interested in knowing what you believe my assumptions were and how these differ from those used in AI.
I try to avoid making assumptions, including about what the assumptions behind your two uses of the word "intelligence" were. If you used them coherently, then that's wonderful, but their definitions are not distinct outside your head. Their general (i.e. dictionary/scientific) definitions are not absolute.
I'm an intelligent person. I know lots of intelligent people who are stupid and I know lots of stupid people who are intelligent. And I'm one of them, and I don't even know which one.
Well, that's a rather weasel-y non-answer answer, but it sounds to me like you think I am calling people "stupid" and that isn't what I am doing. I do know something about the background of intelligence testing and what not for humans. That definition of intelligence is inherently problematic.
Again: My point is that people frame things far too often from a third party point of view. This inherently causes problems in decision-making. Sometimes, humans can kind of muddle through anyway, in spite of that default standard. But AI is much less likely to muddle through anyway when coded that way.
If you (or anyone) would like to engage that point, awesome! Otherwise, I think I am done here.
Key thought: AI's social and cultural impact, in other words AI's implications on "social mobility" (your rags to riches stories, the American dream).
The authors are asking: in a hypothetical world where many decisions are AI-assisted, what is the risk that AI systems slow social change because they are too dumb to understand exceptions, peculiarities and positive externalities? What can we do to establish parameters that will let us know when a certain AI system is trained well enough to be used in the real world, with minimal risk of undesired social and cultural implications?
Systematic analysis of AI biases is certainly needed. We train models on data, but how is the data collected, and how biased is it? At least in AI we can compensate for biases, but in human society they are much harder to counter. There's hope for a better future if we can make fair AI.
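As a concrete (if simplified) example of what "compensating" can mean in practice, here is a sketch of reweighing training examples so that group membership and the outcome label look statistically independent before a model is fit. The data is invented, and this follows the general idea of reweighing schemes (e.g. Kamiran & Calders) rather than any particular library's API:

    from collections import Counter

    # (group, label) pairs as they might appear in a skewed historical dataset
    data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 20 + [("B", 0)] * 80
    n = len(data)

    group_counts = Counter(g for g, _ in data)
    label_counts = Counter(y for _, y in data)
    pair_counts = Counter(data)

    def weight(group, label):
        # expected probability if group and label were independent,
        # divided by the probability actually observed in the data
        expected = (group_counts[group] / n) * (label_counts[label] / n)
        observed = pair_counts[(group, label)] / n
        return expected / observed

    for g in ("A", "B"):
        for y in (0, 1):
            print(g, y, round(weight(g, y), 2))
    # These weights would then be passed to the learner (many training APIs
    # accept per-example weights) so under-represented (group, label)
    # combinations count more and over-represented ones count less.

Reweighing only addresses one narrow notion of bias in the data, but it illustrates the point: with an algorithm the correction is at least explicit and auditable, in a way a human judgement call never is.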
"As another example, a 2015 study9 showed that a machine-learning technique used to predict which hospital patients would develop pneumonia complications worked well in most situations. But it made one serious error: it instructed doctors to send patients with asthma home even though such people are in a high-risk category. Because the hospital automatically sent patients with asthma to intensive care, these people were rarely on the ‘required further care’ records on which the system was trained."
As morbid as it may be, I wonder if that system can tell the difference between a lamb and a human of comparable size. The system might still be able to identify (or misidentify) all targeted joints.
With or without AI, we already have issues with too much automation/assistance, and it gets bad when the automation fails or isn't maintained.
Basically if you can drive a manual car, it is easy to drive an automatic one, but the opposite is not true.
Old GPS units, and even new ones, got the address wrong about 15% of the time when I was a mover. And not only can the GPS fail: what do you do when you have no usable maps?
Well, we have fired the people who made the maps; they are hardly updated at the pace at which mayors and real-estate promoters are changing the territory. If you have an awesome GPS with no updated maps, your GPS is useless, no?
We are forgetting to do the heavy, costly underlying maintenance of maps and directions, and to train drivers to read signs, figuring GPS made all that obsolete. Now we have to maintain maps, satellites and computers, and live with people who are unable to use a map and a compass, who are distracted while driving by potentially wrong information, and who are too dumb to read the sign saying they are entering a one-way street the wrong way, because they rely on their GPS.
Then too, the automation in Airbus, Tesla and Boeing vehicles has proven to be less valuable than pilots' experience when computers fail due to false negatives (frozen Pitot probes) or false positives (sun-blinded cameras). I think civil and military accident records are a nice source of information about the "right level of automation".
The problem is that keeping workers up to date requires constant, heavy practice without too much automation. And human time nowadays is expensive.
That is one of the reasons France (unlike Japan) kept automation in its nuclear plants rudimentary. When a system is critical, you really prefer a human who can handle things 99.999% of the time over a computer that does great 100% of the time if and only if its sensors work and nothing too catastrophic happens (flood, tsunami, earthquake).
The problem is that industry wants to skimp on costly training and education (not the university kind, I mean the kind that is actually useful). But knowledge you have not yet crafted because circumstances changed (I will be delighted to see how self-driving cars behave in massive congestion with deadlocks) will be hard to program if we lose the common sense that comes from doing the work ourselves. How do you correct a malfunctioning machine doing something you have forgotten how to do correctly yourself? You may not even know when it will fail, not because of the machine, but because of your own lack of a frame of reference.
I'm not too sure how practical the suggested "social-systems analysis" approach is. It is summarized as:
"A practical and broadly applicable social-systems analysis thinks through all the possible effects of AI systems on all parties."
which seems incredibly difficult to do completely. Hopefully the authors will further describe their approach in future publications.
Also, somewhat of a nitpick, but the article states:
"The company has also proposed introducing a ‘red button’ into its AI systems that researchers could press should the system seem to be getting out of control."
in reference to Google, but cites a paper which discusses mitigating the effects of interrupting reinforcement learning [0]. The paper makes a passing reference to a "big red button" as this is a common method for interrupting physically situated agents, but that is certainly not the contribution or focus of the work.
This is another angle of the "Weapons of Math Destruction" argument, and it looks very relevant. Those who work on big data (esp. public sector) would be wise to consider the implications.
For a good analysis of the problems of living with AIs, read the web comic Freefall.[1] This long-running comic has addressed most of the moral issues over the last two decades. Of course, that's about 3000 comics to go through.
The problem is that the AI community is treating itself like non-experts. Explaining that AI needs to be controlled by telling horror stories of robot domination is a good way to motivate research to lay people, but it is a distraction for professionals.
> For example:
> There is a blind spot in AI research | Autonomous systems are already ubiquitous, but there are no agreed methods to assess their effects
If it's interesting, I'll still click.