Barack Obama on A.I., Autonomous Cars, and the Future of Humanity (wired.com)
317 points by gflandre on Oct 12, 2016 | 229 comments



I don't know about you, but I'm very thankful that we have a president who can understand and intelligently talk about these issues (rather than thinking it's all just black magic).


I'm curious... which President are you saying thought it was "all just black magic"? Let's also remember that Obama is the man who unapologetically expanded covert/illegal surveillance programs to an unprecedented scale, directly costing American technology companies billions of dollars in sales. I'd say his actions reflect neither intelligence nor understanding of the tech industry.


One of the two major candidates for president responds to tech-focused questions with some ramblings about 'the cyber part of security' being impossible and how good his ten-year-old son is with a computer. That's quite the contrast with someone like Obama.


... and the other set up her own private email server, which got hacked by a [taxi driver](https://www.rt.com/news/362417-hacker-guccifer-clinton-roman...). However intelligent he may be, if he was able to get in, it's easy to imagine that any world government had access to all the information that came her way.


I don't know if Russia Today is a source I would choose, given that the counterclaim is that it was the Russian state behind the hack...


Argumentum ad hominem.

RT is state-supported, but it is only one of a number of sources that all reported pretty much the same thing. Guccifer, who was an unemployed taxi driver when arrested, led to the disclosure of HRC's server, and claims to have hacked it "like, twice."

http://www.nbcnews.com/news/us-news/guccifer-hacker-who-says...

http://www.foxnews.com/politics/2016/05/04/romanian-hacker-g...

http://www.msnbc.com/msnbc/hacker-guccifer-i-got-inside-hill...

^^^ There. Mainstream, American-as-apple-pie sources.


George W Bush proudly didn't use email and called the Internet the "Internets". Pretty big difference.


He left office 8 years ago; things have changed in that time. I sometimes refer to the internet as the internets too. Why would the most powerful person in the world need to use email anyway?


"I was elected to lead, not to read"


Bush is probably at the very top of the 99th percentile of readers. He reads a book or more a week.


Yes, Downandout, because as president he single-handedly did that. You do realize his political opposition LOVES that stuff, right?


Actually, yes, he did single-handedly do that. The authorization came straight from Obama, and then when it came to light, he basically didn't even bother apologizing. He lied to his base about being against this kind of thing, and then went and did exactly what he wanted to do.


He was pretty clear about his views that power corrupts, that he would expand military presence in Afghanistan, and that he believed in a living constitution.

Some people voted for him for those reasons; presumably others voted for his opponent(s?).

Sometimes I just feel like the punchline to all of this will be something like: "it's the patriarchy, stupid"



Waffling vaguely about grand ideas has always been Obama's greatest skill. It's what got him in the White House in the first place.


He has to "waffle vaguely" about grand ideas when addressing the whole nation because going into details would alienate most of his audience.

This doesn't mean he doesn't know what he is talking about, as you seem to be implying (I don't want to read too much into your comment).


It was a lot of waffle, with the odd buzzword interspersed. I don't mind Obama, but I do mind when people attribute understanding to him when he talks about topics at an extremely thin level.


Many forget that 8 years ago Obama was highly criticized for being a 'technocrat' and populating his cabinet with 'technologist' outsiders.

Leaders don't always have to understand the tactical details that are required for executing their strategy. Obama led as a technologist from day one.


Except that's not what happened. There was a lot of speculation that it would, but then he appointed a bunch of politicians to the cabinet, as is tradition.


His Secretaries of Energy were scientists, including a Nobel prize winner. Rather than being an example of a President ignoring technology, Chu's appointment seems like a pretty good example of how little difference paying attention to this stuff at the cabinet level really makes when Congress is still populated by a bunch of troglodytes.


I agree. But, I'd also argue that if he hadn't had the ongoing wars, recession, and hostile congress to deal with then lots would have happened.


Could've, would've, should've. Let's look at the one area where he had almost total discretion, foreign policy and national security. There's not much to speak of in terms of success there. Not only did he fail to advance the interests of the US, he also failed to deliver on progressive promises of scaling back the surveillance state.

The recent shenanigans with Russia are incredibly telling. Perhaps you remember in 2012 how Obama and his surrogates viciously mocked Romney for claiming that Russia was the biggest geopolitical threat faced by the US. Obama was caught on tape telling the Russians he could be "more flexible after the election". They took his flexibility and marched right into Ukraine with it. They took Obama's cancellation of a European missile defense shield, and reciprocated by positioning nukes on the border. They saw Obama's incoherent Middle East strategy, and took the Iranian-Syrian-Russian bloc to new heights. Now, they're apparently hacking our election infrastructure and have released incredibly embarrassing emails about the corrupt machinery of Obama's chosen successor. The humiliation is total. Obama owes Mitt Romney an apology.

There's another recent President who faced a hostile Congress, Bill Clinton. He was able to achieve a lot by being willing to compromise.


What's any of that have to do with pushing technology, though?


My point is that I don't buy the excuses. Obama is a good tactician, his masterful campaign and propaganda machine are proof of that, but he's clearly not an effective strategic leader.


We could have had Al Gore, but for Floridians, I mean the guy created the information superhighway --that's up there with the Tim Berners-Lee kind of stratum.

On the other hand, had he won, Obama would not have become president because we would not have had to "react" to the Bush years.


"We could have had Al Gore, but for Floridians, I mean the guy created the information superhighway --that's up there with the Tim Berners-Lee kind of stratum." Please tell me this is a joke.


It is, I thought it would be evident. Nevertheless, he wasn't a tech slouch, especially for a politician [and for the time he advanced his beliefs in tech]. To wit: https://en.wikipedia.org/wiki/Al_Gore_and_information_techno...


This is covered on that Wikipedia page, but for those maybe too young to remember, Al Gore's supposed claim to have invented the internet was widely joked about in the late 90s. From the article:

"After this interview, Gore became the subject of controversy and ridicule when his statement "I took the initiative in creating the Internet"[53] was widely quoted out of context. It was often misquoted by comedians and figures in American popular media who framed this statement as a claim that Gore believed he had personally invented the Internet.[54] Gore's actual words, however, were widely reaffirmed by notable Internet pioneers, such as Vint Cerf and Bob Kahn, who stated, "No one in public life has been more intellectually engaged in helping to create the climate for a thriving Internet than the Vice President.""


I too hope that this is a joke, '...created the information superhighway'.

I blame the education system for this kind of thinking. It is so heavily influenced by big government that many people have come to believe that demagogues actually have a role in the advancement of civilization. These folks (demagogues) really should get real jobs. They are mostly good at the production of words instead of the production of goods and services, or, as Hans-Hermann Hoppe likes to say, "Government specializes in the production of 'bads'". I'd like to recommend his book, Democracy: The God That Failed.


See below and the wiki article for context, but I wouldn't be so quick to downplay the critical role funding takes.

Or, in Gore's own words, "That's how it has worked in America. Government has supplied the initial flicker -- and individuals and companies have provided the creativity and innovation that kindled that spark into a blaze of progress and productivity that's the envy of the world."


>"That's how it has worked in America. Government has supplied the initial flicker"

That's not how it worked when America had the most growth. In the late 19th century, when there was less government intervention, America saw its most rapid rate of economic growth. I don't want to get into a debate about what led to the emergence of the Great Depression, but I strongly believe that it was due to government involvement in the economic affairs of America.

That aside, when you talk about funding, two assumptions emerge:

1. That government is the only way to get funding for revolutionary projects. It is not. Read up on IBM's and GE's research labs and you'll see what private research labs can accomplish.

2. That government has money. Government has no money. It only has money insofar as you and I allow it to take our hard-earned money in the form of taxes. You have to get out of the mindset that demagogues and bureaucrats actually do valuable work. They only get in the way of productive people in a bid to placate non-producers, thereby granting themselves power and undue privilege once in office.

As a side note: the 20th century was the most murderous period humanity has gone through. Why? Governments composed of demagogues made terrible decisions after convincing the demos that they could make the best decisions on their behalf.

At least during the aristocratic periods, it was well known that wars were the affairs of kings, and the common folk would not tolerate anyone who forced war upon them. Moreover, you couldn't be conscripted; the only people who went to war were the soldiers the king paid out of his own pocket. You may say the king still taxed people, but at least it was just one parasite with very low taxes, as opposed to today, when it is many, many corrupt parasites with very high taxes.


> They only get in the way of productive people in a bid to placate non-producers thereby granting themselves power and undue privilege once in office.

Oh boy. Anti-government extremists like yourself love to grandstand about the merits of unfettered capitalism without understanding that government creates the conditions for market capitalism to exist.

There would be no property rights without the police, no legal contracts without the courts, no medium of exchange without the Treasury. Governments provide minimal standards of worker safety, public health, and public education - all of which are necessary for a productive workforce.

Additionally, government is one of the only entities that can correct negative externalities (i.e., side effects of business, the classic example being air pollution).

Your position is ignorant of both economics and history.

https://en.wikipedia.org/wiki/Market_(economics) https://en.wikipedia.org/wiki/Gilded_Age


Do you value your liberty? I bet you do and you just don't realize you've been slowly but surely losing it.

I would venture that the only extremism, so to speak, that exists is in how government has increased in both its size and ineffectiveness. When you have a combination of the two aforementioned features, then things inevitably get worse.

> Oh boy. Anti-government extremists like yourself love to grandstand about the merits of unfettered capitalism without understanding that government creates the conditions for market capitalism to exist.

> There would be no property rights without the police, legal contracts without the courts, no medium of exchange without the Treasury. Governments provide minimal standards of worker safety, public health, and public education - all of which are necessary for a productive workforce.

Just because monkeys can ride bikes doesn't mean that only monkeys can ride bikes. Due to government's inherent inefficiencies and its tendency to grow and encompass ever more aspects of life, two things happen: you lose your liberty, and it becomes very expensive to sustain government.

I really don't see how you can fail to see that government is bad for you and that there's always a better way. I don't like the fact that, as the human race, we've resigned ourselves to thinking that we can innovate/disrupt most other things except for governance. When I hear statements like "democracy is the worst form of government except for all others", I cringe. Here's an idea: how about less government, or, where possible, no government. These demagogues and bureaucrats really aren't as important as you think they are.

Let John Galt be. Let the markets be. Obama and the rest of them have no place dictating how innovation and businesses should be run.


You have some valid points in critiquing government, but you're doing them a severe disservice by the way you present them. Context-appropriate facts, not rhetoric.

> In the late 19th century, when there was less government intervention, America saw the most rapid rate of economic growth.

The late 19th century of American history featured a large number of one-time-only economic improvements and wholesale pillaging of vast amounts of natural resources.

These included the completion of a transcontinental railroad (1869, with government sponsorship via the Pacific Railroad Acts), the Pennsylvania oil rush (1870s), the settling and harvesting of the West (1850-1900), and the implementation of manufacturing economies of scale on the back of the new rail system.

Additionally, unrestrained consolidation of competition into cooperative trusts gave rise to monopolies that Theodore Roosevelt spent considerable political capital resolving in the early 20th century via lawsuits under the Sherman Antitrust Act (1890). See Northern Securities Co. v. United States (1903) and Standard Oil Co. of New Jersey v. United States (1911).


>You have some valid points in critiquing government, but you're doing them a severe disservice by the way you present them. Context-appropriate facts, not rhetoric.

I think you just don't like that I'm right about this. If I'm making a valid argument, should it matter that I'm presenting it in a manner that is inconsistent with some form of political correctness?

>Additionally, unrestrained consolidation of competition into cooperative trusts gave rise to monopolies that Theodore Roosevelt spent considerable political capital resolving in the early 20th century via lawsuits under the Sherman Antitrust Act (1890). See Northern Securities Co. v. United States (1903) and Standard Oil Co. of New Jersey v. United States (1911).

No one is saying that monopolies are good. In fact, one could argue that they aren't very capitalistic: capitalism requires voluntary exchange, and when you as a consumer have only one option for a basic necessity, that looks more like the very opposite of voluntary exchange.

As a side note, it is curious that when you bring up these facts that sort of demonize the likes of Morgan, the fact that he single-handedly led the financing of the bailout of America during the economic crises of 1893 and 1907 never comes up. In 1895, following the Panic of 1893, President Cleveland borrowed $65 million in gold from J.P. Morgan to support the gold standard, thus ending the panic.


Here, I've boiled y'all's argument down for easy digestion:

When the economy does well:

    Pro-government: government fueling/supporting industry into prosperity

    Anti-government: markets make everyone richer as expected
When the economy does poorly:

    Pro-government: capitalist greed/exploitation leads to ruin

    Anti-government: government regulation/strangulation leads to ruin
Feel free to repeat this argument for every boom/bust in the past/future


I never quite got what your preference is. Are you saying that it is all futile and we should just leave the markets be?


My stance is that this is a nature/nurture kind of debate; arguing either extreme is ridiculous, as there are clearly elements of both sides that are true.

No positive/negative turn in the economy can be isolated to a single cause, so neither side will ever be convinced by the other's arguments.

A third point, which I think is often lost in these arguments, is that governments have interests beyond improving the financial standing of their citizens. A pure market capitalist probably wouldn't think subsidizing farming or shipbuilding a particularly good idea, but the government might be willing to accept some market inefficiency in exchange for food security, or for having an established shipbuilding industry in times of war. Similarly, high income inequality may cause social unrest; it's in the government's own interest to prevent this, so it may be willing to accept lower total national wealth in exchange for more evenly distributed wealth among its citizens, by imposing progressive taxes and creating welfare programs.

Everybody agrees that markets work, the main questions are: are they optimizing for the thing you want? and are there cultural/political externalities that the market doesn't care about but a government might?


This is just hollow legacy-chasing. Just like how he crippled the space program during his 8 years in office, only to discover a passion for it in the last few days.


I agree 100% with the hollow legacy-chasing you're talking about. Please do elaborate more on how he's crippled the space program. If you were to ask me, I'd recommend not having government involvement in such things anyway.


I do think that there is a long-term security interest for the human race in populating other planets with self-sufficient colonies that could survive natural disasters on Earth. But I'm not really informed enough of the economics of space exploration to have a firm opinion about what the government's role is, though I agree that it should be as small as possible.

What I do know is that he's suddenly talking a big game about public/private partnerships for Mars when he undermined the previous public program, and with very convenient timing so that he won't be responsible for any of the tough decisions. It also seems like he's trying to steal some headlines away from SpaceX and Boeing and make sure that the public sector, which he idolizes so much, doesn't look impotent by comparison.


I think it might have had something to do with the recession. Now that the economy is improving, it is a better time for such things, imo. I applaud your long term thinking, but the short term is important, especially for those worried about their jobs.


Maybe I'm overly cynical, but I feel like a lot of the tech talks Obama gives are heavily scripted and he doesn't understand everything he's saying. He once made a joke about sorting algorithms, and I find it super hard to believe that someone who's never programmed before (outside of publicity stunts) would be capable of doing that.

However, I guess props are due for the effort even if he doesn't personally understand all of it.


I was wondering why the authors would choose to interview someone with no particular credentials or expertise in the subject.


What's "the subject"? I'm not sure if you are being sarcatic, but:

“ITO: I feel like this is the year that artificial intelligence becomes more than just a computer science problem. [...] the question is, how do we build societal values into AI?”

The point is that the subject is far broader than the CS department. If you think Obama doesn't have relevant expertise, your view of the subject is too narrow.


He signs into law or vetoes legislation affecting public policy and economics.

I think it's a stretch to say he has "no particular credentials" or understanding of technological trends.


Why are you thankful for that? Does he add anything useful? Does his talking about these imply anything about his work as a president?


What does Hacker News think about AI? Is it real this time, or are we in for another winter? I'm seeing a lot of grand claims, and it certainly seems like there are plenty of applications, but I'm still not totally convinced that it will turn the entire economy upside down.

Given the enormous amount of press, tweets, blog posts, conferences, degree programs, seminars and interviews popping up, it seems like there has to be something more than just hot air here. Still, the most outrageous predictions hinge on breakthroughs in unsupervised learning happening. Taking the pessimistic view on science, what if we don't get there?


Like they say in the article, there's general AI (think sci-fi computers with minds) and specialized AI (think good old fashioned statistical models, but applied to more things and super effective).

Specialized AI (and I hate calling it AI) is coming along really quickly. We're getting better at it in existing fields and learning to apply it to new fields. More than anything, we just have so much data on everything now, and computers are pretty powerful now, so even old school models are finding tons of new applications.

Generalized AI is a different story. We are a few really major breakthroughs away. We aren't even 100% sure they are possible, much less how to achieve them. These aren't the normal slowly-chip-away-at-it breakthroughs; these are things we have no clue about. With something like that, who can really say how far we are? It could be 5 years, it could be never.


We don't even have good generalized robot manipulation. Think of that as monkey-level AI. After half a century of hard work, robot manipulation in unstructured situations still sucks. Go watch DARPA Robotics Challenge videos or the old Willow Garage towel-folding to see how badly it sucks. Factory robotics works because the work situation is highly structured.

On the other hand, once someone cracks that, a huge number of low-end jobs will be automated.


Also mentioned was that specialized AI is creating social problems we need to solve. There are issues with fairness—specialized algorithms all too easily replicate bad/unfair patterns in the data (e.g. they replicate/amplify unfairness in the world), and that harms folks; sometimes they might not even be aware of it.


That's exactly what is about to happen. Deep learning has the potential to do robot control. Currently, researchers are beating tons of video games using reinforcement learning with deep networks. Applying the same methods to robots shouldn't be too hard. And we've come a long way with machine vision over the past 5 years.
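To give a flavor of what that looks like, here is the core update rule in miniature: tabular Q-learning on a toy corridor. DQN-style deep RL swaps the table for a neural network but keeps the same rule; the environment and every number below are invented purely for illustration.

    # Tabular Q-learning on a toy 5-state corridor; deep RL replaces the
    # Q table with a neural network but keeps the same update rule.
    import random

    n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

    def step(s, a):
        s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == n_states - 1 else 0.0)  # reward at right end

    for episode in range(500):
        s = 0
        for t in range(20):
            a = random.randrange(n_actions) if random.random() < eps \
                else max(range(n_actions), key=lambda a: Q[s][a])
            s2, r = step(s, a)
            # move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print(Q)  # action 1 (right) should come to dominate in every state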


I'm as bullish on AGI as anyone in the medium term, but deep learning is not even playing the same game as AGI, let alone in the same ballpark or having the potential to achieve it.

Deep learning is still mere perception. It doesn't handle memory or processing, it just transforms input into output, typically trained by Big Data, way bigger than necessary statistically speaking, given the world we live in.

AGI requires super aggressive unsupervised learning in recurrent networks, likely with specialized subsystems for episodic and procedural memory, as well as systems that condense knowledge down to layers of the network that are closer to the inputs. At a minimum. And nobody is really working on any of that yet (or at least succeeding) because it's really damn hard.

That's why everyone in "AI" is rebranding as a deep learning expert, even though deep learning is really just 1980s algos on 2016 hardware - you gotta sex up feed forward backprop or you don't get paid.
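For anyone who hasn't seen it spelled out, that 1980s core really is tiny. A feedforward net trained with backprop in plain numpy on toy XOR data; layer sizes, seed, and learning rate are arbitrary choices, and this is a sketch, not production code:

    # The core of "deep learning": a feedforward net trained by backprop,
    # essentially the 1980s algorithm run on modern hardware and data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
    y = np.array([[0],[1],[1],[0]], dtype=float)   # XOR targets

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for i in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: chain rule, layer by layer (squared-error loss)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # should approach [0, 1, 1, 0]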

Edit: to be fair, robot control is much simpler than AGI, and might be mostly solved with deep learning somewhat soon, I forgot the context of your post.


>Deep learning is still mere perception. It doesn't handle memory or processing,

There's more going on than convolutional neural nets. Architectures with memory and attention mechanisms do exist.

As examples: https://arxiv.org/abs/1506.03134 https://deepmind.com/blog/differentiable-neural-computers/


Sure, and I probably shouldn't have glossed over that. That sort of research is definitely progress, though it's not paradigm shifting in any way. I do think that we are getting past perception slowly but surely, I just don't think we're there yet.

What really doesn't exist is any meaningful stab at unsupervised (or self-supervised) training on completely unstructured inputs or any sort of knowledge condensation/compression, at least for time dependent problems. These are of paramount importance to the way we think, and to what we can do.

There's a lot of trivially low-hanging fruit, too - I still have yet to see even a grad school thesis that starts with an N+M node recurrent network and trains an N node subnetwork to match the outputs based on fuzzed inputs, and then backs that out into an unsupervised learning rule that's applicable to multiple problems. Or better, a layered network that is recurrent but striated, that tries to push weights towards the lower layers while reproducing the same outputs (hell, even with a FF network this would be an interesting problem to solve if it was unsupervised). These are straightforward problems that would open up new avenues of research if good methods were found, but are mostly unexplored right now.

I could be wrong, if I had real confidence that we were close I'd be working on this stuff, but I'm collecting a paycheck doing web dev instead...


Sequence-predicting RNNs are basically unsupervised, in that they can learn from lots of raw unlabelled data. And they learn useful internal representations which can be adapted for other tasks. There is lots of old work on unsupervised learning rules for RNNs, including recurrent autoencoders and history compression.
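Concretely, the "unsupervised" part just means the labels come free from the raw sequence: the target at each step is the next character. A sketch of the forward pass and loss of a small Elman-style RNN (weights untrained, sizes arbitrary, training via BPTT omitted):

    # Next-step prediction needs no human labels: the "label" for each
    # character of raw text is simply the character that follows it.
    import numpy as np

    text = "hello hello hello "
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    V, H = len(chars), 16

    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.1, (V, H))   # input -> hidden
    Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (the recurrence)
    Who = rng.normal(0, 0.1, (H, V))   # hidden -> output

    h = np.zeros(H)
    loss = 0.0
    for cur, nxt in zip(text[:-1], text[1:]):
        x = np.zeros(V); x[idx[cur]] = 1.0
        h = np.tanh(x @ Wxh + h @ Whh)       # carry state across time
        p = np.exp(h @ Who); p /= p.sum()    # softmax over next character
        loss -= np.log(p[idx[nxt]])          # cross-entropy vs. the free label

    print(loss / (len(text) - 1))  # training would minimize this via BPTT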


This just came out from DeepMind:

Differentiable neural computers - https://deepmind.com/blog/differentiable-neural-computers/

> we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data

So it seems that deep neural nets can have memory mechanisms and be trained to solve symbolic operations.


I'm not talking about AGI at all! Just robot control. It's a difficult problem, sure, but not that difficult. There has been massive progress on it and related problems. I have no doubt we will have 'solved' it within a decade.


The systems which learn video games work only for games where the state of the game is entirely visible, and the desired player action can be decided based only on the current state. PacMan, yes. Doom, not that way.


That's only because they didn't use recurrent neural networks, which save information over time. RNNs make it possible to play games with hidden state. DeepMind is currently working on that with StarCraft, which is vastly more complicated than Pac-Man. They also have some work on 3D games like Doom.

A few weeks ago there was a paper posted on "Synthetic Gradients", which should make it much more practical to train RNNs for games. Before, training required saving every single computation the computer makes to memory, which uses a huge amount of memory and computation. Using synthetic gradients, they need only store a few steps into the past. And it can learn online.


They kind of work with Doom: https://www.youtube.com/watch?v=OHNXdGqxyQA

And have some problems with PacMan as the system can't plan: https://www.technologyreview.com/s/535446/googles-ai-masters...

Maybe those should be the other way around?


That's what Rod Brooks claimed when he started Rethink Robotics. It hasn't worked out well.


The ability for AI to approach these problems has only existed in the last 2-3 years. The tech was really not there in 2008, and it's still very rough and cutting-edge in 2016. But we are at least seeing the first glimpses that it's definitely possible. If AI can play StarCraft, then surely it can control a simple robot. And anyway, see my other comment.


Honest question: Why do you think that generalized and specialized AI are distinct things? Is it not possible that general AI is a specialized AI applied over the field of specialized AI generation?


What people mean by generalized and specialized AI is not consistent in the field, but everyone agrees that the current brand of AI, driven by statistical learning techniques and large-scale neural networks, is far from explaining how even the simplest of nervous systems work. The key obstacle is adaptation. Several people believe that they've more or less solved the recognition problem. However, adaptation is a totally different thing. There are no tools in the current AI toolkit that we can use to make a robot that can go out unsupervised in the real world, do something useful, and come back safely. Whereas even the nematode C. elegans, with only 302 neurons, is remarkably flexible: it can forage for food, remember cues that predict food, manage food resources, get away from danger or noxious stimuli, etc. This allows it to survive quite well in a world that is constantly changing in unpredictable ways. This is the kind of intelligence that proponents of so-called general AI want, and I agree we are a couple of major breakthroughs away.


And we have a complete wiring diagram for C. elegans, and no clue how it does any of the things you talked about. So, yeah, general AI is really far off.


To be honest, the wiring diagram is a bit of a distraction from the really big questions. It has its uses for sure, and is really essential in many situations but overall it gives this illusion that we understand something important about the system, where in reality we don't. Understanding a biological system from its wiring diagram is something like understanding a city by studying its road map.


I have no idea what I'm talking about. But why couldn't we build some sort of bio-computer hybrid system around a simple form of life, like "C. elegans" but augmented with traditional CPUs?


That's another option and there are people who do that https://blog.inf.ed.ac.uk/insectrobotics/.


> Is it not possible that general AI is a specialized AI applied over the field of specialized AI generation?

AI problems can be characterised as those where there's no clear path to a solution (otherwise we just call it "programming"); tackling them necessarily involves trial-and-error, backtracking, etc.

Since there are far too many possibilities to enumerate, solving such problems requires reasoning about the domain, e.g. finding representations which are smooth enough to allow gradient descent (or even exact derivatives); finding general patterns which will apply to unseen data; finding rules which facilitate long chains of deduction; etc.

The difficulty is that there's usually a tradeoff between the capability/expressiveness of a system and how much it can be reasoned about. If we choose a domain powerful enough to represent "the field of specialised AI generation", for example Turing machines or neural networks, methods like deduction, pattern-finding, gradient following, etc. get less and less applicable and we end up relying more on brute force.

To me, this is where the AI breakthroughs are lurking. For example, discovering a representation for arbitrary programs which allows a meaningful form of gradient descent to be used, without degenerating into million-dimensional white noise; or to take deductive knowledge regarding one program and cheaply "patch" it to apply to another; and so on.


My two cents: they are separate because there is no current algorithm that can take us from modeling (whether classical statistics or neural net) to intelligence. Applying our current specialized techniques to AI generation has not gotten us there. That is because the techniques are mostly model tweaking techniques. The models are generated and trained for each problem domain. A combined solution may be developed soon, but I doubt it.

There was a great article recently on HN that highlights the current problems:

http://www.theverge.com/2016/10/10/13224930/ai-deep-learning...

https://news.ycombinator.com/item?id=12684417

Just because we may acquire the processing power estimated to be used in the brain (in operations per second) doesn't mean we know how to write the software to accomplish the task. It is very clear current algorithms won't cut it.

Also, I think we are a few orders of magnitude off on raw processing requirements because I think it is a bandwidth issue as much as an operations per second issue.

TL;DR - you could throw as much processing power and data as you want at any current deep NN or their derivatives and you wouldn't get general intelligence.

That said I don't think the winter will be as bad as before because, like OP says, specialized AI is useful.


Specialized AI is all about X,Y pairs. Given X, predict Y. There are other problems it's good at too, like given X, choose a Y to optimize Z, but at its core it's largely the same. On the fringes, you have stuff about exploration, which is AWESOME, but still pretty niche. At least 99% of the "AI" you hear about is of the X,Y variety. More to your point, if we can make generalized AI from "given X, predict Y," then nobody's figured out how to do it, and nobody has super promising research tracks to get there.
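To make the X,Y framing concrete, here's about the smallest possible instance of it: learn a map from (X, y) pairs, then predict on new X. All data here is synthetic and the numbers are invented:

    # "Given X, predict Y" in its simplest form: a least-squares fit.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                      # 100 examples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.05 * rng.normal(size=100)  # noisy labels

    w, *_ = np.linalg.lstsq(X, y, rcond=None)     # learn from X,Y pairs
    print(w.round(2))                             # close to [2, -1, 0.5]
    X_new = rng.random((5, 3))
    print(X_new @ w)                              # predict Y for unseen X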

I think a lot of the early AI research (not my specialty) had the idea that if we made a bunch of systems that were good at their own piece of the puzzle, then we could just tack them together and get real intelligence. It just didn't turn out that way. Something I'm more familiar with is graphical models, and while they in principle could do amazing things when you stick little expert components together, we've proved the complexity grows pretty badly in exactly the most general cases that would have been really amazing. I'd bet similar things happened in other "let's put a bunch of specialized systems together" tracks. Maybe we can do it, but not the naive way that would have been great.

Then you can get interesting and philosophical about it, where you might even say that emulating intelligence and intelligence are different. Like the Chinese room thing, or even a character in a story vs. a physical person. I'd rather not weigh in on that right now, but there are good interesting arguments both ways.


>Then you can get interesting and philosophical about it, where you might even say that emulating intelligence and intelligence are different.

This would be a very surprising result. For example, if I can make a TSP-solver-emulator... I have a TSP solver.


I guess I should have been more specific. I meant sorta convincingly emulating intelligence versus fully meeting some other definition. Is a turing test enough?


Specialized AI - think about retina neurons, capable of detecting direction, edges, depth etc.

General AI - think about thinking machines

We can do the former, have no clue how to do the latter.
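The retina analogy can literally be written down: a direction/edge detector is essentially a fixed kernel slid across the image. A toy sketch with a Sobel-style kernel on a synthetic image (everything here is invented for illustration):

    # "Retina-level" specialized perception: edge detection as a
    # sliding-window filter with a small fixed kernel.
    import numpy as np

    img = np.zeros((8, 8))
    img[:, 4:] = 1.0                     # a vertical edge down the middle

    kernel = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)

    out = np.zeros((6, 6))
    for i in range(6):                   # valid sliding window, no padding
        for j in range(6):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)

    print(np.abs(out))                   # large responses only along the edge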


It's not such a huge jump to the latter when you nail the former.


This is what's called a bald assertion.


Can you give a little bit of details about your background in AI or at least statistics/machine learning?


This is not a problem, because there is no such thing as generalized AI. There's just specialized AI for lots of things, added together. Keep getting better at things, and after a while you are good at lots of things, so you appear general to an observer (until they see you outside your domain of expertise).

This theory compactly explains why no one knows how to do general AI.

I don't believe there is a special sauce waiting to be discovered.


That's not what General AI is though (at least how I define it), real AI would have to be able to invent new things, not just be good at existing things.

Many humans go their whole lives without doing that, so I'm aware it's a high bar. But it's a bar that some humans do pass, and if AI is to be more than just a helpful gimmick, it'll have to do that as well, since I'd like to believe all humans have that potential, even if not always realized.

(Obviously a helpful gimmick still does have value.)


But the brain still is "just" a neural network, granted, with immensely more complex neurons and extraneuronal mechanisms that may well be crucial for learning. But the difference between a highly intelligent person and a blank-stare person can be as small as switching off a small part of the brain or tweaking a neurotransmitter. Which shows me that general intelligence is something extremely sensitive, as opposed to something like vision processing, which takes much more drastic changes to disturb.

Looking at the brain, it often does look like a bunch of interconnected specialized neural networks.


Most of the 'specialized' AI techniques we have aren't even Turing complete.


Unlike 30, 20, or even 10 years ago, we have heaps upon heaps of labeled datasets now. So much human life happens through technology that we are doing an astounding job of producing labeled datasets.

And computers are fast now. And RAM is infinite. And GPUs are fast and plentiful. And all of this is cheap.

AI stuff is already more real than most of us realize on a day-to-day basis. While machines might not be "intelligent" per se, their cleverness has definitely started impacting desk jobs.

The desk job stuff is what most of those alarmist news items are worried about. Nobody seems to worry [too much] about reducing blue collar jobs through automation. Can you imagine how many people you'd need to unload a modern container ship without computers tracking stuff and optimizing storage? Or how much work it would take to harvest modern crops where automated harvesters are used?


There are some promising ideas out there based on unsupervised ("self-supervised") learning. There, the problem of needing big labeled datasets doesn't exist: just turn on a camera and have a motor point it at the surrounding world. Data of this kind is and always has been super abundant. But using it requires making a conceptual leap away from supervised pattern recognition -- which is pretty much what today's "AI" does...
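As a sketch of that objective: in next-frame prediction, the target for frame t is simply frame t+1, so raw video labels itself. Here a linear predictor and synthetic "frames" stand in for a real model and a real camera:

    # Self-supervised learning sketch: no human labels anywhere.
    import numpy as np

    rng = np.random.default_rng(0)
    T, D = 200, 16                     # 200 frames, 16 pixels each (toy)
    frames = np.cumsum(rng.normal(0, 0.1, (T, D)), axis=0)  # smooth-ish "video"

    X, Y = frames[:-1], frames[1:]     # inputs: frame t; targets: frame t+1
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    pred = X @ W
    print(np.mean((pred - Y) ** 2))    # prediction error; the data labeled itself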


It'll make you miss spam.


There is a real risk related to goal oriented AI. It does not need to feel or dream. Merely having survival as its goal is sufficient to make it dangerous to other life forms. Worse is that it can happen at any time (it may have happened already). Given the computing power, tools, and availability of knowledge we can assume that it can be done outside of a controlled lab environment by a non-scientist.


I'm not too worried about it, because AIs will need to feel and dream and care and suffer and empathize to begin to be as intelligent as a human.

Imagine a human who lacks empathy entirely. That's a disability. They may be able to do some amount of destruction, but at some scale they simply lack the social intelligence necessary to compete with the entire species.

This is the most common mistake I think people make when reasoning about AI: they think human limitations are weaknesses. But they're not weaknesses; they're tradeoffs. Natural selection has had a chance to reward all kinds of variations, including more cortex and less empathy. But we ended up where we are because of tradeoffs.

Any AI which is intelligent in the same way humans are will also have our limitations. Any AI which doesn't have our limitations won't be as smart as us in those respects.

You have to really ask yourself what the difference is between a human with an AI simulator and an AI with a human simulator. In the limit of simulator quality there is none.


Huge point you make that it can be done by non-scientists now. Now we can all play with fire. That much power in idle hands controlled by mediocre minds is like everyone being magical.


"mediocre minds"?


It sounds weird, but I believe it's meant as "untrained minds".


Yes, untrained, but more broadly meaning people who don't understand the magic, people who are negligent, etc. I kind of got it from Einstein's quote, "Great spirits have always encountered opposition from mediocre minds."


I'm optimistic about what'll happen right after the upcoming AI winter. :)

So long as AI remains tasked with categorizing human-taken photos or playing human-created games (no matter how "complex"), AI will remain just that -- artificial -- and not "real".

Given the rate of hype that you point out, I suspect a winter of some kind will hit before enough folks realize this.

[edited: reorder]


What should they focus on then?


If one wants general AI that can deal with/understand the world, the system needs to learn based on (raw, unadulterated) data from the world. These data are highly dynamic and rarely fit neatly into human-labeled categories. This is part of why the currently-hyped supervised pattern recognition is not all that helpful for general AI.

By the way, the above requirement of real world data probably even applies to building chatbots. (A successful chatbot will need some understanding of the world it is talking about; this is what researchers mean when they say "grounding" is important for NLP).


As an AI "hipster" (I was into it before it was cool), it's certainly interesting seeing mainstream culture now catching up to the implications it has. I'm glad this is happening, and I hope the discussion broadens and we hear from all people. I'm one of the whacky ones who believes it's a really important rung in our evolution as a species, so I think being collectively aware of where we're going can only be a good thing.


I think machine learning has a real ability to do certain things (image recognition especially) and do them in a way that isn't hype by itself.

I suspect that you'll encounter limits to the techniques before they do everything a person can do.

If there's a potential problem with machine learning that could sink the enthusiasm over time, I suspect it would come because machine learning applications are black-box systems which are the product of training with huge datasets and the very tuned level of expertise of their creators (a common joke is to talk of the "graduate student descent" algorithm: getting enough grad students to tune your app till it works). It may be that the deployers of these applications will find that when they have to train them again, in a year's time, the geniuses have moved on to other things, or that the geniuses now charge rates that look excessive for an application that works for just a year.

But that's just spinning possibilities. Currently things seem to be going great.


I have often wondered if there is some sort of universal limitation on general AI singularities, particularly if the current universe is actually a simulation. An intellectual singularity, or extremely rampant aggressive intelligence, might be an errant state that gets dealt with (aka a universe circuit breaker / watchdog). This might also explain the Fermi Paradox.

The other idea I have toyed with is that perhaps universes are the manufacturing tool by which some parent universe creates super AI.

> Taking the pessimistic view on science, what if we don't get there?

I have no interest in living forever, but I really, really wish I could be told what will happen, or what did happen, or what all this is. I'm sure specialized intelligence will continue to improve, but my gut says general intelligence is probably not in our lifetime (or is limited or capped for the above-mentioned, pop-culture, probably-wrong reasons).


I'm curious: how interested are you in understanding the Earth ecology? Because we understand very little about it, and we are very rapidly destroying the information it contains.

I ask because I wonder, if we did meet or make a creature which could learn everything, if it wouldn't say some variation on "99.9999% of everything there is to know is right there in your ecology and visible to the naked eye. Go look."

I also wonder if one of the first things the sentient AIs teach us is that yes, we are committing an egregious ongoing information Holocaust through habitat destruction.


>The other idea I have doted on is that perhaps universes are the manufacturing tool to create super AI by some parent universe.

And perhaps our creators are long dead but the simulation keeps going...


"press, tweets, blog posts, conferences, degree programs, seminars and interviews"

That is the hot air


I wish people would stop bringing up the trolley problem as somehow relevant to self-driving cars. In any situation, a car is going to brake as hard as possible while maintaining control, and only swerve if there's no way to stop and there's a clear space. Other than that, a good enough interview, though I can't see one like it taking place again next presidential term.


The trolley problem is just an extreme example, that people can easily understand and think about. Less extreme versions of it occur all the time.

For example, say you are driving along a road, passing a bicyclist. You'd like to give the cyclist more room, but there's an oncoming car in the lane next to you. How much room do you give? At exactly what threshold do you decide to wait to pass the cyclist? What if the cyclist is a kid?

All the time, the driver has to make decisions that trade off the safety of multiple parties, from the car's occupants to other drivers, to bicyclists and pedestrians. In reality, these will almost always be statistical tradeoffs, and usually comparing very small probabilities of accidents, but they are still real ethical decisions that have to be made.
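One way to see that these really are decisions is to write the tradeoff down. A toy expected-harm calculation; every probability and harm weight below is invented purely to show the shape of the problem, not to suggest real values:

    # Toy version of the everyday tradeoff: a wider gap lowers risk to the
    # cyclist but raises head-on risk with oncoming traffic.
    def expected_harm(gap_m):
        p_hit_cyclist = max(0.0, 0.02 - 0.01 * gap_m)    # falls as gap widens
        p_head_on     = max(0.0, 0.005 * (gap_m - 1.0))  # rises as we drift out
        harm_cyclist, harm_head_on = 10.0, 8.0           # relative severities
        return p_hit_cyclist * harm_cyclist + p_head_on * harm_head_on

    gaps = [g / 10 for g in range(5, 25)]                # 0.5 m .. 2.4 m
    best = min(gaps, key=expected_harm)
    print(best, expected_harm(best))
    # A real controller also has the option the math hides: slow down and
    # wait, i.e. refuse the tradeoff until it disappears.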


If there is an oncoming car you don't overtake.


If the road is really broad and there is plenty of space for the cyclist and you without getting close to the lane the oncoming car is in, of course you overtake.

The interesting case is in between.


I know plenty of people who attempt to squeeze past the cyclist, giving them scant room. Almost 100% of the time, despite it being forbidden by law here.


Exactly, and if a few high-profile accidents give a manufacturer a reputation as a public menace, lawsuits and legislation won't be far behind.


This drives me crazy. It seems like the worst sort of sophomoric "philosophizing." Adults do not worry about whether or not to "pull the lever." They worry about better fencing and signage to keep people off of tracks. They worry about better ways to alert people about approaching trains, or early warning systems for train drivers.

If we find ourselves faced with "pulling the lever", everyone has already failed miserably.


The trolley problem is a simplification of a difficult ethical question. I wish people would stop dismissing it as though whatever simple answer they come up with makes the problem go away. In many situations, a car is going to have to choose between alternatives that each carry different risks. There simply are no universally acceptable simple rules to follow in such cases.


There is one universally acceptable rule: Do not choose your victim. If they are unlucky, they are unlucky. That's all there is to it.

Choosing who should be "unlucky" doesn't solve the problem. By assuming unavoidable failure you effectively give up solving the underlying problem. The solution is to avoid the failure in the first place.


No system can avoid failure 100% of the time. It may find itself going too fast with an unexpected loss of some control - brake failure, intentional sabotage, an (imperceptible) oil slick, whatever. However, the system still has some control with which to choose between several terrible and costly outcomes. It must choose its victim. How should it choose?


Does it really matter? Human drivers are a dime a dozen and act vastly differently under stress. It has not been tested.

Armchair scenarios do not help; we need actual research in simulators under high stress.


Does what humans actually do matter? What matters is what we want the machines to do, which is going to be worked out between buyers, manufacturers, lawyers, lawmakers, and the rest of us.


I presume you've read up on this "Trolley Problem"...

Did you miss the obvious problem with your statement?

Suppose if you don't swerve, you hit 30 school children, and if you do swerve, you hit one terminally ill child molester.

...That's why you don't get to abstain. You have to choose. That's the definition of the trolley problem.


Why are we holding computers to such higher standards than we hold ourselves? I wouldn't know what to do in the trolley situation either but I can still get a driver's license.

As long as the computer makes a decision where we can say "that was very reasonable, even if better decisions existed" then it seems ok to me.


I like Barack, and this is not a party-political statement when I say I am about as interested in what he thinks about tech as in what Jimmy Carter, George Bush Jr. and Sr., or Bill Clinton think. Now that he's not likely to affect much in the way of policy, there are just more insightful people to read and listen to, so what we're left with is a bit of celebrity worship, which is fine with me as long as I acknowledge it as such.

What the two possible next presidents think is more interesting simply because they may affect policy, regardless of their being more or less insightful than B.O.


"But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages."

No doubt advances in technology have always done away with jobs. We're almost at the point at which the biggest blue-collar occupation (truck driving) is about to be wiped out by self-driving trucks. What I'm concerned about is the government stifling innovation such as driverless trucks to retain those jobs, or some sort of regulation that stifles the technology's potential. What is the alternative?


Why aren't you "concerned" about people losing their jobs?

The alternatives are tech companies starting to pay their taxes, and humans starting to use tech to care for one another, genuinely - not just wealthy Silicon Valley types trying to make buckets because they feel "entitled" to do so while pretending it's all in the name of progress.

The ideas of the Internet and automation were exciting to me because they were about liberation and decentralisation. From where I'm sitting, it's turning into a bit of a joke: it's just empowering those who own the tech, and they're not giving a whole lot back right now.

Not attacking you personally, but this attitude of "don't worry, those people who will be made redundant will be fine" is a myth; they will suffer, and so will their families.


Consider a broader timeline. It used to take humans over 500 hours of labor spinning, weaving, and sewing to produce a single shirt.

Automating textile production has "put more people out of work" than computers ever will. Do you think for a second we would ever want to go back to how it was? Do you ask the inventor of the sewing machine how they will start giving back?

These inventions free humanity for more worthwhile endeavors. Nobody will look back after the next great transition and wish we just had more humans driving trucks around the country. It would be utter lunacy. And the people who created the self-driving machines will be seen as liberators of human potential and ushers of a new era of productivity.

So it is, same as it ever was.


Well yeah a lot of unemployed textile workers did go into poverty or even starve to death at the time. I've read some speculation that the first industrial revolution was a net negative on society initially, and took many decades to actually improve the average standard of living. There was a reason people gathered in mobs to break the new machines. They were hungry.

But still the industrial revolution didn't really replace laborers. The machines were still quite limited, and humans were still needed to do the jobs machines couldn't do. What's different this time, is soon the machines will be able to do everything humans can do. Or at least everything an unskilled worker can do. Operating a machine in a factory, driving a truck, entering data into a computer, making phone calls, these are all things machine learning is capable of.

Lastly look at horses. The invention of trains would seem to have competed with them and taken many of their jobs. But instead horses vastly increased, because trains couldn't do everything horses could do. Then cars were invented, and the horse population crashed over just a decade.

Why did this happen? Didn't the invention of the train prove that automation doesn't take horse jobs? Shouldn't there always be new jobs for horses? Can't horses specialize in the 1% of tasks that cars can't do, like transportation in places without roads?

But that didn't happen. The cost of just feeding the horses was much higher than the cost of buying an automobile. There were some obscure jobs for horses left, but nowhere near enough.


I think this is factually incorrect. Women used to spend the majority of their free time spinning thread and sewing clothes for their family. The average person owned three outfits total, and wore basically one outfit every single day.

The industrial revolution dramatically increased both production and consumption. We now own different outfits for every day, dozens of special occasion fashions which must be regularly replaced and updated, etc. Total production, and total wages paid by the industry has dramatically improved.

A shirt a couple hundred years ago would cost $4,500 at minimum wage to produce. But no one paid the equivalent of $4,500 for a shirt. What actually happened is a lot of the work was virtually or actually completely unpaid.
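(For the arithmetic behind that figure: the parent comment's estimate was 500 hours of labor per shirt, and 500 hours x $9/hour = $4,500; the exact wage assumed isn't stated.)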

The machines drove a vast increase in productivity and GDP and provide a standard of living today which 200 years ago would have been bad science fiction. The machines drove down cost of production dramatically, increasing consumption and increasing overall employment and wages.

People didn't destroy the machines because they caused poverty; they destroyed them out of fear.

Your analogy with horses is deeply flawed. Horse use declined just like spinning wheel use declined - because they were obsolete.

The latest round of automation does not by any stretch of the imagination make humans obsolete. It will actually make humans more productive and actually more valuable.

As Steve Jobs said, computers are a bicycle for the mind. AI is a motorcycle. Get on, go faster, reach higher, achieve more, live better.


Everyone knows that in the long run the industrial revolution was a positive. But at the time, the disruption caused lots of misery. Farmers were driven from farms that had relatively good living conditions, to cities with awful working conditions, living conditions, and lower wages. Eventually things improved, but it didn't happen overnight.

What does it matter if there are more clothes, if there aren't any consumers to buy them? We are looking at a world where humans are obsolete just like horses. There is nothing an unskilled worker can do that a machine can't, at least in the near future. And many skilled workers do jobs vulnerable to automation as well. The vast majority of the human population is unnecessary, just as horses were after the invention of cars.


Do you really believe this? Horses can become obsolete because they are a domesticated animal, they are a tool. Humans are really something quite different.

It is truly an insult to humanity to think that all these people doing menial jobs which could be automated are now obsolete; that they are somehow incapable of higher thought and reasoning and cannot add value beyond the machines.

In fact, while certainly there is a range of inherent potential between humans, my understanding is that the nominal human capacity for creative thought is orders of magnitude beyond the point of obsolescence by any kind of "artificial intelligence" we expect to be able to create at least within the next century.

No, certainly we have not yet created anything even remotely like the machine that will be our master.

The average human, like every generation of "average" humans before us, will use technology to reach farther than you can imagine they would ever be able to reach.

It's also worth considering how much we tend to underestimate the intelligence of historical man from our lofty perch of technological superiority, just as we underestimate our own future potential.


My favorite quote from the film I, Robot is a man questioning the intelligence of a robot: "Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?" The robot replies, "Can you?"

Most humans aren't that special. Long before AlphaGo beat one of the best Go players in the world, simple Go programs could destroy the majority of players. Sure, AI probably won't be able to do computer programming for a long time. But the average person with an IQ of 100 is not going to retrain as a computer programmer. AI doesn't need to be as intelligent as the best humans; it just needs to be as intelligent as the average person. Probably much less than that, because the average job is boring, repetitive work that doesn't necessarily require much intelligence.

I mean, seriously, where do you predict all the unemployed people will go? What jobs do you think are invulnerable to automation and can absorb 90% of the population? What jobs have such great economic value, require lots of unskilled workers, and can't be done by machines?


The way we're consuming clothing, and pretty much all goods, is clearly unsustainable. I read in a recently published book that if everyone in China bought a new pair of wool socks tomorrow, we would run out of wool.

Guess who is getting an appetite for wool socks?


I often feel that what will replace jobs is people going to school and earning a PhD, or two, or three. It would certainly do away with the idle time of unemployment and make them, and everyone else, the collective, a whole lot smarter. Perhaps subspecialties will emerge with dual PhDs.

I often hear that not everyone is capable of getting a PhD, but if we take the same energy and dedication we put into a career and push it into education, I think plenty of people have the capability. I consider myself exhibit A, so to speak: I got my GED and was working three low-end jobs (line cook, gas station attendant, and construction worker) until I bounced out of the workforce and pursued my PhD in Public Policy. I'm playing around with undergrad-level math now as I prepare for a second PhD. I seriously think that if I could manage at least one PhD, anyone else could easily manage two or three.


OK, but is there a demand for so many PhDs? Having a PhD doesn't really guarantee a good income now. It will be worse once everyone has one.

This makes sense if there is some sort of government sponsorship or grants, but providing those to everyone would mean drastically higher taxes, and companies are opposed to those.


It's not about the demand so much, in my view. Post-graduation, people move on to research. The site InnoCentive comes to mind as a possible model of compensation: rewards for discovery. Nature had a partnership with that company for a bit, but I believe they went their separate ways.

I think the funding issue can be figured out without higher taxes, or at the very least without absurdly higher taxes. One idea would be to train people to invest through grades 9-12, have some sort of basic minimum income that isn't riches but isn't below the poverty line either, and encourage people to invest those funds and use the profits to pay for college, etc. I also somewhat align with Jaron Lanier's concept of paying people for the use of their data, but built out so that it covers usage by both government and private companies; individuals wouldn't see those funds until 18, at which point they'd be encouraged to go to school. While the funds are held, they'd be invested, like Social Security, maybe. None of these ideas are all that worked out just yet; I'm just responding to you.


Not every job needs a PhD and the purpose of university isn't to prepare you to get a high paying job.


It's not about high-paying jobs. It's about occupying people's time, giving them something to do. Perhaps high pay, or at least some form of pay, can come from patents and copyrights developed with such knowledge.


Until basic needs are handled, you can't just say "it's about occupying people's time". People need to eat, clothe themselves, and shelter themselves, and all three require a steady source of income.

If you want to push widespread welfare or UBI for those in academia, that's an entirely different option. But that'd require a serious revision in the current tax codes or some very wealthy benefactors to bootstrap.


So, basically, BMI/UBI is what I'm assuming when I talk about this. What I'm trying to answer is what comes after that: what will people do when there are no jobs and we have BMI/UBI? I assumed readers would figure such a system would already need to be in place as jobs are automated. I probably shouldn't have assumed that; my bad.


Sorry, but this is wrong. Your clothes aren't being magically created by automated sewing equipment; they're being made in the third world, and hundreds of hours go into garments. The reason you work the treadmill to keep buying them? Planned obsolescence.

See http://truecostmovie.com for more information.

This common narrative that "automation" will finally let us tackle the real issues really is a myth; it's just going to create a larger wealth-distribution problem, because most of the companies leading the charge won't pay appropriate taxes.

If truck driving could've been offshored, it would've been.

Side note: I have a blanket in my family, woven from hemp by an ancestor. It's generations old, still in really good condition, and we still use it all the time on the couch. 500 hours? Maybe, but it's over a hundred years old!


It's the spinning and weaving that's been automated, which was the bulk of the manual labor. Sewing the cloth of a fine dress shirt takes between 30 minutes and 4 hours, says Quora.

For that shirt, you work an hour to earn $150, pay $50 in payroll/income taxes, and pay $100 for the shirt, of which maybe another $20 is also taxes (sales tax at the point of sale, and then income tax on the corporation, not even counting taxes on the materials and wages they buy with the remaining $80).

Figure around half of every dollar you spend is either taxes on the way in, or taxes on the way out. The problem is decidedly not a lack of taxes.
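
A quick sanity check of that arithmetic (a sketch in Python, using the dollar figures from this comment rather than any official tax rates):

    # Share of one hour's gross pay that ends up as taxes,
    # per the shirt example above.
    gross_pay = 150.0   # earned in one hour
    taxes_in = 50.0     # payroll/income taxes withheld
    shirt_price = gross_pay - taxes_in   # the $100 paid at the register
    taxes_out = 20.0    # sales tax plus corporate income tax baked into the price
    print((taxes_in + taxes_out) / gross_pay)  # ~0.47, i.e. roughly half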


You're neglecting other steps: cutting, buttonholes, ironing, etc. And this is a shirt, not a pair of jeans, a handbag, or other more complex garments.


A lot of comments like yours are based on the assumption of "giving back". I don't quite understand it.

Let's suppose I've founded booking.com and put a LOT of travel agents out of business. At what point did I assume the responsibility of giving back to those travel agents?

I agree that they are miserable because of my actions. However, in real life, nothing happens for a single reason. They are also miserable because of the free choice of their former customers, who abandoned them. Those customers profited from the change too: after all, they chose me because of lower prices and better service. Why do I, and not the customers who left them, have this responsibility?


> At what point did I assume the responsibility of giving back to those travel agents?

At the point when you started living in a peaceful and organized society, one that depends, in order to stay peaceful and organized, on not too many people facing a disastrous future.


This is true. However, this is taken care of by basic taxes and welfare. (Judging by my experience with HN discussions, this usually leads to talk about how the US doesn't have enough of a welfare net; I want to point out that this is not the topic of the conversation.)

So let me ask you a question: when we discuss this "back" direction, would you include someone who has lived on this welfare net all his life? Someone who purposefully declined to contribute anything to society? The very phrase "to give back" implies that we're giving to someone who gave us something before, and this hypothetical Joe sure as hell didn't give us anything. *

Since my question is purely rhetorical, let me answer it to get to the point. I don't think you would include him in "giving back". And yet we keep him (or at least, you and I agree that we should try to keep him) fed and alive to some degree.

So, this whole notion about "giving back" is something different from keeping this hypothetical Joe alive. What is it, exactly?

___

* This paragraph may sound similar to typical right-wing propaganda about lazy people living on welfare. However, it is only a hypothetical example to prove a point. I'm not making any statements about real people in it.


Joe is part of the society, to which you're "giving back". Hypothetically, it's impossible to separate hypothetical Joe from hypothetical Jane who was recently born with severe disabilities and also didn't contribute jack shit, or a WW2 veteran, who is now unable to work. So you "give back" to all three.

Not hypothetically, and noting that you explicitly stated you didn't want to discuss factual reality, nonetheless: such hypothetical Joes have been rare to nonexistent in the US since the Clinton welfare reform.


>At what point did I assume the responsibility of giving back to those travel agents?

You won't have to. If it gets bad enough, they'll come take it.


You are correct. However, this discussion operates on some level of abstraction, and you're jumping off it.

It's like talking about Euclidean geometry and then saying that no physical object is actually a single point, that straight lines do not exist, and that parallel lines would collide due to space-time curvature.


Your successful web business would be a very small event at the end of an innumerably long causal chain involving billions of humans and tens of thousands of years [1].

Can you name every event that would have enabled you to be ready to create the business, and all aspects of the world that would allow for its success? It's impossible. This is why we have to give back.

[1] We could obviously go back further, but it becomes less meaningful in the context of this discussion.


> This is why we have to give back.

OK, so how do you determine where "back" is?

Why is "back" in pockets of people being laid off and not on an altar of Cthulhu? By your logic, we never would be able to know the reason anyway, so giving to the Dark One seems to be just as reasonable.


Humans don't naturally have any empathy for Cthulhu, and science leads us to believe there is no such thing. Thus, I doubt most people would find that very reasonable.

Determining where, how, and to what extent is a much harder question. I don't have any easy answers there.

At the very least, recognizing how immense a role the indifferent hand of fortune plays in all of our lives suggests that helping those who have less is a great place to start.


Helping unfortunates is a great thing indeed. But the original idea was different: it was about "giving back". Can you spot the difference?


What's a better way to give back?

Of course, we should do so in a way that helps all. However, even if we distribute help in a totally blind fashion, a single "unit of help" given to those who have less will comprise a larger percentage of their "life capital" (opportunity, potential, resources, etc, for lack of a better term) than a unit of help given to someone who is in a better position. So both the perceived and real impact will be higher for the less fortunate.

Once we have managed to gather a shared set of resources (of course, there will be much grumbling and immense disagreements about the exact amounts), how should it be utilized? Certainly, you aren't suggesting we give a larger share to those who already have more.

If you are worried about the disincentive to contribute upon receiving help, remember, this whole sub-thread started in the context of businesses making use of new, advanced automation technologies and placing large swaths of people out of work.

What if the value of labor for a large percentage of the citizenry really does fall to unemployable levels due to technology? Should we "make up" jobs for them? Let them starve?

Why does the person who can afford to buy an army of robots deserve all of the proceeds? They didn't invent the robots, it took thousands of years and billions of human lives toiling in the dirt for such incredible technology to enter the world.

It isn't impossible to strike a balance between helping those who have less and allowing those who are skilled and make large contributions to be richly rewarded.

And even if many people become somewhat unemployable, society can choose to encourage living productive, engaged lives. We don't need to become zombies hiding in our houses playing video games all day or binge-watching Netflix non-stop. People will still want to be fulfilled. There is much we can do to promote fulfilling, productive lifestyles and vibrant communities.


They are giving back.

If clothes used to cost $60 and now cost $20 that is equivalent to giving back $40.

The problem starts when their wages are falling faster than the cost of the goods.

If as an employee you earned $20 per hour, you needed to work 3 hours to afford the $60 clothes; if you now earn only $5 per hour, you need to work 4 hours to afford the $20 clothes.
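
A tiny sketch of that comparison (numbers taken from this comment):

    # Hours of work needed to afford the clothes, before and after.
    def hours_to_afford(price, wage):
        return price / wage

    print(hours_to_afford(60, 20))  # 3.0 hours at the old price and wage
    print(hours_to_afford(20, 5))   # 4.0 hours now: cheaper goods, yet worse off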


The alternatives are mostly the pointless gestures we see when the government tries to protect jobs from progress, such as the regions where self-serve gas is illegal. I don't see any viable regulation of AI, other than some attempt to tax it.


I don't even see a practical scheme to tax AI. What counts as AI? Are A* and Sudoku solvers AI, or are we just talking about machine learning? If the latter, am I taxed for doing a linear regression, or only for the "cool" stuff like deep networks? If so, how deep? Am I charged per computation? How are you going to keep me honest?
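
To make the definitional problem concrete, here is a complete "machine learning" program, an ordinary least-squares fit in a few lines of Python (a sketch with made-up data); whether something this mundane would count as taxable "AI" is exactly the question:

    import numpy as np

    # Fit y = a*x + b by least squares: textbook statistics,
    # but also, technically, machine learning.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 8.1])
    a, b = np.polyfit(x, y, 1)  # degree-1 polynomial fit
    print(a, b)  # roughly a = 2.0, b = 0.0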


Assuming that AI-generated wealth is accruing to some person or people, you can just tax it as ordinary income. The main difference is quantitative: in a world where all wealth is generated by AIs controlled by a small number of people, you'd need to tax and redistribute their incomes at much higher rates than we currently do (or just socialize the AIs entirely).


Relevant link: https://en.wikipedia.org/wiki/Category:Obsolete_occupations

Many of these seem to have been victims of cultural and political change, but there are quite a few (e.g., knocker-up, lamplighter, and, more recently, switchboard operator) that became obsolete due to technological advancement.


Obama's answer to this seems to be training and education as opposed to regulating away progress. See: this article.

I'd expect the latter to come from the right in the US, ironically enough.


I'm more and more convinced that a UBI is the only way to deal with this problem, so the unemployed have a buffer to retrain to a new skillset.

Of course, replacing the current social safety net (food stamps, et al.) with a UBI would likely result in the trimming of government bureaucratic jobs. Everything's a trade-off.


I would take a look at this argument made by Obama's chief economist. He basically argues that UBI isn't actually that good because it doesn't shrink the gap as much as current systems do (progressive taxes and the like). It's a good read anyway, since it is a much longer, more detailed analysis.

https://www.whitehouse.gov/sites/default/files/page/files/20...


"The issue is not that automation will render the vast majority of the population unemployable. Instead, it is that workers will either lack the skills or the ability to successfully match with the good, high paying jobs created by automation."

So those who are doing jobs that AI can do better and more cheaply than any human are going to somehow gain the skills and/or ability to perform the new good, high-paying jobs created by automation? I don't buy it. Analogues to those hypothetical future high-quality jobs already exist, so why aren't the people in the soon-to-be-obsolete jobs already doing the existing high-quality jobs? Do they not like money? Training for these jobs has never been cheaper!


Time is money. The actual monetary cost of training is irrelevant if you don't have the time to do it (learning new skills has never paid the bills in the right-now timeframe).


Why would it be inherently different this time? Millions of farmers managed to learn new skills (or at least their children did).

I think we'll eventually have work for everyone who wants it (but hopefully you won't have to work to live). The future will probably have a lot of demand for "artisanal" products (ie. produced by humans).


This seems to make the mistake, I believe, of focusing on "inequality" as a problem.

Inequality is not a problem. Poverty is. If everyone has everything they need and most things they want, but a few super rich people own entire planets, I fail to see how that's not a utopian future.


I would also argue that inheriting large wealth is not a problem. At some point the money had to be earned. If the inheritors do not use it wisely it will be gone within a few generations.

Imagine a farmer working his entire life on his farm. Now that he is reaching the end of his lifespan he wants his children to benefit from his work. He built the house and everything himself.

Why should the government take a chunk out of that? One might argue that the children do not work and therefore this "unearned" income is unfair because it's not possible to choose whether you are born into a wealthy family.

However, it is both earned through the father's work and already taxed by the government via his income tax.

By the time the father dies the children are likely already 40 to 50 years old but they benefited from his little wealth while he was still alive. The inheritance tax does not affect this. It only discourages using the inheritance efficiently over multiple generations.


Thank you, I skimmed the first two pages of that, will read the whole thing soon. Looks quite well thought out.


"Choosing" (even if you have a choice at that point) UBI when only 10% of the population is employed anymore is easy.

The question is, when do you start implementing UBI? When only 80% of the population still has jobs? 60%? 40%?

Because when you do implement it, the money that goes to the unemployed people is going to have to come from those that still make money. And they will probably be pretty pissed off about it, too.

Also, in a two-party country like the U.S., the party you vote for may literally decide this outcome. Say 70% of people still have jobs and don't want UBI, and the Democrats support switching to UBI going into an election. It's very likely that the Democrats will not win an election again until a majority of the population supports implementing UBI, no matter what other terrible things the Republican party promises to do while also promising not to implement UBI.


Well, arguably we currently have a safety net that is less efficient (in terms of bureaucracy/maintenance cost) than a UBI would be, so... how 'bout now?

Regardless, you do make good points.


>What I'm concerned about is the government stifling innovation such as driverless trucks to retain those jobs...What is the alternative?

More likely the Government comes up with "innovative" ways to hide these people from the official unemployment numbers. Somehow unemployment is calculated at only 4.9% today... add 3.5 million newly unemployed commercial drivers tomorrow (roughly 2% of the US labor force) and the new unemployment rate is still 4.9%.
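
A rough sketch of how that hiding works in practice, using round 2016-ish figures as assumptions (labor force about 160M, unemployed job-seekers about 7.8M); the headline U-3 rate only counts people actively looking for work:

    # U-3 unemployment rate = unemployed job-seekers / labor force.
    labor_force = 160_000_000
    unemployed = 7_800_000
    print(unemployed / labor_force)                # ~0.049, the headline 4.9%

    # If 3.5M drivers lose their jobs and keep looking, the rate jumps:
    print((unemployed + 3_500_000) / labor_force)  # ~0.071

    # But if they are classified as "not in the labor force"
    # (discouraged workers, early retirees, etc.), the rate barely moves:
    print(unemployed / (labor_force - 3_500_000))  # ~0.050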


Unemployment is a very difficult number to define and calculate.

Do you count those that are not actively looking for work, and don't want to look for work? e.g. House(wife|husband)s, semi-retired people.

Do you count those that have given up looking for work?

Do you count those that are working part time, but looking for full time work (i.e. the underemployed)?

Do you count those that aren't really looking for work (for whatever reason), but occasionally look online at job listings (since it's a very low effort thing to do)? This is an interesting one, because Statistics New Zealand just recently decided that this doesn't count as actively seeking work, which seems right to me. I look at online job listings all the time, but am happily employed and not seeking other employment.

Do you count the self employed? Contractors? Are Uber drivers considered employed? What about Airbnb hosts?


>Unemployment is a very difficult number to define and calculate.

That is my point entirely to the OP. If the concern is that the government is willing to block new technology adoption because of fears of mass unemployment, then it isn't unreasonable to think the government would instead allow the technology and hide some of the unemployment by "redefining" unemployment.

I am not sure why I would be downvoted, as these moving goalposts are a well-known and controversial issue, including, but not limited to, the many variables you highlighted.


Employment: the condition of having paid work; a person's trade or profession.

Just count those that want work or should have work (i.e., those who need to support a family or are receiving government aid).


> Just count those that want work or should have work

As I just commented, that's easier said than done.


No. Those that want work are searching, getting interviews, etc. Those that need work are easier to count because you're counting situations. The government has enough surveillance on us to accurately count it to a pretty small error margin.


Rawr rah, mandatory link to Humans Need Not Apply:

https://www.youtube.com/watch?v=7Pq-S557XQU

I wonder if President Obama has seen it yet. While his wording obviously has to be steeped in establishment rhetoric, it's not a matter of "if" automation comes for our jobs: we are already dealing, will continue to deal, and will have to decide how to deal in the future with the reality that for decades now automation of varying degrees has been eroding the market for human mental labor.

It is exactly like climate change. It is not a future problem; it is a now problem, but the progress is so slow and the symptoms variable enough that you don't obviously see the underlying trend already taking place, so nobody regards it with the urgency it deserves. Social stratification, growing wealth inequality, growing partisanship, growing radicalism, and growing unemployability are already spreading globally in response to the ongoing obsolescence of the human mind. The first step is to recognize that it is happening.


> Ito: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

> Obama: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Do we seriously think it would be that easy? I think a "generalized" AI, if aware that a human could remove its power and regarding that as a bad thing, would not be stopped by unplugging it. By the time you realized you needed to unplug it, it would already have convinced a human to help it spread, and it would have found other sources of power.


That's kind of handwavy, don't you think? How does an AI, presumably requiring specialized (and expensive) hardware, simply escape? Further, this requires humans to be easily hackable by the AI, which is not obviously going to be the case. Why would human cognition have a built-in flaw the AI can exploit to escape? Imagine a superintelligent person in a cage, a person as smart as an AI. No matter how clever he is, he's not going to be able to escape that cage given certain levels of precaution.

If you say the AI will be super-persuasive, persuasive enough to make humans behave irrationally, I say: maybe. But it's possible to simply use already irrationally fearful humans as guards to prevent the AI from escaping.


I find the scenario in 'Avogadro Corp' pretty reasonable.


It sounds like Obama was making a joke.


> JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Why is it a problem that they're mostly white? The interviewer doesn't bother to elaborate; it's mentioned in passing as if it were something obvious. To me (a non-American here, mind you) it isn't. Would the difficulties they mention be alleviated if that weren't the case? Why? Couldn't, say, an Asian student fall into the misconception that machines will come up with answers to all questions? Is it less likely? What substantiates such a claim?


They were both [Obama and Ito] focusing on AI's social effects and on how societal values can be built into AI.

So Ito's concern was probably that a group of people with similar characteristics (i.e., white male geeks) is not diverse enough to address the world's social problems, especially when this group is not very social to begin with; they're more comfortable talking to computers than to human beings.

This fits with what he says at the end ("Because the question is, how do we build societal values into AI?"): he's more concerned about the builders themselves.


I do understand that as far as the social aspect goes. Still, I think that a white computer geek has much more in common with an Asian or black computer geek than with, say, an average white truck driver, and obviously the same applies to the other geeks as well. Basically, once we control for the cultural aspect (which is correlated with ethnicity, but we've controlled for that already), I don't think ethnicity by itself should matter much.


In the U.S., a black software engineer is more likely to be able to design technology that helps keep people safe from police regardless of geekery and socioeconomic status.


Do you have anything to back it up, or is it a supposition?


Just a supposition, but from an American (I'm white). I don't think it's a stretch to say that people who have experienced e.g. anti-black racism will be better able to make technology that addresses it.


Willing? Or able? You're saying it makes a difference in the ability itself. In either case this argument - well, it's not even much of an argument, merely a belief - is so absurd to me that frankly I can't believe you're serious, no offence.

Care to share any examples of what technologies a white software engineer is, in turn, more likely to be able to design? Or does this philosophy only work one way : )

Is designing medical software better left to software engineers who have themselves, e.g., battled cancer?


1. Able. Not that skin color itself affects any sort of intellectual ability, but race--including, importantly, the experiences people have because of their skin color--definitely impacts who you are and what you are good at.

2. White software engineers are better at designing racist software (mostly joking). Another half joke: white people are better at designing technology that gets taken seriously by the government and the public (i.e. their technology will be taken more seriously because they are white, not any special ability there). But seriously, white engineers would probably be better at designing technology for teaching other white people about race.

3. Yes, there are absolutely some parts of designing medical software that engineers who have battled cancer would be better at. Imagine you are making one of those medical devices that sits next to a cancer patient's bed post-chemo and shows a bunch of numbers. If you fought cancer, you've probably had lots of experience lying in that bed next to those screens, and you could have much better intuition about how the screens should look and how they should present their visualizations in ways that make a patient more confident. Or if the software engineer wants to, you know, talk with some patients or doctors to understand what to make: the engineer who battled cancer will probably be much better at understanding what the patients (and doctors) want.


I don't know the race of any software engineers I haven't met. Which is almost all of them. How could govt and the public know either? It seems strange to imagine it could matter.


Well, there are the famous ones. I'd bet Facebook would have been treated differently if Mark Zuckerberg were black. You also probably automatically make guesses about people's race online. Race is a bit trickier, but it's not too hard to guess gender online (from usernames, and things like this tool I just found on Google: http://www.hackerfactor.com/GenderGuesser.php#Analyze).


I'm trying to stop reading news from US/UK sources. The depth of political discourse only goes as far as your self-appointed identity. Actions don't matter anymore.

White people are guilty of colonialism much as Catholic children inherit Original Sin; that is, it's a means of control by anti-intellectuals.

I wouldn't support Trump, but the American Left is especially dangerous because it relies on gender / racial divide to get the most votes. In some twisted sense, the problem is not supposed to be solved.

Skin color does not make Obama an expert in race relations. If he had said "...but one of my concerns is that it’s been a predominately male gang of kids, mostly black, who are building the core computer science around AI..." there would have been riots in the streets. Tim Hunt's joke was taken out of context and almost ruined his career.

EDIT: To be clear, his white remark was fully intentional.


Can you share a link to Dawkins' out-of-context joke?


I wrote the comment in a hurry; I meant Tim Hunt. Sorry, I'm going to fix the original post.

On 8 June 2015, during the 2015 World Conference of Science Journalists in Seoul, at a lunch for female journalists and scientists, Hunt was asked on short notice to give a toast (https://en.wikipedia.org/wiki/Tim_Hunt). He made a self-deprecating comment and media vultures ran with it. He was shamed out of the Royal Society and UCL.


> Skin color does not make Obama an expert in race relations.

Nor does your observation mean he's NOT an expert in race relations. You're making the same mistake as the people you criticize in applying generalities to specific cases.


We are biased to believe that minority groups are more fit to solve racial issues. But skin color is not a sufficient condition for expertise in race relations. Obama is not strictly right or wrong due to his identity. There is no other meaning to my previous post.


I would argue that it is not just bias. Indeed, not all people of color in the U.S. are experts on racism, but the vast majority of people with a good understanding of racism are people of color.


Yeah, people are generally able to solve a problem only when they understand the problem. Direct experience is a valuable form of learning. Of course it isn't sufficient, but we shouldn't pretend it doesn't matter either. Obviously almost any statement concerning race is a generalization.


> Yeah, people are generally able to solve a problem only when they understand the problem. Direct experience is a valuable form of learning.

I'm sorry, but this is pseudo-reasoning to me. These are logically linked on the surface, yet extremely vague truisms that can only pass for some form of a coherent argument because they're so full of weasel words.

Based on the same principle you could argue that in order to improve car safety, to have a better chance at it, one needs to have had a life-threatening accident. Why not? After all, people generally are able to solve a problem when they understand it... and direct experience is a valuable form of learning... etc.

It really depends on the nature of the problem though, and the nature of this direct experience, and so forth. Painting the situation with such an overly broad brush doesn't lead to any meaningful conclusions.

For starters, first-hand experience typically comes from the symptoms of a problem; the underlying nature of a complex problem isn't readily apparent, or else it wouldn't be complex.

For instance, getting sick from air pollution doesn't do anything to help you understand the nature of the pollutants, how they're emitted, what the economic context is, and therefore what the possible countermeasures are, etc. It just confirms that the symptoms of such pollution are a bad thing, which isn't much of a discovery by itself.

Not everything is as simple as an itchy-scratchy situation, and we shouldn't pretend that it is, especially when it leads to racially biased claims.


The future of humanity? A select few Big Corps will own the infrastructure and technology and will employ it to reduce human labor as much as possible, creating massive value increases for the stockholders and displacing millions of blue-collar workers into poverty (in later stages, white-collar workers too). The execs will all be trillionaires and live in their own walled societies while the rest of us fight tooth and nail for our lives. Eventually we won't even be able to have a revolution, since the governments are owned by the Big Corps, which have manipulated and coerced the system into activities such as spying on their own citizens on the pretext of terrorism. Any act of revolution will be dealt with swiftly and with immediate force.


I was intrigued by this:

Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised.

And then he goes on: One of the challenges that we’ll have to think about is, where and when is it appropriate for us to have things work exactly the way they’re supposed to, without surprises?

One might argue that in software development things never work exactly the way they're supposed to.


I think their discussion of AI and its implications was insightful. Their perspective is much wider and more multifaceted, and reflects a deeper, more nuanced understanding of technological and socio-political issues.

I think HN is at its best when the topic is money or pure tech; it tends to be dismissive of or to diminish the other stuff, and you are left with a sterile, unidimensional discussion.

It's like the gold rush: they can generate the wealth, but the harder questions around politics, economics, social structures, and humanity will have to be worked out in more "distant" surroundings, less touched by the frenzy of greed and profit.


Interesting that Obama is particularly concerned with specialized AI for cyber security/warfare applications. I thought he had a fairly balanced and knowledgeable viewpoint on these issues, though I think both he and Ito underestimate the impact and the timeline.

Not much to say on the general AI question, but that's understandable.


What was that about "a friar... who is trying to understand Bitcoin encryption... from the perspective of the Catholics"?


Barack Obama couldn't figure out how to take a selfie with Bear Grylls with an iPhone, but people are going to listen to his prognostications on AI? WTF!


"we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people" Understand morals? Probably not...


MIT's "moral machine" site presents a set of ethical dilemmas involving driving so you can explore your own perspective on morality.

http://jalopnik.com/this-mit-online-activity-lets-you-choose...

http://moralmachine.mit.edu/


Yeah. Anyone doing that should have kept whoever was not breaking the law alive. I noticed that it said I killed more women, but women broke the law more.


Every time I listen or read Obama talk, I become a bigger fan of the guy.


> OBAMA: The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom.

I gotta say, I thought he'd be more familiar with that metaphor.

https://en.m.wikipedia.org/wiki/Hundred_Flowers_Campaign


Actually the "thousand flowers" metaphor is part of Silicon Valley culture going back 30 years.

At least I have heard this any number of times in relation to technology innovation, entrepreneurship, "lean startups", etc. I bet a lot of the people who use the term have no idea of its origins in Mao's China.

See for example, this article from Inc magazine in 1984, which begins as follows:

"Not too long ago, fortune 500 companies looked at small-scale entrepreneurial companies as fodder for acquisition -- if they were big enough to make the effort worthwhile -- and little more. The executive who dared to suggest that Goliath might learn from David was likely to be trampled by a herd of MBAs waving printouts on the economies of scale that flowed from a centralized and rationally managed organization... Then something happened...The micromillenium was born in a Cupertino, Calif., garage."

[excerpted from "Let a Thousand Flowers Bloom" http://www.inc.com/magazine/19840401/2895.html]


I think he knew the origin, but chose it anyway. It is an elegant phrase. Communists could be very eloquent.

How else do you think they managed to convince so many people to be communists?


I am not a communist, but I must point out that it works exactly the same way with capitalism. If you have some time, there is a great documentary called 'The Century of the Self', which shows (among other things) how modern marketing in Western countries started and evolved.


Thank you, I'll have to add it to my list.


"How else do you think they managed to convince so many people to be communists?"

Once they had momentum? With rifles.


The free stuff?


Free food. Unless you are a parasite, in which case, gulag for you.


You spelt 'government' wrong.


The metaphor seems to fit perfectly with what he wanted to say.

First you give people the liberty to explore and then you crush most of them with thousands of pages of legislation.

A little dark perhaps, but it fits.


Sounds like a perfect use for it. I understand his use of it to mean that the thousand flowers should bloom only in the beginning, and that later the field should be much more tightly regulated. When it's competing AIs being crushed rather than people's lives, it becomes much less tragic, thankfully.


According to that wiki article, it is a BAD thing. People express themselves and then are imprisoned for it. That's bad.

Here's Ai Weiwei's interpretation of the phrase, that was a temporary installation in Alcatraz recently http://www.fubiz.net/wp-content/uploads/2014/11/blossomaiwei... (Cite: http://stefany-cordedoce.blogspot.com/2014/11/porcelain-bouq... )


The tone of your comments suggests that you're arguing with me, and yet you didn't write anything that contradicts my comment. Are you sure you understood what I wrote?


Government's view of the economy could be summed up in a few short phrases: If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it.

-- Ronald Reagan


Thank you for sharing. I hadn't heard the phrase before.


The way I've been thinking about AI is that it reduces the need for grasping administrators with a surface understanding of technology. I'm not too surprised that the head administrator would see it in terms of his office gaining manifold new powers, though.


Blooming flowers is a metaphor used by orators from practically every country and kingdom in the course of human history. I don't think this particular formula has anything to do with the hundred flowers campaign, in spirit or in origin. "Let a thousand flowers bloom" has been used by people opposed to heavy government regulation in the tech sector for decades. I even remember the head of FINRA using some variation of the phrase in the Senate hearings on Bitcoin (somewhat surprisingly).

EDIT: Actually, looking into the history of this particular phrase and its use in the West, I may be mistaken here.


Same as FDR's inversion of Sumner's Forgotten Man

https://en.wikipedia.org/wiki/Forgotten_man


When some startup CEO says they are on a crusade to change customer service do you get visions of knights on horses riding through Asia Minor? When your girlfriend tells you not to put words in her mouth do you flash back to 2 Samuel 14:3?

Language is great at evolving and incorporating colorful and evocative phrases while stripping out the original connotations.


Yes, I expect this kind of reuse from most people. But it was surprising to hear one world leader using another world leader's most historic phrase from decades beforehand, where the phrase is historic precisely because it was murderous and insincere. So I mentioned it.


I'm reminded of another riff on that phrase from the book Nexus by Ramez Naam, which was also said in the context of AI and incredible (and potentially incredibly dangerous) technologies:

    Let a billion flowers bloom!


I also thought it was a slightly inappropriate metaphor, but it gets the point across.


By "the point," do you mean that the government intends to initially encourage innovation, only to later stifle, repress, and criminalize the elements it doesn't like?

I mean, that's the reasonable application of the metaphor, right?


My point was that, assuming most people are unfamiliar with Chinese history under Mao, the expression will have positive connotations.


And those who are familiar with China under Mao will also find historically accurate connotations with Obama's quote.


Obama's a thoughtful guy; I'm guessing he knew what he was saying. Perhaps it's a warning: looking at the history of tech, invariably everything gets tested for nefarious purposes, whatever the original purpose was.


that sounds like something Obama would do when you put it like that...


Maybe that was a Freudian slip?


Somehow, someway, a discussion about AI has to touch on disadvantaged people and racism. Sigh. Can't wait for a new president.



