Hacker News
Google I/O 2017 (google.com)
344 points by ergo14 on May 17, 2017 | 275 comments



Honestly, as someone who has been playing around with and working with AI in some form since the 90s (Bayes nets, etc.), and trying since 2012 to build our computer vision company - every keynote from AFGAM makes me want to give up a little more each time.

It's like even with a world-class AI team, the majors just blow past everyone with their on-demand scale, access to data, hardware, people, and distribution.

Only five years ago, the products shown at the past three weeks of conferences (GTC, Build, I/O) would have been 100 different independent companies, staffed by highly trained, specialized PhD-level researchers and developers. Now this stuff is just baked into the top platforms.

This has been happening for a while, and I've been saying it for a while too. No clue where to go from here honestly.

edit: The point I didn't make here, but which is subtext, is that IMO ML/AI is the last frontier for technology (and IMO humanity), so if a few dominate it, it's kind of game over for the existence of smaller players generally.


I've been wondering for a long time whether machine learning will break the long-time tech industry lifecycle we've all come to regard as some kind of immutable law of nature.

Traditionally, 1) we start with big, established companies in tech. 2) A few people, dissatisfied with the trammels that big companies place on their employees, leave, found a new company, and then, 3) through superior grit, gumption, execution, flexibility, and speed, steal market share from the big companies. As the new company grows, 4) it accumulates organizational scar tissue and is slowly infiltrated by incompetent careerists, and finally, 5) becomes the very kind of company the founders set out to escape.

We've been going through this cycle since the late 1950s, when the "traitorous eight" left Shockley Semiconductor Laboratory to found "Fairchild Semiconductor".

Machine learning might stop this cycle by breaking step #3. Big companies have access to huge data sets that no group of human developers, no matter how talented, can match. Even slow, hidebound, and barely-competent use of these data sets can squash upstart smaller companies.

It'll be like a planet, full of water and life, with a core that cools to the point where it can no longer sustain plate tectonics. Without constant replenishment, the crust seizes up, the oceans boil off, and the life dies. The tech industry will become Mars.


Generally agree with OP that ML is one of the last viable frontiers for the current generation of computer tech. We are basically taking the data generated by machines invented decades ago, and using statistics to squeeze a few more percentage points of productivity out of them every year. This optimization applies to humans too - one day you'll be required to have an OBD dongle monitoring your driving data to get car insurance, for instance. Many facets of your life will be monitored and fed into a company's individualized production function or pricing equation.

When the economic cake is growing slowly, every last drop of productivity enhancement counts, and ML is doing a good job at shaving a few basis points off basic costs, such as energy (see DeepMind's work on Google's datacenters), transportation (autonomous cars), and human capital (making some simple jobs redundant). That's why everyone is piling into IoT/ML/AI right now.

What ML is NOT doing is creating a new paradigm/market/computing platform. It's optimizing old markets and technologies. It's not baking new cakes.

We'll see a new platform soon. The cycle of human technological progress has been chugging along steadily for a long time and has sped up exponentially in the past 200 years. I'm sure some new tech will come along soon that will allow us to bake more cakes instead of spreading icing ever-thinner on the ones we already have.


Generally agree that ML is often shaving off a few % after all the low-hanging fruit achievable without it has been exploited, but...

> What ML is NOT doing is creating a new paradigm/market/computing platform. It's optimizing old markets and technologies. It's not baking new cakes.

This is simply not true: there are many products/markets (such as automatic translation, speech recognition) that are simply not possible without this progress in ML.


What's the paradigm?

In Translate, it replaces a naive translator's job. In speech recognition, it gives you a new-ish interface to systems.

In prediction and personalization it gives you what you likely want (local weather, local traffic info, local news, in the language it knows you understand, in the format you prefer, at the time it thinks you most likely want it, etc.).

It puts some "smart" into things. But Translate is just a smart dictionary; it's not a real translator. For that we need stronger AI (something like the Jeopardy-playing Watson + Google Knowledge Graph / Freebase + language translation + it should ask questions if it doesn't understand something).

ML is amazing, but it's just a slow march toward more and more adaptive smarts (general intelligence) in a box (hence artificial). And there's probably a tipping point for that: when it can start to learn, or program itself, blablabla... ( https://intelligence.org/2013/04/29/intelligence-explosion-m... )


If we look at things from that point of view, then nothing is new.

What are computers good for? They just replace a calculator, which just replaced manual calculation machines, which just replaced mental arithmetic.

A naive translator cannot translate arbitrary sentences for free and instantaneously. We are not far from real-time (and quality) translation that will make it possible to talk to somebody in a foreign language and have everything translated on the go (there's already a feature that replaces text in a foreign language in the image you're viewing).

I don't know :) I think I'm quite satisfied with the level of disruption ML is bringing to the world. And I feel people are constantly pushing back against this ("This is not real AI!") every time we start to understand how these things work.

Reinforcement learning is the case of a program teaching itself (without training data or instructions), but still, some will argue it is not AI because "it's just maths and engineering hacks", I guess.


I understand how ML/AI works, and there are great things like collaborative filtering, self-driving whatevers, "expert systems" (IBM Watson), and magical image processing (which just shows how much detail our own brains imagine and fill in for us in non-ideal conditions).

I'm not saying it's not real AI, I'm saying I can't wait for the time when these separate components/models can be "synergized", when one big system can be trained for multiple tasks, and when it can teach itself new tasks.


>one day you'll be required to have an OBD dongle monitoring your driving data to get car insurance, for instance

FYI there is a company in Australia that already does this, well, you get discounts if your device shows that you're a safe driver or something...


I think there's a real danger to underestimating small % improvements over time, depending on how you're measuring.

Especially if you're looking at error rates: 99% accuracy is very different from 95% accuracy, which is very different from 90%, etc.
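The point is easier to see as a worked example: the same few points of accuracy translate into multiplicative reductions in error rate. A quick sketch:

```python
# Small accuracy gains correspond to large multiplicative drops
# in error rate, which is what users actually experience.
accuracies = [0.90, 0.95, 0.99]

for acc in accuracies:
    err = 1 - acc
    print(f"{acc:.0%} accurate -> ~{err * 1000:.0f} mistakes per 1,000 predictions")

# Going from 95% to 99% accuracy cuts errors 5x (50 -> 10 per 1,000),
# even though accuracy "only" improved by 4 percentage points.
```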


I think this depends on what you see as the downfall of these previous companies. If you agree with Christensen's theory of disruptive innovation, the issue was that large companies would undervalue small markets that could grow to become large markets.

When seen in this light, it would be surprising if ML were a disruptive technology, since it falls more into the "sustaining" category of innovation.

Most of the large tech companies were built on technologies that no one at the time expected to define the world.

With all this focus on ML/AI, it would be very surprising if a small ML/AI shop became huge rather than getting bought or replicated.

Almost by definition, the big companies of the next era will come from the spaces that large companies are ignoring.


> the big companies of the next era will come from the spaces that large companies are ignoring.

My entire point is that they aren't ignoring any spaces, they have all future spaces accounted for by having their tentacles in literally everything possible that could be future tech.

Blindsides come from different industries, so they need to cover their bases from all directions, and that's what they have all done with their projects, products, and investments in AI, energy, biology, transportation, infrastructure, waste disposal, agriculture... the list goes on.

The time of the non-ubiquitous firm is over.

AFGAM aren't software companies; they are innovation conglomerates. I mean, why do you think Google restructured into Alphabet?


Right, but this has less to do with the fundamentals of ML as a technology than with the less sexy explanation of better management practices.


They are inextricably linked. If you organize your company around ML, as Google, Facebook, Amazon, Microsoft and Apple have stated they are doing, then yes it does have to do with the fundamentals of ML.

Advances and implementations of ML are driving the creation and iteration of products for these companies, as well as helping them run their companies and evaluate where to go next - follow the data, as it were.

So it is a critical point that every one of their CEOs has outlined in clear detail over the past few years.


You may be right for the time being, but I don't think we're permanently entering a brave new world where machine learning startups are doomed to lose to the big tech companies. It's well known that existing machine learning algorithms are extremely inefficient learners, which is why you need enormous datasets and computing resources to be competitive currently. Right now, for instance, a computer would need to see thousands of different images of a new animal to recognize it consistently, whereas a human only needs to see a single hand-drawn sketch. If someone can develop machine learning techniques that make computers generalize as flexibly as humans, that would be a huge game changer and would completely upend the advantages that the big tech companies have.


>Right now, for instance, a computer would need to see thousands of different images of a new animal to recognize it consistently, whereas a human only needs to see a single hand-drawn sketch.

A valid question here would be: would a fresh human mind perform that well too? Or maybe such performance is conditioned on having at least several years of experience living on planet Earth, and thus having models of animals and everything else around built from the billions of images human eyes generate during those years?

> It's well known that existing machine learning algorithms are extremely inefficient learners, which is why you need enormous datasets and computing resources to be competitive currently.

When put into perspective - i.e., compared with the cardinality of the above-mentioned set of images human eyes generate, and with the computational performance of the brain (~40K TFLOPS) - computers don't look like such inefficient learners to me.
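For a sense of scale, here's a hypothetical back-of-envelope estimate of that image set (every figure below is an assumption chosen for illustration, not a measurement):

```python
# Rough count of distinct visual "frames" a child's eyes might deliver
# by age 5. All numbers here are loose assumptions for illustration.
years = 5
waking_hours_per_day = 12
effective_fps = 10  # assume ~10 usefully distinct snapshots per second

seconds_awake = years * 365 * waking_hours_per_day * 3600
frames = seconds_awake * effective_fps
print(f"~{frames:,} frames")  # ~788,400,000 - on the order of a billion images
```

Under these assumptions, a child's "training set" is comparable in size to the image datasets the big labs train on, which makes machine learners look less hopelessly inefficient.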


> valid question here would be - would a fresh human mind perform that well too?

I think it's questions like this that will point the way forward. Humans definitely don't come working at 100% as soon as we're born. Lots of interaction with the environment is necessary to become a functioning person. But at the same time, we still seem to have a real ability to generalize hard-wired in. A human in a small society might only speak with a few dozen individuals growing up, but they don't get confused when they meet a new person and hear a new voice.

It seems to me that lots of data will be necessary to get things off the ground - but I think the sort of "big data" needed for this will be qualitatively different from the big datasets used to train modern state-of-the-art models. My hunch is that it will involve an agent's interactions with its environment (so it will look something like reinforcement learning).


For a playful yet pretty interesting take on "psychogenesis" I recommend reading Diaspora by Greg Egan.

Also, we are born with very good models (for faces, cognition of human languages plus speaking and hearing them, seeing things on Earth, movement coordination, and other cognitive fundamentals like counting and categorization), and all of that really well packaged as a starter kit: a do-it-yourself general intelligence, with an amazing supervisory framework (parents, peers, society, reflective optimization of one's actions) and reinforcement (emotions - memory coding is waaay too drastically modulated by emotions).

And we're still not sure what else is in there, how all this communicates. What's the operating system of consciousness? (We "know" that consciousness is just a "program", but is it the main thread, is it a scheduler, what's the right mental model for understanding the interaction of brain faculties, components of cognition and consciousness itself?)


> A human in a small society might only speak with a few dozen individuals growing up, but they don't get confused when they meet a new person and hear a new voice.

It does seem pretty well established that some humans overfit on the particular aspects of verbal communication that we generally call "accent" (not to mention actual lects), even when given access to a much larger and more diverse set of individuals and their utterances to generalise from.


> If someone can develop machine learning techniques that make computers generalize as flexibly as humans

Funny you should mention that...

DeepMind's (acquired by Google in 2014) stated goal is to "solve intelligence". Demis has gone on to say elsewhere that their goal is to create AGI.

Also, the data you need to get there is owned by a few companies. Also those companies hire effectively every great ML PhD on the planet.

> It's well known that existing machine learning algorithms are extremely inefficient learners

Right, which is why the heads of research at AFGAM are the pre-eminent minds in the field (LeCun, Hinton, etc.) and are actively working on this along with unsupervised learning. These companies are either funding the university labs that are developing this stuff or doing it in house.


On the other hand, #3 might in this case be exactly that: Small companies creating accurate & precise modeling techniques using only a limited amount of data, while giants continue to rely on the sheer and blunt volume of it.


>Small companies creating accurate & precise modeling techniques using only a limited amount of data, while giants continue to rely on the sheer and blunt volume of it.

Until, of course, sheer and blunt volume allows you to build more accurate and precise models. To me this is exactly the core of the current AI wave - the models emerging inside deep representational layers are complex, accurate, and precise way beyond whatever human analytical thinking could have built or even just come up with. Just a couple of days ago a friend was telling me about one specific multi-year effort, a few years back, to constructively build a practical model for sentiment analysis, which I in turn contrasted with that recent OpenAI result of a "sentiment neuron" emerging from [unsupervised] learning of the next character in Amazon reviews.


> Machine learning might stop this cycle by breaking step #3.

Yeah, that's exactly what I think is happening, and in fact that's one of the core promises of being first into AI - you use it to move faster than anyone else can, because it gives you better intelligence and prediction.

Even more so with AGI, which DeepMind (an Alphabet company) has explicitly stated they are trying to create.


I've been wondering the same thing. It is why I don't like the fact that we need increasingly specialized types of computers to do ML. This is also why I am very interested in research that is not based on deep neural networks, which seem to be especially hungry in terms of training data and hardware power.


Look at the flipside - for the people who want to focus on applications of ML instead of futzing with platforms and implementations this sort of environment is a godsend. Compare what it took to keep a server/website/database up in the late 90's compared to something turnkey like Heroku today. It frees up a lot of developer time and energy for building the next layer of innovative things!


The "flipside" is that some of the best minds in AI basically have only 2 choices coming out of school:

- AFGAM

- Military

The period of being able to build your own competitive AI startup was...what, like 3 years between 2010 and 2013?


Frankly, I think your belief is due to a lack of knowledge. As a current PhD student who knows several ML researchers at Google, I can say there is a fair bit of marketing involved in creating the perception of Google supremacy. While no doubt Google is a leader in AI, their advantage is not as great as it would appear from reading the comments here.

At the same time, other companies - Apple, Salesforce, Amazon, Oracle, and several smaller ones - are well aware of the threat posed by Google and are always on the lookout for an ML/CV startup that can fill a gap in their products relative to Google's offering. E.g., Amazon acquired Orbeus around the same time Google launched Cloud Vision.


We fund an ML research arm at the University of Alberta; give me a break with this.

I'm not talking about just Google. It's AFGAM: Apple, Facebook, Google, Amazon, Microsoft.

You can add Baidu in there too probably. That's it though.

> always on the lookout for ML/CV startup that can fill the gap in products relative to Googles offering

Right - acquire, not compete. That's the point. You can't actually compete with these companies the way challengers did in the past with HP, Dell, Nokia, etc.; the best you can hope for is to be acquired.


Bless you for defining "AFGAM" - search engines weren't helping, and I was trying to put things like ML and AI in for context.


OK. I'll coin the phrase: Afgamistan.


This is an excellent initialismanteau, I will have to use it.


Ha, sorry about that. It's also been called FAMGA[1].

[1]https://www.google.com/search?q=FAMGA


Funding is not knowing.

There are always new markets which get ignored. Apple, Facebook, Google, Amazon, and Microsoft did not compete with HP, Dell, and Nokia; they essentially entered a new market. Most ML startups don't offer a new product but rather just enable features, and acquisition is often the intended end goal.


You're missing the point.

We expect that DNNs will be a fundamental thread of computing going forward. AFGAM has effectively locked up all of the talent and data in the broadest markets already. Not only that, they are building and deploying the frameworks that new products in those other markets will be built on. So they don't need to create the product; they just need YOU to use TensorFlow or CUDA or whatever they come up with that runs on Google Cloud GPUs or Azure Compute, etc.

What you're missing is that it's not about making a few million as a startup. It's that they are so far ahead in what is the most fundamental, game-changing, final human technology that I don't see a future where they are unseated.


While deep learning and related technologies are exciting and extremely powerful, they have not yet proven that they will achieve the "final human technology" status that you are proposing. We may be centuries away from that, and at this point trying to speculate is foolish. However, even if they don't live up to those lofty outlandish prospects, they will still prove very valuable and useful.


What is "proven" to you? I mean, the tech displayed today is basically unmatched by most humans: curating the best shots in Google Photos, all the integration with Google Home, Lens that can identify almost any object in meatspace... the list goes on and on. It is not going to regress somehow... I mean, it is probably increasing in power exponentially by the day.

There is no competing on AI against a company that has Tensor Processing Units on top of the absolutely incomprehensible data store Google has amassed - especially if you are not some world-class mathematician or computer scientist. The best you can hope for, as a stand-alone entity in the ML/AI space, is to be acquired. In most cases they can probably safely ignore the bulk of AI startups.


It seems like picking the best photos, at least, is a pretty forgiving task. It's subjective, and I expect people will accept any set of pretty good shots.

Also, those TPUs are available for rent.


Definitely forgiving, but I think you are missing the point. The fact is, Google owns the whole process from top to bottom.

It's nice that TPUs are available for rent, but to what end? If I run something truly innovative on their hardware (TPUs), with their software platform (TensorFlow)... is that actually innovation? Will they have the means to just copy it? How do I compete with that?


Would you say the same thing of Go or Chrome? These are tools anyone can use. Building something higher in the stack isn't cheating, it's just reuse. (Snapchat is built on Google Cloud, for example.)

As always you want to be careful about dependencies, but that doesn't mean rewriting everything.


Yes, I would, as there is no way for me to truly compete with Google if I am using their tools. The premise is so flawed that I have trouble even formulating the possible counterarguments you may suggest.

I use Chrome, and Little Snitch is always telling on it. It calls home so much that I have just given up and, in order to have a decent experience, gave it full connection rights. I don't know how much of the browser market Chrome owns, but I guess it is not insignificant. That is a lot of data that I will never have access to... let alone my family, nor at anything like Google scale.

Netflix uses AWS and competes with Amazon Prime... I get it. I do think that no one knows the whole story.

Getting back to OP... it has to be damn tiring/frustrating to even contemplate competing in the AI/ML space with these big megacorps. I simply would not; that ship has sailed.


That's the age old question - what if Facebook copies what you did, in a week? Like they did with Twitter or Snapchat?

By the time they copy you, you're already big. I don't really see the problem, as long as the tech companies don't hoard the information and do spread the knowledge - which they are doing with initiatives like OpenAI and many more.

The big question is, what happens when they invent AI that builds its own programs and exponentially accelerates? Who needs startups at all then?


Bull. Instagram is eating Snapchat for lunch right now, for starters. Twitter is going down in flames soon enough as well. There is a massive consolidation taking place in social, and I think with tech in general. We are at a point where automation, AI/ML and resources are the next battle grounds. Sure there will be wins here and there, but like the app stores in 2014... the gold rush looks like it's over.


So in your opinion having accurate visual recognition/identification is the final human technology? We can stop now?


No, you didn't read what I wrote: it [Google's AI efforts] is not going to regress somehow... I mean, it is probably increasing in power exponentially by the day.

They not only have custom silicon (TPUs), they have the software too. Do you have extraordinary insight|foresight|genius, or have you discovered/developed a major breakthrough in AI/ML? If not, good luck competing with these companies. It is a resource problem at this stage; consolidation has happened and there is no going back.


That's still only computer vision and speech recognition. Google still does a piss-poor job of answering your questions using snippets.


I don't know - maybe I'm just irrationally optimistic but I don't believe anyone has "locked up all of the data".

You can download the whole Wikipedia data dump at under 100 GB uncompressed (text only; with media it's around a TB). The entire Common Crawl, with 3bn pages, is only around 250 TB. While the Wikipedia dataset is too large to fit in RAM for most people, and the Common Crawl is too big to fit on a single disk, you can process both in your own local cluster quite easily and relatively cheaply.
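As a sketch of how approachable local processing is, here's a minimal streaming parser for a MediaWiki XML export; it keeps memory flat no matter how large the dump is. (The namespace URI and dump filename below are illustrative assumptions; check the export version of the dump you actually download.)

```python
import io
import xml.etree.ElementTree as ET

# Namespace used by MediaWiki exports; the version varies between dumps.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def iter_pages(stream):
    """Yield (title, text) pairs from a MediaWiki XML export stream."""
    for _, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            yield title, text
            elem.clear()  # free the parsed subtree so memory stays flat

# Tiny in-memory sample standing in for a real dump such as
# enwiki-latest-pages-articles.xml.bz2 (open that with bz2.open(path, "rb")).
sample = b"""<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/">
  <page><title>Example</title>
    <revision><text>Hello world</text></revision>
  </page>
</mediawiki>"""

for title, text in iter_pages(io.BytesIO(sample)):
    print(title, len(text))  # prints: Example 11
```

The same pattern scales to the full dump because `iterparse` never holds more than one `<page>` subtree in memory at a time.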

Honestly I think the real breakthroughs to be made will be algorithmic and I don't believe those are out of reach for "civilians" outside of the tech giants.


While I think OP is a bit too pessimistic, in a way s/he is right. Facebook can design a simple experiment, implement the changes across their whole platform, roll it out, gather data, and iterate on it.

You are stuck with 25 TB of Wikipedia for the rest of forever. And "academic datasets".

Because the important data is probably not inferable from web links or otherwise semi-passive Internet structures.

That said, I think the missing component is still a good mind theory.


Humans don't run split tests on alternate realities, but they can learn from Wikipedia alone - I think we just don't know quite how yet.

We need a lot of data currently because algorithms don't generalize like people do.

I'm not convinced you need to be a tech giant to be able to make that breakthrough - I think it's a problem of approach, not a lack of data.


People ingest a lot of data, especially when they are young. They touch everything, look at everything in funny ways, chew on things, and so on - they try to interact with everything to gain data.

It's just not textual.


Great, so who are the people going to make those breakthroughs?

Last I checked they all work for...AFGAM (or are teaching)


> final human technology

Can you share your reasoning behind this opinion? Humans have invented a lot of tools and technological platforms over the years, what makes you think ML is the final frontier?


A lot of people have written about this over the years. I'll just let Yoshua Bengio go into it:

[1]https://www.youtube.com/watch?v=ZHYXp3gJCaI


> that I don't see a future where they are unseated.

The way I see it, consumers (at least right now) are free not to buy into the hype and consume whatever it is that these companies produce. So if prices are too high or quality too low, a new company can enter in.

The real danger I see is the possibility that essentially all consumers will be forced to pay for this stuff through some regulatory capture, in a similar fashion to the state of the healthcare industry in the US.


If AI is the high end of programming, then it's the equivalent of saying that a recent manufacturing grad can't build their own car assembly line and has to go to work for Toyota, Honda, et al.

Barriers to entry at the "top" of industries are generally higher than lower down.

I'm sympathetic to the sentiment.

Consumers might benefit from the conglomeration more than from a lot of independent groups each reinventing the wheel or spending resources on duplicative infrastructure.

There are trade offs of course. In particular, I really don't want one single ML platform to have all my data and know me that well.


The metaphor would be that no combination of people will ever be able to compete with Toyota, Honda, etc., and if they tried, they would be acquired or beaten to death by Honda leveraging its massive scale.


But what about Tesla? It managed somehow to compete with Toyota, Honda, etc., although it had to create a new market (the market for electric cars) to do it.


Korean car manufacturers created now-mature brands from scratch. All it took was billions of dollars and a couple of decades. Can a startup do it? No. Is the market locked up? No as well.


Despite the hype and genuinely great technology and growth, Tesla is still nowhere near competitive with the big automotive firms.

So while Tesla has definitely driven innovation here, there are still MAJOR hurdles to them being globally competitive.

Further, it took billions and someone like Elon Musk to get even this far; in that context, if even Musk and tons of government dollars can't get there, it's safe to say nobody can.


Tesla isn't competing with Toyota... they are competing with Porsche and Rolls, etc. Further, they are a mess financially and have not proven to have staying power.


As of 2016 investors were still throwing money at 'ML for x' startups.


Yeah, why not, Google will buy them if they are okay.


I agree with this point. One can view this as an extension of the current trend toward platform/infrastructure outsourcing. If AI and ML are deemed essential to future apps, we should be just as comfortable consuming them via a PaaS as we are with leveraging other parts of a cloud infrastructure.

A valid concern, as pointed out, is when the large platform providers begin moving into domain-specific markets, leveraging their scale and closing out start-ups/small business. However, I believe nimbler startups will always be able to carve out a successful niche as long as they are closer to the customers.


> However, I believe nimbler startups will always be able to carve out a successful niche as long as they are closer to the customers.

The difference is where the majority of the "idle wealth" is created. Sure, there will be great business in consulting work leveraging cloud providers - you can make a good business simply consulting on such things.

However, in the end you're simply enriching a large company. Whenever an "Amazon cloud consultant" transitions a closet of servers to AWS, they make a bit of money, but the bulk of the value of that customer is now in a giant corporation's hands, versus being spread out the way it traditionally was (either in house, or via much smaller infrastructure providers that generally competed on service without much lock-in).

The lack of competition in the web space is astounding to me, and will be incredibly damaging to innovation. How quickly people forget why decentralization used to be considered such an important goal. Yet the entire industry appears to be rushing headlong into this model.


One thing, though: although 1:1 specialized and individual service is always superior, the providers aren't well paid. This means that startups, though better than large companies in a certain respect, can no longer become rich, because they are no longer capable of creating a barrier to entry. A butler provides good service to you individually, yes, but he can also be replaced fairly easily (relearning your quirks and stuff).

Startups then return to what they are in most other countries: just a way to make a living, rather than the potential path to riches they are in the Bay Area.

As a side effect you get smaller elites controlling more of the pie. That has other implications as well.


The stupid money from spying-heavy web/mobile advertising is suppressing other business models that would be friendlier to smaller companies. Until it's gone (which will likely require a law - please oh please), we (entrepreneurs and consumers alike) are stuck with a handful of free-but-we-spy-on-you solutions, either provided by one of the big tech companies or soon to be acquired by them, and little else.


I think you need to pick a different niche. I don't know what your computer vision work focuses on, but try to pick something the big players aren't focusing on or that has less mass appeal. It also says to me that if you become a domain expert for this niche and develop the models, etc., you are ripe to get acquired by one of the major players as automation (and software) continues to eat the world.


It's called commoditization. It's a pretty common phenomenon that follows the same well-known cycle: first disrupting, then ushering the whole industry into a new era.

Nothing to worry about, unless you are trying to create these commoditized items, which is pretty much a doomed undertaking.


Not really; deep learning is still a research subject. It's not a commodity for small companies. It's probably a commodity for Google (e.g., they can use deep learning to design optimal deep learning systems).


Here's the central issue for someone outside the CS industry: I have large scale, high value data in a competitive, highly regulated, exciting market, something that bends GDP: defense, aerospace, healthcare, transportation. It's not like I got a bunch of bored tweens in the Valley to share pics of their food. I have to sell my organization, which may be looking at a decade or more of data, that my CS collaborator can a) handle the data at scale, b) secure the data, c) hold on long enough to get to market. Because a couple years won't even get you to the FDA's or FAA's doorstep. NASA, DoD? FRA, NHTSA? You're looking at a 10 year lead time. You're looking at hosting tens of millions, hundreds of millions of dollars of data, running an equal amount of compute, and doing it for the next 10 years. Not the last 10 years. The next 10.

I can't, cannot, convince the org to bet the future on the ability and moral grounding of two random guys in a garage. They wouldn't even be able to wrap their heads around it, the difference in scales is too big: "You want to give our 10 PB of data to two guys in a garage? What? Get out. In fact, go get a new job, you're done here."

These deals are being signed by people who worry about capitalized words, like Gross National Product or Personal Health Information or Mars. We're talking orgs with tens of thousands of people who have committed their lives to a thing. We're not trying to use Dutch auctions to steer 20-somethings in Orange County to the right sushi restaurant. Which is a sweet trick, don't get me wrong. But it's a bit different when we're trying to protect nations and their economies, and maybe just save the world or the human race.

Do you know what the US Government defines as "a large firm" for contracting purposes? 200 people. That's about right. You and I both know that those same two guys working for AFGAM are still the same two guys. But I need to contract with an entity wherein, if those guys I'm collaborating with go south, I have recourse with the Inc to keep the project on track. They can fire the guys, bring in new guys, subcontract some guys, whatever. But I've got to convince serious people that I can maintain a 10 year trajectory in a white hot, globally competitive market.


I'm not entirely clear what you are complaining about, but "I can't, cannot, convince the org to bet the future on the ability and moral grounding of two random guys in a garage" sounds like a pretty reasonable stance for an organization to take?

Enterprise sales takes lots of demos, trials on subsets of data, support arrangements, relationship building etc.

> NASA, DoD? FRA, NHTSA? You're looking at a 10 year lead time.

DARPA programs have a fixed, 2 year span, and they aren't allowed to be renewed. It's deliver something in 2 years or fail.

> You and I both know that those same two guys working for AFGAM are still the same two guys.

But they're not. Or at least, those two people don't deliver a product; they deliver research outcomes, which then take double the number of people to turn into reliable software, which then takes double that number of people to make deployable and run at scale, which then takes double that number again to support. And then you need a sales team.


I'm not complaining, and I didn't say anything about DARPA. The parent was complaining that it's going to get harder to break into ML as a start-up. My point was to illustrate some of the reasons that big players want to play with other big players.


DARPA is DoD.


DARPA is a small fraction of DoD: it represents 3.6% of the R&D budget, and less than 0.5% of the total DoD budget.


Sure, the US is not a fast mover in government innovation, and it has reasons to be conservative. (For example, it's really big and complex [a federal republic], and so on.)

But there are smaller orgs (say Mexico, or Spain, or Greece, or just a town in Germany); they might try whatever they think helps them.


Could you set yourself up as an independent unit under a larger company?


Someone convinced the CIA to drop a big fraction of a billion dollars on AWS, like 3-4 years ago. So it appears to be within the realm of feasibility.


Right, AWS. Not a couple of guys building out a cluster in their garage. That's my whole point.


Keep in mind you're watching a marketing demo.


I know this to be the case. It's all carefully staged for the moment.


The nice thing that saves ML from being just another way for AFGAM to dominate and force all the innovation into a highly concentrated group of geniuses is transfer learning! You can download ImageNet-trained brains that classify stuff just as well as in the competitions, then adapt them to various different applications. Google has been nice about releasing some of these models, and they'll form the basis of open robot brains, etc. in the future. There will probably be an Open Robot distro pre-packaged with a bunch of these brains and applications in the future.

Little companies can adapt these brains to niches that are too small for the big guys.


Transfer learning is great, but it's still pretty limited; the main success stories are retraining CNNs from ImageNet on other datasets, and word2vec/GloVe.

In the space of all the things you would want to do with ML, this is pretty limited.

[EDIT]: I stumbled upon a transfer-learning-for-NER paper a second ago: https://arxiv.org/abs/1705.06273v1 And the results are not conclusive. You can improve your results in truly data-impoverished regimes, but you get diminishing returns as you add more data, so the only way to catch up is to get more data.

So, you may be able to get reasonable performance with fewer samples, but you will still get better performance with more samples, and the more samples you get, the less benefit you get from transfer learning.

This may not always hold, and I wonder if active learning paired with transfer learning would do better, but at the moment it's not convincing.


It's always been about the data. Algorithmic talent and effort are at best a second-order effect - and this has always been true.

As a result, you would expect success in this area to be mostly driven by the data. In other words: this is playing out exactly how you would expect, no?


Oh it totally is, it's just one of those things that I think most of us expected to be more distributed rather than relegated to a few silos of data.

That's not to say there aren't opportunities, it's just that if anyone makes significant progress in one of those other areas, it's either on the back of AFGAM or will eventually be absorbed by them through acquisition/targeting.


> No clue where to go from here honestly

Would you be opposed to joining one of their AI teams? Clearly you believe they are building great stuff...


Not the GP, but many people in this situation cannot move to one of the few places the AFGAMs have offices (let alone the fewer places they've located their AI/ML teams), and working remotely doesn't seem to really be an option.


Except, their AI is simultaneously too stupid to preempt problems like the story we had with the Firebase customer. To generalize this a bit more, their AI is certainly too stupid to preempt similar problems with over-billing in general (though you can certainly create a little conspiracy around that one). And then to generalize even one layer higher, their AI is also too stupid to identify candidates who might get frustrated with their customer support and turn to the HN bully pulpit. And lastly, come on, they can't even predict the election winner! Otherwise why would they throw their weight behind the losing candidate? :-)


Because that was not AI, and that was not a problem they set out to solve.


You can outcompete Google simply by not deprecating your services, keeping them available, and having good customer service. Look at Clarifai: when benchmarked against Google's and Microsoft's APIs, Clarifai was more accurate.


I'm glad to see this comment at the top and setting the direction of the discussion. A year or two ago, this thread would have been dominated by ML is 'just' this or that, ML can only do XYZ, it's not AI, it's certainly not a path to AGI, and nothing is.


"IMO ML/AI is the last frontier for technology (an IMO humanity)"

This is too extreme. It is the last frontier of some set of problems, but it just introduces a new set of problems. This always happens. Even the implausible singularity still introduces more problems to solve.


I think the point is that our role in solving those problems diminishes. Certainly some, and maybe a lot.


No, ML at this level is still statistical inference. The next step is about structure and generalization.


To balance out my other comment, check out Indica Labs. It's one guy in New Mexico: http://www.indicalab.com/


To be honest, your sentiment comes from selfishness. You want a fair playing field so that you can rise up, but at the cost of slowing down progress for the entire world. Some fields are meant to be dominated, and that's part of life, man.


There are many reasons monopolies/duopolies are bad, it's not some selfish game.


That's not really the case here, but even if it is, these big guys are producing most of the work for everyone to lean on. Look at all the open-source initiatives out there. Don't stop progress for the sake of money.


I feel for you man. Same stuff is happening in IoT.


I think IoT is even worse, since the actual tech isn't new or innovative, just being applied everywhere because it has reached a certain price point now. It's also a badly integrated insecure mess currently that has a lot of bugs. Which screams for a big player to take over.


Yes, access to data is everything.


I know it's just par for the course, but I always get annoyed how peppy and fake these keynotes are. When they go with this kind of faux-personal style where they're staging calls to their mother on stage, etc. It actually comes across as the opposite of personal and just seems really rehearsed and distracting. I'm actually not sure who they're speaking to exactly when they go with this approach as I feel like most other people would have the same reaction as me. But I guess I don't know that for sure.


Apparently Steve Jobs calling Starbucks (during the reveal of the first iPhone) to order 4,000 lattes was an actual call to an unsuspecting Starbucks. Years later they were still getting people calling up trying to order 4,000 drinks.

https://www.fastcompany.com/3006147/because-steve-jobss-firs...


The article doesn't say anything about Starbucks not knowing about it. The employee who picked up apparently didn't know but I'd be surprised if Apple didn't get the go ahead from Starbucks first.


I got the feeling that they were intentionally trying to push the point that their employees have families, to counter the stereotype of the 20-something, childless workaholic that many have about Google. But part of its feeling staged is likely due to the fact that the presenters are real Google employees whose skills as actors are pretty limited. If you had professional actors presenting, it might feel more authentic despite actually being less authentic.


Then calling their mother is the wrong play, since that is exactly who a 20-something childless workaholic would be calling for lack of a wife / children / friends...


I suggest purchasing a scarf, growing a beard, wearing some flannel and extremely thick glasses, and getting some cold brew coffee to sip. Things will make more sense then.


> purchasing a scarf

either steal a used one, or knit one yourself


I read: I suggest purchasing a scanf :(


The Microsoft Surface keynote guy does his really well and comes across fairly authentically. But it seems like the audience never gets his occasional joke.


Yeah, I actually have found those presentations to be more palatable somehow.


I don't think we're really the audience for these conferences, though.

This is to generate hype for the company, and to encourage people to buy google's products. Pretty much everything on here so far has been consumer-targeted.


That's kinda my point though. Are there really any non-developers watching this? I don't feel like there are probably many consumers watching since Google just doesn't have the rock-star power that Apple does. Maybe there are tech journalists watching or other similar industry figures, but I don't feel like this presentational style is actually required, on a psychological level, to engage with people like that. In truth, I guess I just enjoy poking fun at the circus of American big-business culture.


It's more to influence journalist to couch their features in human-relatable terms. It's about showing off specific, widely applicable use cases. 'Google launches another photo sharing service' vs 'Google launches Pics, parents can send grandma physical photo albums of their grandkids.'


Occasionally the media will pick out a couple short segments from these and broadcast them. I think that's the audience.

Also, these keynote things are just cheap facsimiles of Stevenotes. Apple's own keynotes have started to slide down to this level without him.


There are currently over 100,000 people watching this, and it is the number one trending thing on youtube.com

I think google has more of the rockstar status than apple does, or even ever did.


I'm not sure those two facts unequivocally prove your point.


No, I have the same reaction. The execution just isn't there as often as it should be. And as others point out, the result is often a bit of awkward silence. Not a big deal in the scheme of things, though.


Quoting my friend, 'I feel like someone is going to get slimed soon.'


That basically happened during the YouTube bit.


Maybe they have some internal guidelines that everybody needs to follow.

Similar to

> Do.

> Strive for images that represent genuine stories.

https://material.io/guidelines/style/imagery.html#imagery-be...


I'm sure they do. One of the mottos at my company is "attack for progress", whatever that means. I think every sufficiently large business has a funny tendency to start making up wacky culture codes.


So Youtube is announcing "super chats"... Basically if you pay, your message to a live stream is highlighted so that the streamer can notice it.

It's basically the camgirl business model, fifteen years later.

Nothing bad, just...lol.


The way she said "Cool, no?!" after demonstrating this, and the little silence after, was halfway embarrassing...

I mean, they sure DO know that this feature has been on Twitch for a year now, right? It should have been a side note somewhere or an announcement on a product blog, not a full-blown stage demo...


I think the point of it being presented at I/O was a new API that lets Youtube streamers sync up the "superchats" with anything they want in the real world.


Why didn't Twitter make this money a long time ago? All those tweets scrolling across TV for free...


Because tweets are mostly curated for TV audiences. Paying for top tweets would show that the editorial staff doesn't give a flying fudge about their audience and simply wants to make money (money which you are also giving them by sitting through commercials).


Because Twitter can't deliver any kind of new product.


A more relevant comparison would be donations and cheering on Twitch.


I basically never used twitch, not my thing.

But still, I guess they also "copied" this feature :)


Don't worry, when Twitch added it, there were jokes for months about it becoming one of "those sites". It is an easy comparison to make, but to be fair, the porn industry has always been at the forefront when it comes to innovation: things like video players, live streaming, donations, etc.

Realistically though, I don't see anything wrong with someone supporting a channel they enjoy. I honestly wish Youtube had more of it. It's ridiculous how many creators have to use 3rd party services like Patreon. It makes so much sense for Google to have it built in instead.


Youtube was far behind porn sites' HTML5 video players for years. It was in 'beta' forever. I always thought it was strange how slow they were to improve it, being a billion-dollar company and all, with a pretty straightforward product page where people spend 99% of their time engaging with a single piece of tech.


It's also definitely been used on Twitch in exactly the way it was by those they "copied" it from. They cracked down on it, but there's only so much you can do with dress-code policies etc. Considering there's more than a hint of this already all over Youtube without this feature, I hope their moderators are prepared for some long hours.


I've always been somewhat underwhelmed by AI stuff but Google is rocking it. As a long time iPhone user I'm really tempted by their stuff (if it wasn't for the privacy creep). Apple isn't even in the same league as Google when it comes to AI and services.


> if it wasn't for the privacy creep

The sad thing is, I'd pay Google $500+/yr to use their tech... if they could guarantee a firewall between any of my data moving outside of Google-proper.


What do you mean? Google doesn't re-sell your data to third parties - that wouldn't make any sense. Third parties pay (bid) for the chance to put their ads in front of you...


Not making sense doesn't translate to a guarantee. Third parties also include governments.


(I work at google)

FWIW the privacy policies pretty clearly outline that your information isn't given to third parties. So this is already guaranteed.


I think what would make ethbro more confident, and myself as well, would be a way to pay that included a contract ensuring my data could not be used in certain ways without my express permission.

To clarify, I don't worry about the Google of today, I worry about the Google a decade from now, or even worse, the company that buys a chunk of it down the line that feels no such compunctions about using that data however they see fit. A legal contract would go a long ways towards preventing this possibility.


I'm obviously not a lawyer, but to my knowledge, a privacy policy is a contract that ensures that your data cannot be used in certain ways without your express consent. A future google would have the option to update their privacy policy, but they couldn't use your information unless you opted in to the new policy (which companies often do by saying "if you continue to use this, you opt in").


If I'm looking at the right privacy policy (was searching for the general one which would cover Google Analytics un-logged-in tracking), here's a couple of things that struck me as being fairly open ended. Especially for a company as large as Google (and future Google).

From: https://www.google.com/policies/privacy/#infouse

"We may combine personal information from one service with information, including personal information, from other Google services – for example to make it easier to share things with people you know. Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google."

Also: https://www.google.com/policies/privacy/#nosharing

"We may share non-personally identifiable information publicly and with our partners – like publishers, advertisers or connected sites. For example, we may share information publicly to show trends about the general use of our services."

I don't want to get into a "Google's creepy" discussion, as I realize that's a personal choice and a lot of people don't have a problem with it these days.

But I do. And I'd be willing to put my money where my mouth is if that were an option.


> but they couldn't use your information unless you opted in to the new policy (which companies often do by saying "if you continue to use this, you opt in").

Yes. I'm working under the assumption (which may or may not be correct) that an official legal contract (and not just a privacy policy which may be enforceable as a contract depending on locale) would both require more formal acceptance (i.e. typing in your name and the date for a digital signature and a note that it's illegal to sign if you are not that person instead of just a click on a button saying "I accept"), and provide a much easier time for any individual wishing to pursue legal action against an entity that violated it (which should keep most of those violations at bay).

In other words, I see privacy policies as new, unstandardized, not taken very seriously by many companies, and possibly useless in some jurisdictions. That may or may not be accurate, as I have no legal experience.


Then wouldn't the right course of action be to talk to a lawyer and not ask google to charge you for something they already do for free?


Whether they charge you or not is irrelevant to the point I'm making. Whether they make it part of the standard offerings, or charge for it, some people would like some assurance that they have actual legal control over their data and it's harder to subvert than an accidental click, and they have some recourse should that data be abused.

This is not a problem unique to Google. I'm far more worried about Facebook than Google with regard to this.


There are various ways the existing systems can be gamed to extract information through ancillary channels, particularly WRT AdWords/Doubleclick.

It would be really nice if I could know that Google guaranteed that, for example, I would never see any ad targeted at a cohort smaller than ~100k individuals, especially including geotargeting (e.g. no ads targeted at mid-forties male Python developers, with a Github account, who are dual-national American citizens, are native speakers of Hebrew and English, in Albuquerque, with an Android phone, with Linux as their desktop OS, that have recently browsed Quora).

Also disallow targeting employees of specific companies entirely, and a few similar dodges.

This wouldn't eliminate the problem, but it would increase the effort/cost required to bisect cohorts, cross reference with external data sources, and incidentally reveal personal data.
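Mechanically, the proposed floor is easy to state: count how many users a campaign's combined targeting criteria actually match, and refuse to serve the campaign if that number is under the threshold. A toy Python sketch (the `allow_campaign` function and attribute layout are mine, purely illustrative):

```python
MIN_COHORT = 100_000  # the ~100k floor proposed above (illustrative number)

def allow_campaign(targeting, users, min_cohort=MIN_COHORT):
    """Serve a campaign only if its *combined* targeting criteria still
    match at least min_cohort users. Intersecting attributes is exactly
    how cohorts get bisected down toward individuals."""
    matched = sum(
        1 for u in users
        if all(u.get(attr) == value for attr, value in targeting.items())
    )
    return matched >= min_cohort
```

Each extra attribute can only shrink the matched set, which is why stacking enough of them (city + employer + OS + language + ...) effectively names one person; a hard floor on the intersection size raises the cost of that trick.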


To phrase it another way then, that I can control exactly who is sold my data. Whether directly, or indirectly (via Google proxying ad buys or other aggregate monetization methods).

Point being: my data is worth $X to Google. Based on this, they develop and offer me services, using $X to fund development.

The monetization creeps me out, mostly through the conflict between my privacy goals and Google's profit goals. As with another poster, I'm worried about my Google file being in the hands of 20-years-from-now Google, not so much the Google of today.

So I'd rather just pay Google directly, call it even, and be certain that my data wasn't in the "pot of gold" pool that everyone's eyeing for whatever new monetization strategy occurs to a Google exec.

And it sucks, because I'd love to use Google Now, Home, etc. I just don't trust them with those data streams.


First, Google Now/Home is pretty shit. At least around here (Hungary).

Second, if you worry about your file getting in the wrong hands (either now or later), then paying them won't help with that. To provide the services they provide they need to have that file.


I think it's more useful in the US, but more importantly, it stands to become much more useful as more AI-personalized services are rolled out. The downside is that Google needs to have the information in my file for it to be useful (or to build a shadow file without my knowledge).

Paying at least makes explicit the bargain we're striking. As opposed to Google offering me something for free, then having every incentive to suck as much money out of what they can glean from me, I'm instead simply paying them for the true cost of the services I consume.


I think Google very much thinks that if you start paying, you might eventually look for alternatives. And that'd lead to a pricing war. And so on.

And, more importantly, if they roll out a new service and you start using it, you'll immediately start thinking about the cost of using it. Or people will start to "demand" more for their money. (Things like support.)


On the other hand, in their favor, if I was paying for Google then I'd still be using it. And in the absence of new government-mandated information portability laws, they would still be able to have a file on me -- and therefore offer superior services than their information-less competitors.

But I do think you're right about the "branding" issues. They don't want to open that can of worms, because it reminds people that things aren't free and Google is making money to pay engineers and shareholders somehow.


Including subpoenas?


No legal contract would be capable of doing that.


They don't do that either. You can be paranoid if you want but the only time that happens is when compelled legally.


Then why not expose this alleged violation of privacy policy and collect your massive settlement?


I'm already spending a significant fraction of that amount on hardware, various subscriptions, and the occasional new release movie. My impression is that offering privacy at a premium might negatively affect the free tier, so it's off the table for now.


"Apple isn't even in the same league as Google when it comes to AI and services."

Not many companies are.


Are any?


Facebook is the closest.


They may be, but they're not using their AI knowledge to drive better features or experiences for their users. It seems like they're just using it to deliver better advertising.


Have you seen the F8 video about real-time photo/video enhancement via AI? They'd very much like to roll that out as soon as it works on most phones.


I think Amazon and Microsoft are closer.


Really?

Amazon recommendations are really bad and Echo is just answering predefined sentences.


I'd say OpenAI, Nvidia, and Facebook are the three closest. Amazon and Microsoft won't be close unless they choose to prioritize it.


Microsoft is actually doing quite well, Kaiming He was at MSRA before he left.


I'm sure in certain niches there are companies and people outperforming Google on AI stuff. Overall? None come to mind.


I would add Baidu as well


Before this keynote (and maybe after it too?), Alexa could do way more things than Google Assistant can.


Integrations != capability. I use an Echo Dot and it sucks at voice recognition half the time. I am dumping it very soon and getting a Home.


It's part of capability. Siri's almost useless because it's tied in with Apple services. Google's quite similar. I don't want a walled-garden assistant or to pick a different assistant for every possible service that could help me.


These conferences show some of the more "wow" AI tech, but in general, I love the little things it does. A few examples would be smart text selection on Android or Google Photos share suggestions. I like things that are seamless and truly improve your everyday life in small but meaningful ways.


Well, you're in luck: Google Assistant is now on the iPhone.


Is that true? Allo is on the iPhone and can interact with the Assistant, but that's a suboptimal way to use it, right?



Ah nice thanks, tho I assume Apple will never let us set it as the default (home button shortcut) ...


Not in a million years.


Apple knows it's not a product and won't monetize it anytime soon. There is no money in it. Yet.


Based on the Google I/O talk about Actions/Home/etc., it seems like they are speaking directly to Amazon. Clearly they aren't looking at Facebook, Apple, or Microsoft here. They have Alexa in their sights.

That said, I really hope they improve the hands-free SMS interactions. My phone tells me I have a text and asks me to say 'listen' to hear it, but it doesn't start listening for the word until it beeps, so we enter into a staccato repeated 'listen', wait, 'listen', wait, until finally the text is read. Then after it's read, it says "Say OK to send a response" rather than just "Would you like to respond?", where answering yes or no would be fine. It never vocalizes what special phrase you have to use to answer in the negative (if you're wondering, it's 'cancel'). So here's hoping for a much better dialoging system.


I don't know. Much of the reason I keep Facebook at all is that I share photos with family and friends... what they're talking about right now (Google Photos) pretty much solves that problem, with my friends/family needing nothing but a phone... which means I don't really need Facebook for the one reason I've kept it.


Same, this drives me nuts. I also should be able to interrupt it when sending a text, specifying a contact, or confirming -- I'm always ready to answer before it is ready. But like Maps, there seems to be a bizarre anti-UX in Google Now where the functionality has actually gotten worse over time.

Recall Google's 1-800 number and their SMS search service?? I used to point to those as examples of how user friendly I imagined interfaces would become. But we've headed in the other direction instead and of course those products are dead.


Oh man, it seems that Android Go is a better system than Android itself. The features they presented - like easy offline sharing, downloading YouTube videos, and up-front data usage management - are ones we should have had on normal Android phones for a long time.


Agreed. Greater efficiency, lower power and data usage... are those not things that premium phones can benefit from also?

I suppose you have to give up some features, but given how few of the features on most apps I do use, I think this could be an acceptable tradeoff.


Just wait until someone on the XDA forums creates an Android Go-based ROM with those features added back, but still with the more efficient kernel and base system.

I'm actually pretty stoked for this to be honest! Can you imagine if there was the possibility of switching those profiles in real-time, like an upgraded version of the "battery saver" that already exists.


Would it be possible to use Android Go on old devices that are stuck on old Android versions? If that's possible, it would be awesome.


Well, I'm guessing the apps at least will be available on the Play Store (if you don't already have an Android phone in the Go configuration). Here's the Youtube Go app: https://play.google.com/store/apps/details?id=com.google.and...

That being said, I love having a choice. A lot of people criticize apps being split up, like Allo/Duo, but I like it. I don't get the obsession with all-in-one apps. Youtube's strategy of splitting out Youtube Kids, Youtube Gaming, Youtube Music, etc makes so much more sense.

Some might want the heavy Youtube, some may want the light Youtube Go. You get to choose what your needs are. If you don't need all the extra crap, download the Go version!


But will it be available outside emerging markets? AFAIK Android One was not available in Europe. Screenshots of the YouTube Go app suggest India as the main market.

It's sad, because I wanted Android One, now I would like Android Go. However I fear that it will be "unavailable in your country".


The I/O keynote always bothers me. Apple and AWS have great Keynotes that get me excited to develop on their platforms. Google's always comes across as too self congratulating.


You mentioned that Apple has great Keynotes and at the same time you wrote that "Google's always too self congratulating"?

Didn't you mean the opposite? Apple is always so self-centered in their presentations that it is hard to watch.


They're self-centered, but it's about stuff that is relevant to me. All this talk about Google Assistant is useless to me as a developer. At least Apple talks about stuff I'm able to plug into.


Are you really watching the keynote? They literally just released TPUs to developers.

Last time I watched WWDC they had staged a standing ovation for a trash can...


Google Assistant should be useful to you in many ways as a developer. It's become an open platform from several directions.


A quarter of this presentation has been how Assistant can help your app integrate with the user via the SDK/Actions.


You can write apps for the Assistant now.


Wasn't Apple's last keynote literally about how Apple was brave for making it so that you could no longer plug 3.5mm headphones into stuff?


I believe there are supposed to be two keynotes this year? Product announcement and then a developer one right afterwards.


"Courage"


Google more self-congratulating than Apple? Come on.

They are just less polished in presentation. Which I like. Raw stuff is more interesting.


Wow, Kotlin is now a first class supported language with Android!


I feel nervous admitting this, but I don't see much of value here. I am definitely not typical, I realize, and often the major benefits of technology don't become apparent until a while after they are announced. But, right now, before people show me why it's amazing, I see these things as small marginal improvements.

I'm an open minded person, though, so I'm interested to learn what I'm missing.


I agree to some extent, but the Google photos features are always a really nice touch. Having it remind you to share pictures of people you just took a picture of, or being able to share your entire library with a spouse is really useful.


The demos always get a bit excessive. I'm no journalist, but I'd be way happier if they just told us what they added and moved on.


I agree, but I think this is also about training people to use the features. Something Apple is also doing at their events.


I feel like tech announcements should basically just be patch notes. Why all the pomp?


The answer is obvious: some people like it.


I can't help thinking this was not a business-as-usual I/O; it feels like the concretization of the exponential nature of technology development, spurred nowadays in good part by deep learning. Notice that a good portion of the products presented at I/O weren't just the result of many people grinding away on a problem (say, iterating on a new version of an operating system), but of "machines" that worked/trained on a problem in order to do something useful for us.

This is not news to me, nor to anyone here I am sure, but seeing it realized in a keynote where a company gives us an update on what was accomplished in roughly a year, and seeing it realized in a more holistic approach/platform (Google Assistant) instead of scattered across different products, made me realize that a fundamental mindset shift is happening on a more global scale: from makers ("I will make this machine/software so that it can help with X") to leaders of machines ("I will create the necessary conditions for the machine to help with X").


I thought it was funny how video previews and superchats have been around forever in porn. Google lens made me genuinely excited. The most important bit was also the shortest: designing nets with nets. I wonder what nn experts think of that little sidenote.


Watching this reminds me that the satire in HBO's Silicon Valley is pretty spot on.


Taking a shot each time they said "platform" in the keynote would get you alcohol poisoning.


This all looks awesome (e.g. Google Assistant, Google Lens, Google Home, etc.), but I would rather pay some money for the service and have a guarantee that my data is my data... than have it for free and have Google mine my data to sell me ads, or sell profiles to who knows whom.

Maybe others are willing to trade the privacy creep for it being "free" but I'm not.

It would be great to have the option.


Google simply does not do this. They are very transparent about their security and privacy policies and I encourage you to read through the policies. Google uses the data internally to improve your experience - they are not in the business of selling your data to third parties.


> Google [...] are not in the business of selling your data

Not today, not yet


If that is your concern, how would paying into the system improve things in any way?


Google most certainly does mine your data to sell ads. The incentives are very poorly aligned if you don't like ads and you want to use Google services.


That really doesn't make me feel any better.


Google Photos "shared libraries" looks awesome!


Just be careful not to take nudes while your kids' photos sit in a frame behind you. Auto-send to mom.


We're in for some fun times.


I would like to know more about that black magic of removing foreground objects from photos.



Thanks!


If you have moving subjects (i.e., people in a public place walking past), you can take multiple shots and finagle it with your preferred image editor. Other techniques use variations on this, or simply copy/paste from other parts of the image (the clone tool).
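
As a toy sketch of that multi-shot trick (my own illustration, not whatever Google actually ships): if the frames are aligned, a transient passer-by covers any given pixel in only a minority of shots, so a per-pixel median vote recovers the static background. Assuming NumPy and frames as arrays:

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel median across aligned exposures.

    Anything (like a passer-by) that covers a given pixel in fewer
    than half of the frames is voted out by the median.
    """
    stack = np.stack(frames).astype(np.float64)  # shape: (n, h, w)
    return np.median(stack, axis=0)

# Toy demo: a flat "background" of 100s, with a dark "pedestrian"
# blob in a different position in each of 5 frames.
frames = []
for i in range(5):
    f = np.full((8, 8), 100.0)
    f[2:5, i:i + 2] = 10.0  # transient obstruction
    frames.append(f)

clean = remove_transients(frames)  # every pixel back to 100.0
```

Real pipelines add frame alignment and seam blending on top, but the median vote is the core of the "tourist removal" effect.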


I know about clean-plate techniques like that, and even use them often in production... I hadn't realised it's the same thing. Looking at the link in the sibling comment to yours now.


Yeah, a lot of these things that get wrapped up as super-futuristic innovations that use the hottest buzzwords are really just ways to apply old techniques more efficiently.

Since phones provide a platform for a robust software camera, instead of requiring the user to manually snap 9 shots for HDR, the phone just does it for you in rapid succession. Some DSLRs will do this automatically too with 3 shots, but they are much worse at providing a space for robust software assistance.

It sounds like obstruction removal is the same kind of thing, where the phone actually captures a snippet of video and automatically differences that for you, instead of having the photographer take multiple photos and difference them manually in Photoshop (as has been done for a long time, e.g. [0]).

And since Google automatically uploads all of your content, they can analyze it on their servers and return an asynchronous result. They do this for auto-generated animations, panoramas, and movies; it doesn't all have to be performed on the local device and they can take their time.

This is not to pooh-pooh such developments; I think it's awesome that I can use Cardboard Camera and get a stereoscopic 3D image of my surroundings. Even purpose-built 360 cameras like the Theta S struggle with stereoscopy. I would love to see Canon or other camera makers innovate by providing a DSLR platform that makes it easy to load new software macros that enable cooler shooting and processing modes (with the caveat that the DSLR must never allow these to slow the device's operation in no-macro mode).

The best we can do now is a full-custom firmware like Magic Lantern [1], which is cool and all, but when I tried it on my 6D, the camera response time was much slower and the sound recording didn't work on the build I installed, resulting in a couple of home videos without audio. I took ML off and haven't been inclined to try again.

[0] http://www.deke.com/content/dekes-techniques-022-removing-pe...

[1] http://www.magiclantern.fm/


I remember this being done at SIGGRAPH a couple years ago. Super cool!


I remember Adobe showcasing this with Photoshop. I can't find the video right now, but it was basically about Photoshop automatically filling in almost half a screen of material. I'm looking for a specific video on YouTube.


It's in PS now. It works, but with so-so results; it depends. There was/is also Furnace from The Foundry, which is more like this but for video: https://www.foundry.com/products/nuke/plug-ins/furnace and https://www.youtube.com/watch?v=OLWlmNEa4d0


Adobe calls it "Content-Aware Fill", which is likely what you're thinking of. This was the video that blew everyone's mind, but it's 7 years old at this point:

https://www.youtube.com/watch?v=NH0aEp1oDOI
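
For a feel of what hole-filling does, here's a toy stand-in (emphatically not Adobe's actual patch-based algorithm, which copies whole texture patches; this is just naive diffusion inward from the hole's border), assuming NumPy:

```python
import numpy as np

def diffuse_fill(img, mask, iters=200):
    """Naive inpainting: repeatedly replace masked pixels with the
    mean of their 4-neighbors until the values settle. Real
    content-aware fill stitches in similar patches instead."""
    out = img.astype(np.float64).copy()
    out[mask] = out[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        # Only masked pixels change; known pixels stay fixed.
        out[mask] = (up + down + left + right)[mask] / 4.0
    return out

# Demo: a smooth horizontal gradient with a square hole punched in it.
h, w = 16, 16
img = np.tile(np.linspace(0.0, 255.0, w), (h, 1))
mask = np.zeros((h, w), dtype=bool)
mask[6:10, 6:10] = True
filled = diffuse_fill(img, mask)  # hole converges back to the gradient
```

Diffusion like this can only produce smooth color, which is why the patch-copying approach in the video is so much more impressive: it reproduces actual texture.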


That's a great one! This is the one I was referring to: https://youtu.be/xoyNiatRIh4?t=283 (2015)


Yep, that's the one. I just wanted to say that this tech isn't groundbreaking, especially since Adobe came up with it so long ago.


My wallet is ready. Hoping for some nice new devices.


Hope for some way of making sure Android doesn't turn into Windows XP (i.e., millions of no-longer-supported devices connected to the internet).


Doesn't? It's already a much bigger problem than Windows XP. Billions of Android devices have shipped, and a minuscule proportion get updates. Even the best devices for updates only get them for a couple of years.

If this isn't a disaster bigger than Windows XP, it'll only be because most of the devices end up in landfill.


This is an absurd comparison. The number of devices running Gingerbread (which is 7 years old) is around 1% [1]. When XP was 10 years old, its usage was much higher.

I understand that phones don't receive updates (I've had several such phones myself), but practically speaking it only takes about 3-4 years for the world to move to new Android releases. As of today, over 50% of devices are on Lollipop (3 years old) or newer, and over 75% are on KitKat (4 years old) or newer. I've been around long enough to remember the days when people were complaining that they would never be able to take advantage of Froyo. That has clearly not happened: Froyo is long dead and gone at this point (and it's worth noting that Froyo is about the right age to compare to XP at the time of its deprecation, and has nowhere near the usage).

The Android update situation is definitely bad for security, but it is simply not in the same class as Windows XP. I'm pretty sure that up until the point where Microsoft forcibly deprecated XP, there were still new computers (especially in places like China) being sold with it. With Android, old phones become obsolete and get replaced by new ones, and manufacturers do adopt the new OS versions for their new phones, which causes updates to make their way to users (albeit slowly). I don't think you could buy a new phone (even in China) with Froyo today.

[1]: https://developer.android.com/about/dashboards/index.html


How many Android 5 devices are still receiving security updates?


I came to this thread wondering if the Firebase issue from earlier was going to spill over, but yes, I'm also hoping for some cool stuff and keep refreshing Twitter :)


Samsung Chromebook Pro should be announced


Pro was announced months ago. Pre-orders just sold out on Amazon.


Was it? I couldn't find any anywhere. My bad.


Check out r/chromeos



I always hate their confusing sites.

Why is your link not showing the same thing as https://events.google.com/io/ ?

What should I be watching?

edit: It seems your link works and my stream does not. Thank you for stopping me from missing the event.


The irony: YouTube can't keep up with the streaming.


It can, it's your network that's the issue.


My network is fine; it handled the Build 2017 stream without problems.


Streaming on my TV in high def via my Chromecast without an issue.


LCD Soundsystem had this giant farewell show at Madison Square Garden, complete with a documentary and album, saying it was their last show ever. Now they've come back to play tech conferences? Lame.


They actually reunited a few years ago, have been playing shows since then, and are releasing a new album this year. They didn't get back together just to play a show at a tech conference.

https://en.wikipedia.org/wiki/LCD_Soundsystem#Reunion_and_ne...


Yeah, they were a headliner at Austin City Limits last year...


SuperChat sounds like Jackass coming to YouTube, can't wait! Parker/Stone satirised it 17 years ago in the "Fat Camp" episode. It's gonna be so awesome :)


Google is on full offense in ML/AI. Their computing advantage enables them to outperform everyone else at this point. AI dominance, even monopoly, is in sight.


I don't have time to watch this, but I would like to go back and find highlights and hear about the interesting stuff sooner rather than later. Can someone point out a good liveblog/livetweet source?



The Verge usually does a 10-minute summary video covering the big announcements, minus the annoying attempts at humor and banter.


Monthly Android updates everywhere please.


Yep. And tie it to the "Google Experience": no updates? Then no Play Store, no Gmail.


I'm really hoping they talk about the StarCraft II API they're working on. Looking forward to getting my hands on a tool less finicky than BWAPI.


Did Google make yet another new chat platform?


Nope, they made a box


Why, oh why, is the sound that bad? It sounds like they're adding reverb for the live sound and just using that for the stream...


It's at Shoreline Amphitheatre, so a little reverberation is kind of inevitable. Someone should develop an AI to remove reverb, though.


I don't think that would really require "AI", no?

Or is "AI" the new "algorithm", like "API" is the new "server"?


Definitely the latter.


They are actually using machine learning to reduce echo/feedback in Duo. It's not perfect but I think the technique is pretty cool.
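
For context, here's what the classical, pre-ML baseline for echo cancellation looks like: a textbook LMS adaptive filter (this is my illustration of the standard technique, not Duo's actual approach, and the signals are synthetic). It learns a model of the echo path from the far-end signal and subtracts the predicted echo from the microphone signal:

```python
import numpy as np

def lms_echo_cancel(far, mic, taps=32, mu=0.01):
    """Classic LMS echo canceller: adaptively model the room's echo
    path from the far-end (loudspeaker) signal, then subtract the
    predicted echo from the microphone signal."""
    w = np.zeros(taps)
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = far[n - taps:n][::-1]  # most recent far-end samples first
        e = mic[n] - w @ x         # residual after removing echo estimate
        w += mu * e * x            # LMS weight update
        out[n] = e
    return out

# Demo: the mic hears a delayed, attenuated copy of the far-end audio.
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
mic = 0.5 * np.concatenate([np.zeros(5), far[:-5]])  # echo: 5-sample delay
cleaned = lms_echo_cancel(far, mic)  # residual echo shrinks as w converges
```

A linear filter like this can't cope with nonlinear speaker distortion or double-talk gracefully, which is presumably part of the appeal of a learned model.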


Of course there will be reverb for everyone there, but the microphones are very close and should pick up almost no reverb...


Great stuff for Google Home, but I thought they already had almost all of this.


Is there no ATAP session at Google I/O this year?


Kotlin? Why not golang, I wonder?


Long story short: Kotlin is easy to support without substantive (any?) changes to the Android runtime, whereas Go would involve much more effort.

It's also not clear to me that there's much demand for writing Android applications in Go.


Native Android application development is totally not my community; I admit I had never even heard of Kotlin.

Actually, at the coworking space where I'm currently streaming the conference, nobody had heard of Kotlin.

edit: Ah, Kotlin runs on the Java virtual machine. That makes much more sense to me now.


Sorry, that's the context I should have provided! (Knowing that also probably explains why they cut to a shot of the JetBrains CEO in the audience, since Kotlin originated there.)


Well, it already works on the JVM and on Android, many production apps in the wild are already using it, and they avoided the problems that existing JVM languages like Scala and Clojure had in terms of running on Android.


Can you explain some of these problems? Why are some JVM languages easier to support than others?


These Google people are so full of themselves. Yet they have failed to deliver a profitable product besides serving ads.


Seems like they're doing pretty well to me...


"Have my router pw, Google"

edit: I should have emphasized the part about using the camera.


1) It already has it when you enter the password manually. 2) What do you think Google will do with your router password?


Yes. But are we really going to ingrain the mindset that taking pictures of passwords and sending them to the cloud is safe?


Tons of people already save their passwords in Chrome/SmartLock


People already take pictures of their router passwords. There's a reason Google introduced this feature; they must have done studies and discovered that this is a really widespread practice.

If anything, this makes things more secure by automatically signing you in (so you don't need to save the picture on your device and in the cloud).



