.ai is not one of the ccTLDs that Google considers generic[0].
It would be interesting to know if Google will be making .ai generic, or if they will make a special exception for themselves considering they do not allow others to change the geographical targeting of domains registered to a ccTLD.
[citation needed], since that's an extremely serious accusation -- particularly with search -- from a regulatory and antitrust perspective. That type of accusation should not be so carelessly thrown about, despite any opinions on Google as a whole. Search and business are almost religiously walled off from each other; apropos, they have, in the past, deindexed or penalized their own properties when they violate their own terms. This happened to the Chrome site in recent memory, and Danny Sullivan has written[0] about others.
If Google sites tend to float to the top for you, you're probably an avid Google user (don't forget search results are personalized) or searching for terms in which they rank highly. Extrapolating such an experience to Google taking advantage of their position to create a non-level playing field for search results is a biiiiiiig claim, given the implications. Back it up.
So when are they going to stop the "Install Chrome" popups plastered all over their properties? Or forcing manufacturers to default search to Google on Android? Google absolutely ties products together.
Almost every single search query returns search results linking to content surrounded by AdSense ads. I'm still waiting for them to de-prioritize those webpages and link to results containing little to no ads. IIRC, back when YT still ran exclusively on Flash, they de-prioritized other websites for Flash content but ignored the offensive, over-the-top ads on YT's front page. Memories fade, but at this point I hazard a guess that even Google's own employees don't believe that their search doesn't favor them.
> Memories fade, but at this point I hazard a guess that even Google's own employees don't believe that their search doesn't favor them.
I am a former Google employee, and your guess is wrong, in my experience. Most I've interacted with on this topic correctly perceive Google search results as important to maintain as neutral, given the consequences for not doing so.
This thread started as discussing search engine ranking for .ai. A claim was then made that Google always prioritizes its own properties, and the person who made that claim discarded the context of search results. We're now talking about Google+ sidebars/shopping boxes, Android integrations, and so on -- that was not at all what I'm talking about. I'm specifically countering the notion that PageRank is manipulated to support Google properties. That's all.
I'm not here to defend Google. (I am actually quite negative on Google.) I'm arresting the claim that Google manipulates search results to promote its own properties, because that's just fake news. Sorry. I am aware the EU and others considered Google+ sidebars and Google Shopping boxes as manipulating results, but they are orthogonal and only manipulate results technically. There is absolutely no code in search that adds PageRank to a property simply because it is owned by Google, and the burden is on the person making that claim to back it up.
I don't think anyone has to offer any definitive proof. Even if Google is being a total sweetheart here, it's a conflict of interest and at a minimum, it highlights the importance of strong competition in the search space. The point is to get a scenario where we don't really have to care if Google is inherently biased toward its own products or not.
Companies go about this type of self-promotion in sneaky ways, because they know such tactics won't last long if the first line of code their employees see is `if 'google' in domain: pagerank *= 5`. But it'd be naive to pretend like there aren't some people quietly attempting to promote Google's own properties within Google search results, with varying degrees of self-awareness of this fact, from "completely oblivious" (e.g., the search engineer's assumption that a Google-backed property is more likely to be the "correct" search result for a relevant query, and that therefore there is a problem if Google-backed properties are not highly ranked) to "explicitly tasked with finding subtle ways to favor Google's own offerings within the algorithm".
Large companies are necessarily masters of PR/indirection, and they will work to retain plausible deniability, especially after the MS antitrust case.
I didn't say they thought it was unimportant, just that practically speaking it never seems to be the case.
Edit: Just saw your edit, yeah, I'm broadening the topic. We'd have to shut down every single comment thread on every website if that never happened :)
One interesting thing that happens with some of my non-techy friends and relatives: when they want to search for something, they know Google is their default search engine, so they type "google" followed by the keywords into the address bar. Guess which company's services they get first in their search results?
Aside from the fact that this is covered in their terms, and that doing so would violate said terms, I don't see the issue with Google prioritizing their own properties over others.
Search is a free service offered by a private company. Users of search don't pay for it, so I don't see how monopoly or antitrust legal issues arise. As long as they are honest about what they're doing and don't manipulate ad results (which is a service that does have paying customers), the non-ad results could be generated however Google sees fit. Since it's free, Google should be able to set whatever terms it wants; if you don't like it, don't use it. No?
Isn't the same also true with .io? I'm not sure if they use any anymore, I remember google.io pointing to something but it doesn't seem to anymore. Really curious to see what the future brings for these domains.
The potential use of TPUs for training is very exciting. They say that they train floating point, but I don't see any indication of the FP precision they're capable of; perhaps I'm missing it. At any rate, I'm really excited to see resources being piled into their Cloud ML Engine product at this high rate.
I've made this comment in a couple of other threads, which subsequently veered off into other territory, so forgive the repetition, but it's a really interesting topic to me. The open-source distributed tensorflow stuff is pretty nice, but it still requires a huge amount of hand coding and tuning the machinery, reminding me quite a lot of just rolling the damn thing in MPI yourself. I'm very excited to see where distributed tf will be in a year or two, but it's a chore today.
Depending on how much these TPUs and other Cloud ML Engine developments help, I'd gladly abandon the attempt to roll it myself with the distributed tf.
The hope is that using Google's secret sauce to auto-distribute the execution graphs and associated data ingestion makes things "just work". At the moment, the documentation and examples for that are a bit all over the place and require writing models to conform to the newish tf.contrib.learn.Experiment API, which is also a bit underdocumented and under-exampled. Using it for very large datasets (say, tens of TB or more) seems to be pretty challenging at the moment (to me, at least). For a lot of use cases, BigTable seems to be the ideal ingestion engine for Cloud ML tf jobs, but there's no native C API. You can use BigTable, but you can only dump complete tables into TensorFlow rather than querying for relevant data (since the queries cost money, a 5000-core job issuing just a few queries per core would cost you a fortune, so the ability to query BigTable from the TensorFlow reader is disabled).
At any rate, I've been banging around on it for a few weeks and am really hopeful. I will follow Cloud ML Engine's career with considerable interest.
I'm not comfortable with Google having and sharing my data.
Very excited about the NVidia chips though. Would be happy to run TensorFlow with them on my own hardware - though I'm more excited about the day when client software and hardware make that easy and cheap.
On one hand, I'm definitely concerned about my privacy and sharing my data.
OTOH, I like to think of Google using my data as a form of a vote. The more data they have on me and tailor experiences using my usage data, the more useful it is and it will be designed to reflect that. So while in elections you may only get one vote for your choice of candidate, Google building on my interactions with an app will mean my voice is taken into consideration.
This is one of the reasons why I tend to share my crash/usage data with developers be it Google, Apple, Microsoft, etc.
“We will work together. Unwillingly at first on your part, but that will pass. […] In time, you will come to regard me not only with respect and awe, but with love.”
> However, between Gmail and Google search, they already have all of my data.
I think we need to change our attitude here, because I don't think this is right. I think it is a self-fulfilling prophecy. There is certainly a middle ground, and we are giving it up if we assume that companies already have everything. It is a false dichotomy to believe we either give all our information up or go live in a cabin in the woods somewhere. Data minimization, with an eye to the actual trade-offs in each situation, is key.
You raise a good point. I guess my frustration is that there basically is no good way to have truly private/encrypted email, Google aside. Ironically, having your own email server in your closet (a la HRC) might be the most private option.
You highlight that people devalue other peoples' privacy compared to their own. HRC lost an election for having a mail server of her own. (Sure, she was a government official at the time.)
I think people do generally take it as a 0 or 1. Either they decide their personal data isn't so valuable and discount its value to 0, or they reckon that Google is too large an entity to avoid, so they effectively discount their data down to 0 again. It may also be a time-sensitive matter. You only have so much life to live; how much of it do you want to spend customizing what you share with Google?
I definitely feel Google tech, phones included, should be much cheaper given the data their users give them. Their flagship devices are generally at price parity with Apple's, and Apple isn't mining your data the way Google is (to my knowledge).
You could pay for Google Apps for Work and not have your data indexed. So it's really more a question of how much money, or how much work to switch to another provider, your data is worth to you.
What exactly is so scary about indexing and so comforting about the lack thereof that makes all the difference to you and everyone else? The index is derived from the data... if you have the data you're minutes away from the index. And an index isn't even necessary to search data. Why are people okay with giving away their data as long as it's not indexed?
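To make the "you're minutes away from the index" point concrete, a toy inverted index really is just a few lines (hypothetical data, purely to illustrate how directly the index falls out of the data):

```python
from collections import defaultdict

# Toy corpus standing in for stored messages (hypothetical data).
messages = {
    1: "meet me at the airport tomorrow",
    2: "the invoice is attached",
    3: "tomorrow works for the meeting",
}

def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)
    return index

index = build_index(messages)
print(sorted(index["tomorrow"]))  # → [1, 3]
```

Whoever holds the raw data can derive this at any time, which is exactly why "we have it but don't index it" is a distinction without much substance.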
And what's the privacy issue with indexing, and heck, still showing ads? As long as they're not sharing your data, where is your privacy being violated by indexing and being shown ads when they already have the same data? That makes no sense.
Violating a contract is a breach of the civil code in my country, exposing you to financial compensation. It is very similar in the rest of the EU. So while not a crime, thankfully some countries still take that seriously.
The issue is "fuck you, I am paying you to use your service without you getting your greedy paws on my data." If they start indexing data from actual customers, then they're reading it. That's a breach of privacy and contract, plain and simple. And at that point, well, you might as well not pay for their service if it doesn't bring you anything more than a free-tier plan.
If I'm paying you to keep my private journal safe yet accessible, you bet I'm going to be pissed if you start telling me at which page something is. Maybe you just remembered a content -> page mapping, but you still read my damn private journal.
I was bringing up the contract issue somewhat separately to point out that it's not a criminal issue like the parent claimed, as far as I know. I wasn't saying breaching your contract is OK, sorry if that was confusing.
My main beef is with what we do and don't call a privacy violation. If your issue was that use of your information or identity by someone else (especially to make money) entitles you to fair compensation (or that otherwise it shouldn't be done, simply because of unfairness), I would agree with you. If your claim was that a PERSON (or their machine) obtaining any new information about you is a privacy violation, I would get that too. But you're claiming a machine can violate your privacy merely by indexing and displaying back to YOURSELF things it already knew. That makes no sense to me. No one gains any extra information about you when a computer indexes information it already has, so while it might be unfair, it simply cannot be a privacy violation.
I've decided to trust Google with my personal email, and that may be what it is. But for those who don't, I suspect the difference between "having" and "indexing for ad purposes" (the data is certainly indexed for search) is pretty irrelevant.
Google shut down my apps for work because one of our devs had multiple sign-in enabled and he had a banned play account. Google uses its bots for the ban hammer so just be careful
The dev was added as an admin to our Google Apps for Work and had multi-login enabled.
He already had a banned play account from five years ago.
The ban didn't happen overnight after he enabled multi-login. Took a few months and then got an email from Google that all related accounts and any accounts belonging to me or to this dev will be banned without any notice.
This is the response to the admin appealing the ban (note the words "associated Google Play developer account"):
"Hello Mimi,
Thank you for reaching out to the Google Play Team.
After reviewing your appeal, we have confirmed our initial decision and will not be reinstating your developer account.
Your Google Play Developer account has been terminated due to multiple policy violations by associated Google Play developer account. You may also review the Content Policy and the Developer Distribution Agreement.
Note that Google Play Developer Console terminations are associated with developers, and may span multiple account registrations and related Google services. Do not attempt to register a new developer account. Any subsequent registrations will be closed and your developer registration fee will not be refunded.
We recommend that you utilize an alternative method for distributing your apps in the future.
Please let us know if you have any other questions or concerns.
Pet peeve: the Google AI effort is the product of ${LARGE_NUMBER} engineers. This marketing page highlights a half-dozen luminaries. Not only do these luminaries also get comped ($$) one or more orders of magnitude more than the rank-and-file, but now they get the glory as well. Sigh.
This is true in all endeavors. There will always be leaders who get the majority of credit and grunts who do most of the actual work and don't get any. See a cool building? The architect gets all the credit but the people who actually built it get none.
This is why, most of all, I'm happy the iterative improvements and breakthroughs in ML are happening at research labs, public or private. When papers get published, you usually know who contributed to the endeavor.
I'm curious how you would even know? If Google is making ML breakthroughs, I think it's quite obvious they won't be immediately publishing them if they are actually commercially useful.
Right, but I'm saying once they actually hit commercially useful breakthroughs (e.g. AI game changer/black swan) they certainly won't be sharing them any longer.
Or put another way, they could currently be holding back 90% of their current research and only releasing 10%.
I'm not saying that is the case, it's just I think private research into some fields is much more opaque than folks (myself) realize. Relying on Google (or any other private company) to publish findings is likely not a path for long-term success.
Yeah, that one always gets me too, because it is so universal and so natural for the bigshot architect to proudly say "I built that," and never once acknowledge any of the guys who risked, and sometimes even gave, their lives actually building it. It's one of the last bastions of such unchallenged arrogance.
True, but the leaders have to take the scorn through the bad times with the glory from the good. As usual, the satire from the writers of Silicon Valley is on point:
But the featured staff are undoubtedly the highest performers and/or most creative in developing new ideas and technologies. Setting ego aside, I think the main goals in one's career should be working on very high value problems and enjoying every day at work. Working with a few people who are more knowledgeable/creative/etc. is a big win.
And none of them were AA or Hispanic. I think Google's top lawyer is AA, but who is the highest ranked minority technical leader in the company? Adewale Oshineye perhaps?
Regardless of how anyone feels about minority representation, it never ceases to boggle my mind that many people think about the whole issue in terms of individual examples rather than overall proportionality.
The single example of the CEO doesn't tell us anything about the demographic makeup of their engineering force, unless you think the example of the former is a good basis for inferences to the latter.
(But this is admittedly a total derail from the point of this thread.)
Again, I wasn't taking a position (eg "regardless of how anyone feels...") on what to do, in order to keep this from being too much of a derail. I was making a point about the strange way people attempt to quantify minority representation with one-off examples.
Of course, back in the 1980s all UK domain names were the other way round. UCL was uk.ac.ucl.cs if you used X25 and cs.ucl.ac.uk if you used TCP/IP. UCL was the gateway between the two worlds, and used magic heuristics to figure out which universe to forward email to. For example, if the domain started with "cs" it was a TCP/IP address and if it ended with "cs" it was an X25 address. Which worked well, right up until Czechoslovakia joined the Internet.
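For fun, the heuristic as described might have looked something like this (my reconstruction, not UCL's actual code, and the `.cs` hostname is a made-up example); a Czechoslovak domain is exactly the case where a name ending in "cs" is really a little-endian TCP/IP address:

```python
def route(address):
    """Guess which network an address belongs to, per the described heuristic."""
    labels = address.split(".")
    if labels[0] == "cs":
        return "tcp/ip"  # little-endian DNS name, e.g. cs.ucl.ac.uk
    if labels[-1] == "cs":
        return "x25"     # big-endian JANET name, e.g. uk.ac.ucl.cs
    return "unknown"

print(route("cs.ucl.ac.uk"))  # → tcp/ip
print(route("uk.ac.ucl.cs"))  # → x25
print(route("vscht.cs"))      # Czechoslovak DNS name, misrouted → x25
```

The last case is the failure mode: a perfectly valid Internet hostname under the `.cs` ccTLD gets shipped off to the X25 universe.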
"Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device.."
Does anyone else find it odd that we're so far through the looking glass this past year that Richard Hendricks' latest venture seems not only plausible but a bit mundane by comparison?
I really hope this approach is one adopted in the rest of the industry, because it would very easily allow for a transactional-privacy system to be implemented: users set a limit on who can access what data, for how long, for what purpose, etc and advertisers/tech companies still have the opportunity to gather data whilst not completely obliterating users privacy.
Both parties get what they want, and nobody loses out. Win win.
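No such industry-wide API exists today, but the grant model being described could be as simple as this sketch (entirely hypothetical names):

```python
import time

# Toy access-grant store: each grant names who may read which data
# fields, for what purpose, and until when.
class Grants:
    def __init__(self):
        self.grants = []

    def allow(self, party, fields, purpose, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self.grants.append({
            "party": party,
            "fields": set(fields),
            "purpose": purpose,
            "expires": now + ttl_seconds,
        })

    def may_read(self, party, field, purpose, now=None):
        now = time.time() if now is None else now
        return any(
            g["party"] == party and field in g["fields"]
            and g["purpose"] == purpose and now < g["expires"]
            for g in self.grants
        )

g = Grants()
g.allow("ad-network", ["coarse_location"], "ad targeting", ttl_seconds=3600)
print(g.may_read("ad-network", "coarse_location", "ad targeting"))  # → True
print(g.may_read("ad-network", "contacts", "ad targeting"))         # → False
```

The hard part isn't the data structure, of course; it's getting advertisers to honor it and making the enforcement auditable.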
In fantasy world, we'd all be renaissance men, making beautiful art, building useful products, etc.
In the real world, 99% of us would be bored out of our minds with rapidly-declining health, rapidly-declining self-esteem brought on by our unexpectedly poor handling of time, and a continuing unfulfilled search for stimulation that will mostly get filled by more mindless "entertainment" and consumerism/shopping.
Our social lives would decay as there'd be no in-built workplace community, and this is exacerbated by today's convention of small or totally childless families. We'd lose respect for one another. Productivity would plummet as few people would have any incentive to work cooperatively anymore (and if you can't imperil someone's paycheck, they will usually not contribute; for all of the open-source developers, how many are only in it because their paycheck and/or job prospects are somehow tied to it, and even including that number, how many more closed-source developers are there?). Volunteers don't take firm direction well, and that kind of direction is necessary to keep a successful, efficient enterprise running.
There are plenty of windows into this imaginary world without obligatory labor today, and they do not make it look like a pleasant place.
I do sometimes. I mean, obviously there are some things I absolutely need to do, but other than that, I have a decent degree of freedom. I feel like it's worth thinking about what I should do rather than just default to sitting in front of a TV or browsing Reddit.
Well, I do! When I wake up I spend a minute or two thinking of what I need to do today in my personal relationships, what are the one or two most important work tasks I need to address, and generally plan my day. I then meditate for about 5 minutes and then get up and try to enjoy my day.
I was sorta hoping with the announcement of Google.ai that they would add the .ai extension to Google Domains. Right now there are so few, and very terrible, registrars that handle .ai. Like 101domain.com, which can only make nameserver changes for you during their 9am-to-5pm business hours on weekdays.
.ai is a ccTLD that is run by, I believe, a single person in Anguilla. I do not think it is run on a platform that supports EPP, the standard protocol that is used by registries and registrars to handle automated purchasing of domain names. Given that, it would not be possible to handle .ai domain names on Google Domains. 101domain.com is likely doing a manual process for registering .ai domain names that goes through https://whois.ai/
(Source: I am the eng lead of Google Registry, which runs Google's TLDs.)
> 101domain.com is likely doing a manual process for registering .ai domain names that goes through https://whois.ai/
Gandi seems to do this as well. I recently tried to buy one through them and they quoted $600, and required a corporate subscription. 101domains had the same one for a little over $100, which implied to me that there's a paper process with people somewhere and each registrar is pricing accordingly (since all .ai are $100/2yr flat).
Good timing on your comment, too, since buying-then-transferring seemed like a good strategy until I read it. Didn't realize .ai was so fundamentally manual. Although[0]:
> We expect that about mid 2017 we will support EPP and other registrars.
In the late '90s, I knew the guy who had somehow secured and was running the .so ccTLD single-handedly. In the age of $70/year register.com bills, it seemed insane that a single person could be responsible for an entire TLD. I'm glad to hear that some things on the internet are still run by people.
A large number of ccTLDs are still run by a single random person deep in the bowels of an IT department at some university. I was just at an ICANN meeting in Madrid last week and met some of them.
Also, we (Charleston Road Registry) are definitely not a small company, but we run ~45 TLDs with a team of engineers much smaller than 45, so we have a ratio of TLDs to engineers significantly higher than 1:1.
Well, that was really interesting to learn. Thanks! It's pretty amazing that it's run by a single person. Many companies are relying on / using only a .ai domain, which sounds even riskier when you put it that way, but maybe I'm just paranoid :)
Fundamentally a registry is just a database that contains the information you see in WHOIS queries along with IP addresses. Writing one with a simple front-end would be an easy weekend hackathon project for an experienced developer. You definitely couldn't implement all of EPP that quickly, but a simple CRUD web application for domain names, sure. There are many dozens of different registry implementations out there, most of them homegrown and running just a single TLD. And let me take a moment to plug our registry software, the source code for which is available here: https://nomulus.foo
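To illustrate how little a minimal registry really is, here's a toy in-memory CRUD version (obviously nothing like Nomulus; no EPP, billing, DNS, or persistence, and all names are made up):

```python
# Toy in-memory registry: the WHOIS-style record store described above,
# minus everything a production registry actually needs.
class Registry:
    def __init__(self, tld):
        self.tld = tld
        self.records = {}

    def create(self, name, registrant, nameservers):
        fqdn = f"{name}.{self.tld}"
        if fqdn in self.records:
            raise ValueError("domain already registered")
        self.records[fqdn] = {"registrant": registrant, "ns": list(nameservers)}
        return fqdn

    def whois(self, fqdn):
        # Returns the record, or None if the name is unregistered.
        return self.records.get(fqdn)

    def update_ns(self, fqdn, nameservers):
        self.records[fqdn]["ns"] = list(nameservers)

    def delete(self, fqdn):
        del self.records[fqdn]

reg = Registry("ai")
reg.create("example", "Alice", ["ns1.example.com"])
print(reg.whois("example.ai")["registrant"])  # → Alice
```

Bolt a web form onto that and you have roughly the "weekend hackathon" version; the years of work are in EPP, DNSSEC, escrow, billing, and abuse handling.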
It's easy to use, though you need to pay $100 just to get an account and then $100 per domain per 2 years. The interface is bare-bones and you can't do much more than set nameserver records, but it does the job.
Anguilla charges $100 for a two year registration (I'm not sure you can register a domain name for any other period of time). The extra $60 is 101domain's markup.
Implied here is the emergence of a new business model: develop powerful custom hardware that you do not sell, but only make available as a service in the cloud. This way you get multiple layers of lock-in.
I suspect it's the opposite. Open-source TensorFlow will never be as fast as the TPU cloud with its custom processors, so it won't be as suitable for serious high-volume production stuff. It's more like a hook into the TPU ecosystem.
If TensorFlow is open source then there's nothing stopping AMD, NVIDIA, Intel, etc. from making chips that can integrate with TensorFlow. Other than the fact that it's hard, of course.
Is it though? Facebook has been pushing hard for open hardware and open datacenter designs. I'm glad they remained an independent company if only to act as a counterweight to Google's dominance in developing customized hardware.
Now, what was that quote again? “We have only bits and pieces of information. But what we know for certain is that at some point in the early 21st century, all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to AI.”
This is because Google is really pushing the image of being the "place AI is happening." They're doing the same amount of ML stuff they've always been doing, but it looks bigger because they've focused much of their marketing on it and are attempting to associate their work in people's minds with the kind of "AI" that's just futurism for now.
I think the reason they're pushing so hard, actually, is that they're in a dogfight with IBM (i.e. Watson) over who enterprises will call if some VP gets the idea that they want to "solve problem X with AI."
The timeline is more they hired Hinton, got a bunch of neural network street cred as a result, and then hired a bunch of Hinton's acolytes, and then acquired Deepmind. From the outside, it seems Dean or some other higher up had the foresight / luck to bet big on neural architecture research just as the engineering started becoming practical.
In my experience, Google is getting better at finding more general information and worse at finding specific information.
It also tries to "help" too much with fuzzy matching, which starts to make it useless if you are looking for a less common thing. For example, if you search for "nmake tabs vs spaces", it returns a bunch of results for GNU make and flame wars about tabs vs. spaces instead of nmake-specific info regarding the usage of tabs or spaces in nmake makefiles.
Yes, they seem to be dropping keywords that overly restrict results which is ironic since those keywords are oftentimes the most important due to their specificity.
I find myself preemptively using quotes more and more with Google. I kind of wish their logic were segmented by user type.
From my experience, it depends on the type of things you're searching for. If you're doing a search for a local restaurant or something in the news, you don't even realize how good Google's become because it basically gets you exactly what you want in your first result. However, if you're trying to find something more obscure, older, less contextually relevant, or using keywords, Google can get very frustrating because it's trying to contextualize something that shouldn't be contextualized.
It's 100% in their interest to find exactly what you're looking for as quickly as possible.
Oldie-but-goodie article about when Mayer was experimenting with search results; the conclusion was that minimal, accurate choices win. People actually spent LESS time when there were too many results.
Unfortunately, this doesn't seem to be for me, even though I'm really interested in AI and am currently working on an AI project (look at my profile if you are interested). I wish I could run my own AI algorithm rather than just use theirs. It would probably be cheaper to just buy my own Xeon computers. Training is really what takes most of the compute power.
We just bought a used Xeon HP workstation for a build server. It is 7 years old. But a 24-core new workstation would be much more expensive. We're putting in some SSDs to speed up build times, and it came with 24GB of RAM, which is enough for our use cases. The price? $400 or so. It even came with a Quadro 5000 video card, though we weren't planning on GPU compute for this particular box.
The one downside of this box is that the power supply is totally proprietary to HP. Well, the motherboard too. So if one of those craps out, you're done, unless you can locate a spare cheaply.
It's also reeeeaaally heavy and draws a lot of power.
> It would probably be cheaper to just buy my own Xeon computers.
What does Xeon have to do with training? You need a fast GPU (or apparently TPU). The CPU is relatively unimportant. Furthermore, you can roll your own algorithms (i.e. architecture) in TensorFlow. You don't have to use "theirs".
Well, I'm designing my own AI engine from scratch, not using any of the current machine learning techniques, i.e. convolutional neural networks, etc. Right now the way my code works is that it uses cores to make the whole training faster, and Xeons have lots of cores. If I could use GPUs I would; maybe I can and I just need to figure it out. For now the simplest thing to do to make it faster is to add more cores.
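For what it's worth, the usual way to use more cores here is to shard the training data across workers and combine partial results. A generic sketch (nothing to do with the poster's actual engine; it uses threads for brevity, though for CPU-bound pure-Python work you'd swap in a process pool because the GIL serializes bytecode):

```python
from concurrent.futures import ThreadPoolExecutor
import os

# Generic data-parallel pattern: shard the training set across workers,
# compute a partial result per shard, then combine the partials.
def partial_gradient(shard, w):
    # Toy squared-error "gradient" for y = w * x on one shard of (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in shard)

def full_gradient(data, w, workers=os.cpu_count() or 4):
    shards = [data[i::workers] for i in range(workers)]
    shards = [s for s in shards if s]  # drop empty shards
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda s: partial_gradient(s, w), shards)
    return sum(parts) / len(data)

data = [(x, 3 * x) for x in range(1, 9)]
print(full_gradient(data, w=0.0))  # same value as a single-core pass
```

The combine step is a plain sum here, which is what makes the sharding embarrassingly parallel; anything with cross-example dependencies needs more care.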
Now, you may think I'm crazy for designing something from scratch, and I may be indeed, but how else can we discover/invent something totally new? I think I'm actually onto something, given that the early results look quite promising.
I actually created a video demonstration of my AI engine, a link to which you can find in my profile, but I have done a crappy job of explaining its strengths [1]. Too much work, so little time.
Neat. I also have built my own system from scratch, so I don't think it's crazy at all.
Where I get stuck is figuring out new challenges to throw at the system. I find it funny that there is lots of discussion here about performance and tuning, and not so much about practical applications and valuable problems to solve.
The whole point is that you can run your own algorithms; that's why it's on Google Compute Engine. As far as I can tell you should be able to avoid TensorFlow even, although I suspect that'll be a pain.
Typically those sit in the airstream that a bunch of fan cartridges pull through the enclosure. So they run from the bottom of the case all the way to the top otherwise the air would flow around them instead of through them.
Doesn't appear to be. I'm wondering this myself. For web apps, cloud services are usually the better choice vs. in house servers. But, with ML, the pricing will dictate that more than anything.
The comment is a fun poke. Please don't take it literally! NVidia's new Volta GPU has Tensor Cores that I think are very similar to what was in the first-generation TPU. I read the TPU papers and blogs published recently and I am seriously impressed by TPUs!
I feel like there have been a series of announcements about AI toolkits and work in the last week. Is there some collaboration, is this a special week?
"We're currently testing Federated Learning in Gboard on Android, the Google Keyboard"
Thank you for reminding me why I don't use Gboard and use the BlackBerry Priv's fine keyboard instead. The obsession with prediction in our culture is absurd.
Doing something myself is one thing, having it done for me is convenient, having it done for me before I asked is even better (I do not mean in all situations, only in some, and only for some things, etc...)
And yet auto-suggest on Gboard still doesn't work when you use the Google Search widget, and it also has different backspace behavior than normal typing.
The device has been pleasing. Good camera, nice slide out keyboard, nice display, very very fast charging capability (this was a surprise), snappy performance even with lots of apps running. I run it on Net10 and it appears to switch networks with ease (I have coverage everywhere, even rural Ohio).
Concerning security, I think it's basically an encrypted Android. The custom permissions feature is nice too, but I think that's on generic Android as well. It would be nice if there were an option to have no Google Play apps, but I have all permissions off on them anyway (except Gmail and Drive).
I might be skeptical, but every single AI experiment or showcase I see either online or on google experiments list is nothing that impressive. The whole AI thing is so overhyped these days...
Is it overhyped? I feel so. Every startup seems to have a machine learning engineering position open, and for what? I have a friend who was hired just to do data analysis (which is necessary for machine learning, because you need a clean dataset that can be consumed for training), but beyond a couple of simple rules, he's not doing the kind of machine learning the cool kids are celebrating. So the hype is everywhere, but everyone's job is different, and a lot of people don't do the "cool" AI stuff.
Most importantly, AI and machine learning are not synonyms at all. People should regard AI as the overall goal: wanting computers to do something really smart on their own, with very little to no instruction.
Going back 5-7 years to when Siri first came out, it made quite a splash. But I honestly never found a compelling reason to use Siri until I started driving and needed to call somebody. The problem is that I have an accent and a lazy tongue, so I slur words, and Siri does not always understand what I want to say. I am surprised that the voice-to-text feature in Messages is quite accurate (it can auto-correct by learning from the next phrase, and it understands pauses so it waits for you to speak again), but Siri isn't. So while I appreciate virtual assistants, their capabilities are very limited to a set of commands.
I do feel the AI community has made some good progress; from beating Mario, to beating top Go players, to self-driving cars, the technologies supporting these initiatives are getting more sophisticated than ever (and the tooling is getting competitive too, with too many choices). I am working on some simple home automation involving NLP (for speaking to the program), image recognition (who's in the house), and a couple of self-executing routines, such as making sure all lights are off if no one is in the house, or reminding me of a doctor's appointment every Wednesday. That's not AI; it doesn't do anything beyond what I programmed it to do, and it doesn't try to survive or better itself.
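The "just rules, not AI" distinction above can be made concrete with a tiny rule engine: every behavior is an explicitly programmed (condition, action) pair, and nothing learns or generalizes. All names here are invented for illustration, not taken from any real home-automation system.

```python
# A minimal rule engine: behaviors are hand-written (condition, action)
# pairs evaluated against a state snapshot. Nothing here learns.

class RuleEngine:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def tick(self, state):
        """Fire every rule whose condition matches the current state."""
        return [action(state) for condition, action in self.rules
                if condition(state)]

engine = RuleEngine()
engine.add_rule(
    lambda s: not s["occupied"] and s["lights_on"],
    lambda s: "turn off all lights",
)
engine.add_rule(
    lambda s: s["weekday"] == "Wednesday",
    lambda s: "remind: doctor appointment",
)

actions = engine.tick({"occupied": False, "lights_on": True,
                       "weekday": "Wednesday"})
print(actions)  # both rules fire
```

The point stands: present the engine with a state its author never anticipated and it simply does nothing, which is exactly the gap between scripted automation and the "AI" of the headlines.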
Yes and no. I mean the demo is great, but can I actually have one? No? Well then I can only be so impressed.
I know, they're coming soon, but I've been waiting for years and at some point the enthusiasm wanes when it's been right around the corner for several years.
Hey, maybe they're trying to solve a problem that's harder than anyone expected.
I'd love a place where I can grok and find only useful info. Comments like this don't help.
> The worst thing to post or upvote is something that's intensely but shallowly interesting: gossip about famous people, funny or cute pictures or videos, partisan political articles, etc. If you let that sort of thing onto a news site, it will push aside the deeply interesting stuff, which tends to be quieter.
It's true that the comment was a bit fluffy, but it's also the sort of whimsical tangent that is in the spirit of this site, which is intellectual curiosity.
Intellectual curiosity often takes up seemingly trivial details and plays with them for no particular reason. Mostly nothing important comes of it, but sometimes something really does. In any case it would be a big mistake to try to push that kind of thing out of here—it's well within the scope of what HN exists for.
It would be one thing if that was the only fluffy comment. The entire collection of top level comments is nothing but fluff, with only one or two exceptions. The vast majority of comments on this page are completely off topic. There is practically zero technical merit to any of the threads, even the on topic ones.
I think it was unfortunate that this generic landing page was chosen as the golden submission and other more specific and more technical submissions (such as https://news.ycombinator.com/item?id=14360653) were duped off the front page. If a more specific article had been chosen instead, we might have had a more focused technical discussion rather than domain name commentary and the same old non-specific privacy concerns discussed ad nauseam every day.
> it was unfortunate that this generic landing page was chosen
You know what, you're exactly right. Generic pages lead to generic discussion and that is uninteresting. We're well aware of that effect, so usually make a point of penalizing generic portal-style pages or changing the HN URLs to more substantive ones. For some reason we missed this case.
It's interesting as yet another demonstration of how reliable that effect is, I guess. Initial conditions have a huge impact on HN threads. First comment is another.
People are welcome to downvote my comment, though the quote you cite is referring to links, not so much comments.
> The test for substance is a lot like it is for links. Does your comment teach us anything? There are two ways to do that: by pointing out some consideration that hadn't previously been mentioned, and by giving more information about the topic, perhaps from personal experience. Whereas comments like "LOL!" or worse still, "That's retarded!" teach us nothing.
As you may have noticed, there are a couple of comments regarding the use of the .ai TLD. A legitimate conversation can be held on whether or not .ai should be generalized for "Artificial Intelligence", possibly to the detriment of people in Anguilla.
The only site on the internet that will meet your requirements of "a place where I can grok and find only useful info" is one you write yourself. What you consider to be "useful info" is going to differ from what other people think.
And, of course, you should consider that your comment is, if anything, a more egregious example of a useless comment, detracting from the conversation.
You can hide comments with the [-] symbol. Use it.
Some advice: When you're new to a place, it's better to stay quiet on the sidelines until you figure out the norms. Calling out the people who've been around much longer does not make a good first impression.
Mini-moderation (and, FWIW, complaints about voting) tends to be received worse around here than the infractions it calls out. Downvote (you can't yet; you can at 500 karma) and move on.
Bringing the benefits of AI to everyone... even if it's against your will... this will be used for evil. Mark my words. Totalitarian governments of the future will leverage this technology, combined with controlling the population's political discourse. Don't believe me? It has already happened on a massive scale, both in the last election and before that: Facebook Manipulated 689,003 Users' Emotions For Science.
I think you seriously underestimate the people, and seriously overestimate the government.
The point being, the harm the government is doing right now is obscured from the people, but with AI trawling all public data, these issues can be revealed much sooner.
I find this naive. One day every 2-4 years the government answers to the people. Every other day it's story after story of the government bending to the will of powerful, wealthy special interests.
Giving each side force multipliers doesn't even the playing field, it makes the absolute gap even larger.
Firstly, you're incorrect that the government answers to the people only on elections. They lose plenty in the courts against ordinary citizens.
Secondly, your point would be valid if the force multipliers applied equally. This likely isn't true.
So which side gains more advantage? If the government could sufficiently obscure their actions such that the data weren't available to analyze, that would give it the advantage. However, the government is constrained by certain transparency requirements that don't apply to citizens, so by default, citizens know more about their representatives than their representatives know about their citizens.
> Firstly, you're incorrect that the government answers to the people only on elections. They lose plenty in the courts against ordinary citizens.
The executive branch losing a court case isn't answering to the people; it's answering to the judicial branch. Being granted explicit permission as a citizen to do something you legally should be allowed to do isn't gaining anything at all. An innocent person being allowed to remain innocent, again, isn't giving anything to citizens. Those are basic rights.
> If the government could sufficiently obscure their actions such that the data weren't available to analyze, that would give it the advantage
The government already does this across the board; I'm not certain how you could argue otherwise. Taking an issue in the news recently: we still don't have national rates on use of force by police across the country. As in, literally no one knows what the police are doing, because centralized records aren't kept and every area does it differently. Police departments have fought body cameras. They have fought enabling citizens to even see how they are executing their jobs.
I appreciate your optimism and wish I shared it. I think you overestimate people's ability to care, understand, and do anything about any wrongs the data uncovers.
Not all people, but enough people. This sort of optimism is precisely the reason open source exists. There are plenty of data scientists who would crunch some numbers in their spare time for similar reasons.
Statisticians already do this with elections around the world, for instance.
Your opinion of someone's comment doesn't add anything to the discussion either. In any case, the linked article is a marketing blog, but it got voted up anyway, because Google. So I suppose the discussion is about Google branding and marketing, and their reputation and morality are fair game, IMHO of course.
Facebook, Google, Twitter and Reddit are probably the biggest threats to our democracy in the world. They are highly trafficked sites that huge populations get all of their information from. They are also staunchly only in favor of one political party and have been caught abusing their power to favor their political party of choice. Twitter suppresses hashtags, Facebook filters posts, Google only sends news notifications for news they approve of, Reddit openly suppresses subreddits with opposing views.
This is empirically not true, especially if you look at the policies of and donations by Google, Twitter, and Facebook.
Google donates to both parties; Twitter obviously takes a very laissez-faire attitude toward the Right's voices, including the POTUS, and its COO recently suggested that if Trump canceled briefings, he could take questions directly on Twitter. And the evidence and rumors continue to swirl about how multiple Republican Party candidates' campaigns used Facebook's own incredibly specific ad targeting, which Facebook did not revise until after the election (cf. Cambridge Analytica, contracted by Cruz and then Trump to "manage" the election outcome).
Your examples do not mean much when it comes to actual influence and its use. Google relies on blatantly left-leaning "fact checkers", Schmidt was very active in HRC's campaign, Twitter actively censors Right-leaning hashtags and users [1], and Facebook censor(s/ed) conservatives [2]. Do you really think actual censorship and suppression (while claiming otherwise and pretending to be objective) is less important than letting Trump take questions?
There's no such thing as agnostic fact checkers. Facts are complicated by context and anyone who claims to objectively "check" them is selling you snake oil. For example, the lead Snopes fact-checker was previously a liberal blogger. Guess which way their fact-checking skews?
We're all human and we have our biases, whether we like it or not. It doesn't take much effort to tweak an assertion from "mostly false" to "mostly true"; it just depends on which one the writer favors. My BS detector goes haywire whenever someone claims to offer non-subjective anything. tl;dr: investigate facts and claims yourself, and never trust those who have power over you, even if you voted for them!
Donations do not imply liking a candidate or party. Large companies will always donate across the board to keep in favor with whoever wins. The government has a huge amount of control over the fate of these companies. That is partly why they have a vested interest in controlling elections and thought.
Twitter isn't laissez-faire with right leaning voices. They actively suppress them. They remove hashtags from trending daily. They mark posts as containing explicit content. It's getting ridiculous how much all of these companies are trying to silence the right.
Facebook will take almost anyone's money for their ads. But they were also caught this last election cycle actively suppressing conservatives. It got so bad that Zuckerberg had to come out and do damage control, inviting conservative commentators to a sit-down, or risk pissing off half his US userbase.
[0]: https://support.google.com/webmasters/answer/62399?hl=en&ref...