This is a better product than the original. They added one of the most requested features (audio out) and didn't remove anything important (unless you don't have a better plug-in speaker system).

The biggest oversight now is that it can't work together with an existing Echo: Amazon is making us order these _using_ an Echo... but the two devices don't communicate at all and require separate wake words. I wanted this as an added mic for my existing system, not as a new, independent system.

Big step in the right direction though.


Connecting them will be as simple as a future software update. Amazon's challenge right now is to get the hardware in place before its competitors (Nest, Apple) - and it seems to be pressing hard to get a wide range of devices in every room of the house.


I agree with you on this, but generally they'd have an easier job of it if the devices worked as a connected mesh rather than as independent controllers. I can't say for certain, but it doesn't seem like the added engineering time would be that much greater.


Congrats guys, this is pretty exciting. We've used Unsplash in a lot of our projects (some pretty heavily) and it's always nice to see that the biggest source of original CC0 photos is alive and growing strongly.

Keep up the good work.


Hey HN, co-founder of Glyphs here. We're trying to make SVGs much, much easier to use. The SVG format is in a similar place to where font embedding was years ago when TypeKit was founded (somewhat complicated, multiple best practices depending on usage scenario, etc.), and we think there's a big demand for SVG that works for anyone.

Our app hosts SVG content for you, includes a library of our own SVG imagery and icons, and delivers everything via cacheable JS injection served through MaxCDN. Basically: add one line of code to your site and you get all of the benefits of SVG, one cacheable HTTP request, and 99% browser compatibility (we have a PNG fallback built in).

I would absolutely love to know what you guys think.


Very cool! Love the inclusion of SVG and not just canvas; I'm personally very invested in that format, so it's nice to see the tech being pushed forward.


...most chart libs use SVG. Besides Highcharts (not open source), I have yet to see an SVG-based charting lib that is fast and lightweight. d3.js and chart libs based on d3.js (like plot.ly) are very heavyweight in comparison (more than 100KB of minified JS code; plot.ly is a huge 1.05MB: https://github.com/plotly/plotly.js/blob/master/dist/plotly.... )

SVG wasn't supported on Android until version 4, and only Internet Explorer/Edge is really optimized for SVG (thanks to their VML investment). Chrome, Safari, and Firefox, on the other hand, render canvas much faster. (Only relevant if you want to render thousands of objects on a battery-powered device.)


Also, canvas support can be turned off. We have clients who run the latest version of Chrome, but on VMs where the system admins have canvas turned off (no GPU rendering at all). Out of our control...


Can't canvas be rendered on the CPU?


It's been pretty clear for a while now that data has enormous financial and strategic value. But that doesn't mean code _isn't_ the future. You really don't get much value out of one without the other, at least in an age where most of our data is not easily understood.

The biggest issue with our learning algorithms is that they are incredibly complicated and require a high level of mathematical understanding. The number of people driving machine learning forward is small simply because it is such a difficult subject. There are many more people aggregating large and interesting collections of data. I think that by releasing TensorFlow, Google is encouraging data collection built around its software: making it easier for the majority of people to benefit from machine learning while ensuring the continuation of Google's own product, code, data collection, and ecosystem.


Excellent read. It's especially interesting to try to figure out all of the behind-the-scenes reasons for writing this essay: the timing, the content, etc.

My co-founder is a woman and we have a somewhat similar dynamic: she doesn't write as much and doesn't like the publicity or arguments that sometimes come with a position in business, but at the end of the day she helps create a feeling of family on our team. I don't know if it is gender-based or just her personality, but it's nice to see someone in a similar (albeit much more successful) position getting some recognition.


I always love seeing clever ways to be random. I hadn't considered this one, but it's a pretty good idea, and I'm sure there are even better ways to run with the basic concept (maybe analyzing the color values of recent Pinterest or Instagram photos).

If you can find a private source of community-driven randomness, that'd be even better.


I wonder whether pictures would be a good source. I could see strong leanings towards blue (sky, water) and green (vegetation), and maybe also grey (concrete, asphalt). It would be interesting to do a collective histogram over a wide array of images and see whether colors tend to average out or whether there are clear peaks.
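
A rough sketch of what that collective histogram could look like (Python with Pillow/NumPy; the folder name and JPEG glob are just assumptions for illustration):

    # Accumulate a per-channel color histogram over a folder of photos to see
    # whether colors average out or cluster around blues/greens/greys.
    from pathlib import Path

    import numpy as np
    from PIL import Image

    def collective_histogram(folder, bins=32):
        # Sum a (3, bins) RGB histogram over every image in `folder`.
        totals = np.zeros((3, bins), dtype=np.int64)
        for path in Path(folder).glob("*.jpg"):  # hypothetical image set
            pixels = np.asarray(Image.open(path).convert("RGB"))  # H x W x 3
            for channel in range(3):
                counts, _ = np.histogram(pixels[..., channel], bins=bins, range=(0, 255))
                totals[channel] += counts
        return totals

    hist = collective_histogram("photos/")
    print("dominant bin per channel (R, G, B):", hist.argmax(axis=1))

If the blue and green channels dominate across many images, that skew would eat into the entropy you could extract from naive color sampling.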


There's still plenty of room for enough entropy to power most use cases. Even two people trying to take identical pictures should have enormous differences when analyzed at the pixel level.



That's a very intriguing idea; I hadn't considered the possibility of looking at public/private photos and sampling color values across them to get some randomness. Private is certainly the key. With something like Birdseed, the randomness is totally public, just as it is when getting randomness from atmospheric data. If someone figures out which wisps of clouds you are sampling or what search term you are using on Twitter, then the jig is up!


Other sources of randomness are https://en.wikipedia.org/wiki/Special:Random or https://news.google.com. The URLs could be parameterized to request a random language, too.


Well, for the Wikipedia link you'd just be piggybacking off their random algorithm rather than consuming organic entropy. Regardless, they have some information on how the page is chosen here: https://en.wikipedia.org/wiki/Wikipedia:FAQ/Technical#random


Maybe that'll be the next entry in the "Twitch Plays" series!


Interesting changes. It seems like the trend of pulling items out of icon-based menus and into persistent nav is gaining speed.

I'm not sure how I feel about the full-width pages yet... it's easier to read commit messages, but stylistically I did like the icon menu on the side. The narrower version also seems ever-so-slightly easier to read, but it's hard to say without trying it first. Looking forward to demoing it once the roll-out starts.


We've had to deal with an onslaught of free users at our startup https://glyphs.co/ over the past few weeks: last week we took in something like 4k new user accounts.

I'm definitely a big fan of freemium, and I specifically like limiting accounts by features (rather than by usage), but I think it's 100% imperative to control the rate at which new users are given access to your system. We're currently in beta, which makes this easy (we only let users in at our own pace), but in your case (given the amount of processing required) I think it would have been very advantageous to build a waiting line for the free accounts and let people wait for a bit. That builds demand, lets you control user flow, and lets people who are very interested pay to skip the line. Anyhow, just my thoughts; obviously every scenario is different and you guys know this area best.


Counterpoint: this can backfire. Scarcity can make the free plan be perceived as having a value greater than free.


I like the "wait in line" idea. :) Something for us to test down the road.


"The beauty of the market is that we allow people to be Bayesian" [...] "People come in with some prior belief, but they can also follow prices to see what other people believe and may update their beliefs accordingly [...] participants in the market could focus their bets on the studies they felt most sure of, and as a result, rough guesses didn’t skew the averages as much."

It certainly isn't a foolproof method of increasing accuracy, and it does favor a theory's popularity over other factors, but overall it's probably a nice layer of data to consider adding to the mix.


> "The beauty of the market is that we allow people to be Bayesian"

Yes, this is the critical piece. The results of the Reproducibility Project were not remotely a surprise to Bayesian observers. People like Gelman have been pointing out for ages (and I mean back to the 1960s) that the prior probabilities in these fields are low and that, necessarily, a lot of the results were false positives. With the rise of meta-analyses, it is possible to have informative priors for particular fields of psychology or for psychology as a whole, which would let you make much better predictions about whether a result was real. But you can't use these in papers - authors are heavily biased towards using procedures or flat priors which are uninterpretable or grossly overestimate the evidence, and if you try to use any of the informative priors or more advanced models, they'll nag you to death with a thousand objections and complain about double standards and subjectivity and how this time is different and (ironically) bias. So for the most part, there's not much to gain in academic research.
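
To make the low-prior point concrete, here's a toy base-rate calculation (the prior, power, and alpha below are illustrative assumptions, not estimates for any particular field):

    # Illustrative only: with a low prior on real effects, a large share of
    # "significant" findings are false positives. Numbers are assumptions.
    prior = 0.10   # assumed fraction of tested hypotheses that are actually true
    power = 0.80   # assumed P(p < .05 | real effect)
    alpha = 0.05   # P(p < .05 | no effect)

    p_sig = prior * power + (1 - prior) * alpha
    p_real_given_sig = prior * power / p_sig
    print(f"P(real effect | significant) = {p_real_given_sig:.2f}")  # ~0.64

An informative prior over a field's base rate is exactly the kind of input a market lets participants act on, even when it can't appear in the papers themselves.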

But in a prediction market, you don't have to listen to the self-serving excuses or explain your reasoning, and there's something to make it worth your while.


Frankly, considering all the theoretical advantages of a prediction market in terms of encouraging people to influence the outcome in proportion to their confidence in the result, the fact that they were wrong 29% of the time, only 13% less than a simple blind binary survey of a pool of psychologists, isn't hugely impressive[1]. Scientists' goal, after all, is to reach some generally accepted view of why a result turned out a particular way and in what circumstances a different result might be yielded, which is more nuanced information than can be conveyed by a simple market price. Asking a pool of scientists how confident they are about a particular survey's replicability and why - which is what the hypotheses are fundamentally all about - conveys more information than the prediction market. It's more useful to know that experts' doubts over replicability are linked to a specific survey design feature than it is to know that the market's predicted probability of replication is 67.2%. And financialisation of results might be actively unhelpful when it comes to debates over methodological shortcomings of replication attempts and the pertinence of factors that were different between the studies. That's especially the case for a discipline like psychology, where setting up absolutely identical conditions for a retest is a practical impossibility.

[1]and of course it hasn't been reproduced yet ;-)


It isn't foolproof, but there is a lot of research into the phenomenon that groups of humans are pretty good at predicting outcomes (much better than most individuals are). I forget the math behind it, but it makes a lot of sense mathematically.

Here's a book all about it: http://www.amazon.com/The-Wisdom-Crowds-James-Surowiecki/dp/...
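
A quick toy simulation of the effect the parent is gesturing at (the noise level and crowd size are arbitrary assumptions): averaging many independent, unbiased guesses shrinks the error roughly as 1/sqrt(N).

    # Compare a typical individual's error with the error of the crowd average.
    import random

    TRUE_VALUE = 100.0
    NOISE = 20.0        # assumed per-person guessing error (std dev)
    CROWD_SIZE = 500

    random.seed(42)
    guesses = [random.gauss(TRUE_VALUE, NOISE) for _ in range(CROWD_SIZE)]

    individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / CROWD_SIZE
    crowd_error = abs(sum(guesses) / CROWD_SIZE - TRUE_VALUE)
    print(f"typical individual error: {individual_error:.1f}")  # roughly 0.8 * NOISE, ~16
    print(f"crowd-average error:      {crowd_error:.1f}")       # typically well under 1

The caveat, as with prediction markets, is that the guesses have to be reasonably independent and unbiased for the averaging to help.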


Very true. There's another one floating around somewhere about how good we are at estimating the IQ of other people. Pretty interesting.

