
3 that I always watch for:

- Lenny's newsletter. While lately it's become mostly about how to succeed as a PM in Big Tech, he occasionally covers juicy startup tactics. For example, "What is good retention?" was solid gold.

- Casey Newton's Platformer. Balanced and insightful coverage of breaking industry news.

- Data Analysis Journal. Goes deep on many growth topics, with tons of real SQL code from an experienced practitioner, which is extremely hard to find.

https://www.lennysnewsletter.com/ https://www.platformer.news/ https://dataanalysis.substack.com/


A rule of thumb I sometimes use to assess products, including ones I've built:

Looking back at the last year, are you (or your users) happy with the time spent using the product? Do you/they regret it?

Juicing short-term engagement can be effective for startups, but it isn't everything, and doesn't necessarily lead to lasting value.


Everyone seems to be missing the obvious reason they want you to view Wrapped in the app: they want you to share it to your stories on Instagram, Snapchat, etc., which you cannot do on the web.

When Wrapped came out a few days ago, virtually every person I follow on IG had shared their Spotify Wrapped. It's phenomenal marketing.


Great interview with Nick about this and other topics:

https://www.joincolossus.com/episodes/20135760/kokonas-know-...


I would call its style unorthodox rather than nonhuman. It still plays common josekis (standard opening sequences) but often chooses uncommon variations. Its mid game is full of startling moves backed by VERY good reading. There's definitely still discernible strategy that us mortals can learn from.

If I recall correctly, the version that beat Lee Sedol was trained on amateur games plus self-play. My guess would be that this new version relies more heavily on pro games.


> My guess would be that this new version relies more heavily on pro games.

Unlikely, since AlphaGo can now generate large numbers of "pro quality" games from scratch. I think it's far more likely it is an autodidact at this point.


They solved heads-up limit poker in this manner recently. They claim that the chances anyone can beat this computer in the long run are now infinitesimal.

http://poker.srv.ualberta.ca/about


Still a long way to go before beating no-limit hold'em, I'd assume.


A group from CMU appears to have solved no-limit heads-up hold'em. It's only a matter of time (and compute power) for a full ring game.

No-limit is far more difficult than limit due to the risk of catastrophic failure. A Nash equilibrium robot won't make any money. A robot must identify a weakness in you, then deviate from equilibrium to exploit your weakness. So long as you're playing deep stack, you could simply play out the Bertrand Russell chicken story (echoing David Hume): the farmer feeds it every day, so the chicken assumes that this will continue indefinitely. One day, though, the chicken has its neck wrung and is killed. It's the "maniac" style. Pretend to be an idiot that plays too many hands. Don't lose your shirt. The robot will learn that you're always bluffing. Eventually you have the nuts and you take everything.


If you have a deep stack you can bluff, but your chances of winning aren't high if you don't have the nuts after all. You can only lose so many times before it becomes the martingale strategy.

This especially doesn't work against multiple opponents.


Well, I said you should be careful :-)

That's a good strategy against a bad robot, not the latest batch.


The version that beat Lee Sedol was trained on pro games.


Their Nature paper says "We trained the policy network p_sigma to classify positions according to expert moves played in the KGS data set. This data set contains 29.4 million positions from 160,000 games played by KGS 6 to 9 dan human players; 35.4% of the games are handicap games."

It is possible that they fed it some pro games after the Fan Hui games but before the Lee Sedol games, but that would be weird; at that point it was already learning from self-play rather than trying to match human moves.

That said, I don't think that Master's better performance comes from being trained on pro games. The AlphaGo version that played Lee Sedol played much more like a human pro than Master does.


> games played by KGS 6 to 9 dan human players

I'm confused. I thought 9-dan players were considered pro? That's the highest ranking you can get, right?


There are multiple dan scales. The KGS scale is an amateur dan scale. I don't know how much the scales overlap in general, but I'd imagine a 9 pro-dan professional to be somewhere around 12 dan on the amateur scale (pro scales are also more finely graded). However, both scales cap at 9 dan by convention.

Even the abbreviations differ: 9d (amateur dan) vs 9p (pro dan).


KGS 9 dan players are pros, or amateurs of professional strength such as former insei. The highest rank is effectively almost 11d (the label still says 9d, but the rating graph goes even higher):

https://www.gokgs.com/graphPage.jsp?user=leqi


There's an Elo rating table on this Wikipedia page which pretty much corroborates this:

https://en.wikipedia.org/wiki/Go_ranks_and_ratings


I think those servers have their own ranking system that does not match the "official" rankings (which I think cap amateurs at some lower dan rank).


This is correct. A player's rank will typically differ between servers and go associations. Sensei's Library has more information: http://senseis.xmp.net/?RankWorldwideComparison


Got a source? I can only find references to training on amateur games (e.g. https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Alpha...)


No, but I'm pretty sure the parent post is right. Pro games were included.

Edit: I'm not actually that sure. I'm asking around right now (with go players, not DeepMind people).


> Its mid game is full of startling moves backed by VERY good reading.

This is pretty similar to what chess engines do.


Perfect play is likely inhumanly aggressive on black's part, with white making zero moves. Compared to that, this is a very human style of gameplay, simply based on a different strategy culture, as it were.


I don't see why perfect black play should be any more aggressive than perfect white play. Care to elaborate?


Black moves first; on 3x3 and 5x5 boards, perfect play ends 100% black, with any white stone being captured. Many other board shapes don't, but Go is played on a 19x19 board. We don't know about 9x9 or even 7x7, so the pattern is hardly set in stone. Still, it seems likely.

Now, against perfect white play there may be moves an imperfect black player makes which cause white to attack. But perfect play on both sides probably means any white stone gets captured, so white plays zero stones.


> Black moves first; on 3x3 and 5x5 boards, perfect play ends 100% black, with any white stone being captured.

That is true. The same, however, doesn't hold for larger boards (such as 19x19).

> But perfect play on both sides probably means any white stone gets captured, so white plays zero stones.

For boards larger than the small boards you mentioned above, this is completely untrue.


One technique that can be useful is imposing domain-specific restrictions, then passing responsibility down the chain.

For example, most people only have a handful of recent conversations; can you get away with ignoring distant-past conversations when it comes to read/unread marking? If so, you could send recent normalized data to the frontend and let it derive the unread state client-side. React has some nice tools for efficiently deriving state from server-sent data.
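As a minimal sketch of what that client-side derivation could look like (the Message/RoomReadState shapes and field names here are made up for illustration, not anything from the article):

    // Hypothetical shapes; a real app's schema will differ.
    interface Message {
      roomId: string;
      sentAt: number; // epoch millis
      body: string;
    }

    interface RoomReadState {
      roomId: string;
      lastSeenAt: number; // when the user last viewed the room
    }

    // Derive which rooms are unread from the normalized data the
    // server already sends, instead of storing an "unread" flag.
    function deriveUnreadRooms(
      messages: Message[],
      readStates: RoomReadState[]
    ): Set<string> {
      const lastSeen = new Map<string, number>();
      for (const r of readStates) lastSeen.set(r.roomId, r.lastSeenAt);

      const unread = new Set<string>();
      for (const m of messages) {
        if (m.sentAt > (lastSeen.get(m.roomId) ?? 0)) {
          unread.add(m.roomId);
        }
      }
      return unread;
    }

In React, a derivation like this can sit behind useMemo so it only recomputes when the messages or read states change.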

This isn't always the best solution, but it's something to consider.


Unless the amount of data you need to transmit is large, I always prefer that approach, as it enables other UI features without any additional network trips by the client. Also, it lets one use the same mechanism for multiple purposes (in this case, fetching the latest messages), so it adds less complexity to the app.

In the article's example application, you can transmit the top X oldest messages after the last-seen timestamp for each room, then have the client derive the unreadness of each room. You can then use that same info to do things like preview the oldest unread message for each room and hide latency when opening a room, as you already have a screen's worth of messages in hand. One doesn't need to get more recent messages, as UIs generally don't have room to display large numbers and should switch to a "lots" indicator.
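A rough sketch of that derivation (the RoomPayload shape and the MAX_FETCH cap are invented for the example, not taken from the article):

    // Assumed payload: for each room, the server sends up to MAX_FETCH
    // messages sent after the user's last-seen timestamp, oldest first.
    const MAX_FETCH = 20; // server-side cap; past this the UI just shows "20+"

    interface RoomPayload {
      roomId: string;
      unreadMessages: { sentAt: number; body: string }[]; // oldest first, capped
    }

    function summarizeRoom(room: RoomPayload) {
      const count = room.unreadMessages.length;
      return {
        roomId: room.roomId,
        hasUnread: count > 0,
        // Preview the oldest unread message, as described above.
        preview: room.unreadMessages[0]?.body ?? null,
        // UIs rarely show exact large counts; switch to a "lots" indicator.
        badge: count >= MAX_FETCH ? `${MAX_FETCH}+` : String(count),
      };
    }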


Sure, denormalize working storage in the app, fight to keep the "system of record" / "operational data store" clean.


Hmm. So just make a request for "(my) messages since timestamp-x" from a client, build the join-query between "message rooms for me" and "messages since timestamp-x" on the server, then send a list of messages back to the client for the client code to sort out?

That actually sounds good enough, as far as I can see: sometimes the client code pulls up 3 days (or max-N rows, or whatever) worth of messages when started (no big deal for me, might be for my daughter...), but usually only asks for new messages in the last minute (or whatever polling interval), after which the database and any application/middleware server are left alone.
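Something like the following, sketched with invented table/column names and a generic db.query standing in for whatever database client is actually in use:

    // "My messages since timestamp-x": join the rooms the user belongs to
    // against messages newer than the given timestamp, and let the client
    // sort out unread state from the result.
    type Db = { query(sql: string, params: unknown[]): Promise<unknown[]> };

    async function messagesSince(db: Db, userId: string, since: Date) {
      return db.query(
        `SELECT m.room_id, m.sent_at, m.body
           FROM messages m
           JOIN room_members rm ON rm.room_id = m.room_id
          WHERE rm.user_id = $1
            AND m.sent_at > $2
          ORDER BY m.sent_at ASC`,
        [userId, since]
      );
    }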


It's interesting to contrast how Google and Facebook both approach open source for the web.

Google tends to release code and promote it without really using it much internally first. Documentation is prolific but confusingly organized and often fragmented among several versions simultaneously (cough, Google Analytics).

Facebook, on the other hand, actually seems to use their code before releasing and promoting it. Look at how they handled GraphQL: spec and reference implementation released a year ago, clearly labeled as a "Technology Preview". A lot of design work went into it before that, informed by the problems of internal product teams. Only a few days ago was it promoted as ready for production. The spec hardly changed in the last year. Documentation is good, and they work with the community to improve DX.

Why the difference? Hard to say, but my feeling is that there's a more direct link between Facebook's product-driven open source work and their bottom line. There are other startups constantly nipping at their heels, so they need to be on their game product-wise. Better code -> better products -> profits.

Google is largely impervious in the search and ad space, which is their cash cow. It almost doesn't matter how good or bad their other products are. The company is not at risk. Their open source work reflects that.


> Google tends to release code and promote it without really using it much internally first.

According to Brad Green (the Engineering Director for Angular), Google AdWords, Google Fiber, and some internal tools are all built with NG2. AdWords is kind of a big deal to Google.

Edit: source for AdWords reference, http://angularjs.blogspot.com/2015/11/how-google-uses-angula...


They recently rebuilt Google Merchant Center in Angular.


Except for the tons of counterexamples at Google...

Bazel, Tensorflow, protobufs, GWT (a web/JavaScript project!), dozens of utility libraries, etc.

You are forgetting that Google is a huge company, much bigger than Facebook. It's more a collection of disparate entities than a monolithic giant. Each open source project is run differently.


It's my experience with their web-oriented and JavaScript projects. They definitely put out lots of high quality open source work in other domains.


Google does not actually do much with JavaScript. Sure they have gmail, but they put a lot more effort into graceful degradation than most web companies.


Google has larger and more complex JS apps than most other companies on the planet. They do a lot with JavaScript besides Gmail. Maps, Docs, Photos, and G+ are all large JavaScript applications. And those are just a few.


These are all separate applications which sit in their own silos. They are individually complicated, but they don't need to fit into some company wide framework.


They typically sit on top of internal Google frameworks. Closure's runtime library is the tip of the iceberg.


I think this is an age thing. People who grew up using the web before javascript started destroying it are more likely to accept professional responsibility for progressive enhancement, and Google skews older than many competitors.


    > Google tends to release code and promote it without 
    > really using it much internally first.
Is this actually true? My understanding (from watching many AngularJS presentations) is that Angular was developed with input from many teams at Google.

(edit: Angular was first used on an internal app at Google: https://www.youtube.com/watch?v=r1A1VR0ibIQ&feature=youtu.be...)


"with input" != actually using it. There are very few public-facing Google apps/sites that use Angular, whereas a large chunk of the Facebook frontend uses React and their other libraries like Relay. That said, I have no idea if/how Google uses Angular internally, so it might get more use than we see from the outside.


External sites by Google using angular: https://www.madewithangular.com/#/categories/google


> The company is not at risk. Their open source work reflects that.

Angular came out 2.5 years before React. Facebook had a predecessor to flesh out what does and doesn't work. Google started the autonomous car, and now other companies are following suit. Google starts the race, but they might not be in first place at the end. Ultimately, consumers win.

> Better code -> better products -> profits.

What about Golang? Though that conclusion is out of scope for the premise of 'open source for the web' anyway.

Is it the documentation that is the essence of 'better code'? If not, then what? Left to my own devices, I will summon functional programming constructs such as Monads or Catamorphisms in personal projects. Keyword: personal projects. I think it's elegant, but someone unfamiliar with these constructs might abhor it. Analogously, what's the best programming language?

Do the arrows imply: if better code then better products, and if better products then more profits? If that's the logical structure, I can easily think of examples of companies enjoying great profits but bad code / bad products. Moreover, the direction of causality could also be profits -> better products -> better code. In reality, it's most likely to be a complex / dynamical relationship involving many other variables.

> Google is largely impervious in the search and ad space, which is their cash cow. It almost doesn't matter how good or bad their other products are.

What other products from Facebook did you have in mind? I genuinely cannot think of anything other than the social network, Instagram, and Facebook messenger. Facebook is largely impervious in the social networking and ad space. Does it matter how good or bad their other products are?


This reminds me of the excellent episode of Changelog in which Facebook's head of open source discussed the logistics of managing React so that the public version is exactly the same as the version used by Facebook itself: https://changelog.com/211/

Really drove home the concept that open-source requires significant thinking and discipline when a library becomes heavily used.


That's not any different from how Google uses Angular - Google runs off of HEAD on master, for both Angular 1 and 2.


Can you enlighten me where Google uses Angular in their own apps?



That's awesome, thank you.


I've been told directly by multiple Angular team members that over 70% of Google apps use Angular in some fashion.

They have also said publicly in the past that Google runs Angular off of HEAD of master, but I don't recall off the top of my head where they said this.


Well, this thread definitely shed some new light on the area for me. I was actually under the impression that Angular (at least 1.*) wasn't really used within Google. I guess I'm just that stupid and naive.


Yeah I think that's the key difference: the size of the production apps. Though to be fair, Angular is a much bigger part of the front-end stack than React and not something that would be feasible to retrofit GMail/YouTube/etc. with.


It looked like a risky dependency, simple as that. Despite the HN title, the point of the article was not to talk about React Router.

EDIT: Additionally, I'm not saying React Router is bad per se, just that it's not fully baked. I'm glad to see people working on the problem. I look forward to leveraging the fruits of their labor in the future (in fact, I already do - I use the history library on which RR is built).


I still feel that you're not giving reasons why it's a risky dependency or not fully baked. And probably we/you should rename the title if it doesn't convey the article's content...

Not that I am defending or even using React Router myself; I'm just saying that it's usually better for everyone if the feedback is better explained than just "It's bad" or "It's risky to use".


You're not wrong. But it feels worse in JS. In both cases lib authors would do well to set proper expectations - if your lib is half baked, be up front about it.


Yeah maybe. To me it feels like many in the community complain and shirk responsibility, and some stern words were needed to counterbalance that position. If the community wants stability in its projects, it needs to learn what it takes to achieve that.


Further thoughts: choosing an unstable library and then criticizing it for being unstable seems silly. There are two sides: library authors need to value things like careful design, real world testing, and backwards compatibility. Library consumers need to advocate for the same things, plus learn how to identify risk (semver isn't going to save you), and take ownership of their choices.


> Further thoughts: choosing an unstable library and then criticizing it for being unstable seems silly.

If so many people (apparently) missed the "expect this to be unstable" bit, I wonder if it's just a question of not being signaled effectively enough on Router's home page. Which I can understand, since no developer actually expects, let's say, their v3 to be completely revamped into a v4. If they knew ahead of time, they'd presumably just have chosen the v4 design.

I guess it's kind of a catch-22. Maybe the right thing here is to explicitly say "we currently fully expect this to be stable for the foreseeable future, but cannot predict the future, and are prepared to break everything if a better design is discovered"?

EDIT: I suppose another way to alleviate the problems would be to pledge support for the previous version for a period of time... but no developer working in their spare time really wants to do that. (For very understandable reasons.)

