
I feel like most social media would be much more usable if there were some kind of browser extension that automatically clustered comments and hid opinions/sentiments that have already been repeated a dozen times in the same thread (the clusters could be labeled with the number of comments on each topic).

edit: Actually, before someone submits a comment, they should be told how many people have already said the exact same thing, like a repost warning.


My experience in job hunting (my resume is Swiss cheese, full of gaps, nothing particularly impressive) matches this. Give a good presentation at a conference, go to a language meetup, chat with some engineers directly, and you can often skip technical screening. Most of the time I don't even need a resume, or the resume is a formality after the decision.

I'd add one more tip: I think far more software developers should add unit-level fuzz testing throughout their projects. Fuzzy bois are like assertions on steroids.

With large projects you often get modules which have an API boundary, complex internals, and clear rules for what correct and incorrect look like: for example, data structures or complex algorithms (A-star, or whatever).

Every time I have a system like this, I'm now in the habit of writing 3 pieces of code:

1. A function that checks that the internal invariants hold. E.g., in a Vec, the allocated capacity should be >= the current length. In a sorted tree, iterating through the items yields them in sorted order, and children are always >= the internal nodes (or whatever the rules are for your tree). During development, I wrap my state mutators in check() calls, so I know instantly when one of my mutating functions has broken something. (This is a godsend for debugging.)

2. A function which randomly exercises the code, in a loop. E.g., if you're writing a hash table, write a function which creates a hash table and randomly inserts and deletes items in a loop for a while. If you've implemented a search algorithm, generate random data and run searches on it. Most complex algorithms and data structures have simple ways to tell if the return value of a query is correct, so check everything. For example, a sorted tree should contain the same items in the same order as a sorted list; the tree is just faster. So if you're writing a sorted tree, have your randomizer also maintain a sorted list, and periodically check that the sorted list contains the same items in the same order as your tree. If you're writing A-star, check that an inefficient flood-fill search returns the same result. Your randomizer should always be explicitly seeded so that when it finds problems you can easily and deterministically reproduce them.

3. A test which calls the randomizer over and over again, and checks that all the invariants hold. When this can run overnight with optimizations enabled, your code is probably OK. There's a delicate performance balance to strike here: it's easy to spend too much CPU time checking your invariants, and if you do that, the test won't run enough iterations to find rare bugs. I often end up with something like this (a runnable sketch follows the list):

    loop (ideally on all cores) {
        generate random seed
        initialize a new Foo
        for i in 0..100 {
            randomly make foo more complicated
            (at first check invariants here)
        }
        (then later move invariants here)
    }
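Putting the three pieces together, here's a minimal, self-contained Rust sketch (my illustration, not the author's code or the jumprope-rs API): a toy sorted structure with a check() invariant function, a seeded randomizer that compares it against a brute-force oracle, and an outer test loop that prints each seed so failures replay deterministically.

    /// Toy structure under test: keeps items sorted. The names here
    /// (SortedVec, fuzz_once, fuzz_forever) are illustrative only.
    struct SortedVec { items: Vec<u64> }

    impl SortedVec {
        fn new() -> Self { SortedVec { items: Vec::new() } }

        /// Piece 1: assert every internal invariant.
        fn check(&self) {
            // Invariant: items are in non-decreasing order.
            assert!(self.items.windows(2).all(|w| w[0] <= w[1]));
        }

        fn insert(&mut self, x: u64) {
            let pos = self.items.partition_point(|&y| y < x);
            self.items.insert(pos, x);
            self.check(); // during development, every mutator calls check()
        }
    }

    /// Tiny xorshift PRNG, so the seed fully determines the run and the
    /// sketch needs no external crates.
    fn xorshift(state: &mut u64) -> u64 {
        *state ^= *state << 13;
        *state ^= *state >> 7;
        *state ^= *state << 17;
        *state
    }

    /// Piece 2: randomly exercise the structure, comparing it against a
    /// trivially-correct oracle (a Vec kept sorted by brute force).
    fn fuzz_once(seed: u64) {
        let mut rng = seed.max(1); // xorshift state must be nonzero
        let mut tree = SortedVec::new();
        let mut oracle: Vec<u64> = Vec::new();
        for _ in 0..100 {
            let x = xorshift(&mut rng) % 1000;
            tree.insert(x);
            oracle.push(x);
            oracle.sort();
        }
        // Same items, same order.
        assert_eq!(tree.items, oracle);
    }

    /// Piece 3: the outer loop. Printing the seed means any failure can
    /// be replayed deterministically by calling fuzz_once(seed).
    #[test]
    #[ignore] // run explicitly: cargo test fuzz_forever -- --ignored
    fn fuzz_forever() {
        for seed in 0u64.. {
            println!("seed {seed}");
            fuzz_once(seed);
        }
    }

The same shape scales up: swap SortedVec for your real structure, and the brute-force oracle for whatever slow-but-obviously-correct model fits your problem.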
Every piece of a large program should be tested like this. And if you can, test your whole program like this too. (Doable for most libraries, databases, compilers, etc. This is much harder for graphics engines or UI code.)

I've been doing this for years and I can't remember a single time I set something like this up and didn't find bugs. I'm constantly humbled by how effective fuzzy bois are.

This sounds complex, but code like this will usually be much smaller and easier to maintain than a thorough unit testing suite.

Here's an example from a rope (complex string) library I maintain. The library lets you insert or delete characters in a string at arbitrary locations. The randomizer loop is here[1]: I make a Rope and a String, then in a loop make random changes, then call check() to make sure the contents match and all the internal invariants hold.

[1] https://github.com/josephg/jumprope-rs/blob/ae2a3f3c2bc7fc1f...

When I first ran this test, it found a handful of bugs in my code. I also ran this same code on a few rust rope libraries in cargo. About half of them fail this test.


I've spent thousands on ads and have overall made a 5-13x ROI. My bad ads get fewer sales than my good ads: the good ones consistently work, the bad ones don't. So ad copy matters.

However, a good advertiser accepts that not everyone is a good ad target. You may not be a good target for advertising; I'd say I'm not. Some people are very moved by advertising, others are not. The 80/20 rule applies here: 80% of the profit in advertising probably comes from 20% of the target population for a given product.

I've learned that it's better to think of good advertising as just a form of communication. A great ad is factual information that answers questions customers have and provides urgency. An ad might contain some fluff, but to really sell it usually needs to state explicit, accurate facts in easy-to-understand language.

There is a lot of bad marketing from people who think they can make an ad just because they're hip or have an eye for color. Those ads can be egregiously bad. Just as some marketing sites are all fluff and no substance yet get sold to unsuspecting business owners, the same happens with advertising.

But the key idea is that, for certain products, good ads sell "enough" to the target audience to be very lucrative.

A great book on this topic is "Ogilvy on Advertising." Specifically, the first chapter.


Yes, they are the best, but are they good?

Professionally influential:

  - High Growth Handbook (general company building tips)
  - Traction (the one by Weinberg and Mares; engineer-friendly guide to marketing and growth)
  - Understanding Michael Porter (great intro to business strategy)
  - Monetizing Innovation (pricing advice)
Personally influential:

  - Thinkertoys and Cracking Creativity (how to be more creative)
  - Atomic Habits (how to establish good habits)
  - A Guide to the Good Life (friendly intro to stoicism)
  - What Got You Here Won't Get You There (building self-awareness)
Fun:

  - Richard Feynman autobiographies
  - The Martian
  - Shadow Divers
  - Ready Player One
  - The Myron Bolitar Series (mysteries with a good sense of humor)

I think Reddit's decision to move from Common Lisp to Python is interesting and their reasoning (ecosystem) is still valid 15 years later.

The historical timeline is especially interesting because Reddit's cofounders Steve Huffman & Aaron Swartz were alumni of Paul Graham's first YC batch and PG is the author of the well-known Lisp essay "Beating the Averages":

- 2001-04 Paul Graham : Lisp essay "Beating the Averages" [1]

- 2005-07-26 Paul Graham : https://groups.google.com/forum/#!topic/comp.lang.lisp/vJmLV...

- 2005-12-05 Steve Huffman : https://redditblog.com/2005/12/05/on-lisp/

- 2005-12-06 Aaron Swartz : http://www.aaronsw.com/weblog/rewritingreddit

The takeaway from the Reddit case study is this: yes, Lisp macros and language malleability give it superpowers that other languages don't have (the "Blub Paradox") -- but other languages have superpowers of their own (ecosystem/libraries) that can cancel out the power of Lisp macros.

You have to analyze your own project to predict whether the ecosystem matters more than Lisp's expressive power.

[1] http://www.paulgraham.com/avg.html

