rustyfe's comments

Worth calling out that Serious Eats is a publication with many recipe developers. Not all of them are Kenji's equal (although Daniel Gritzer is unequal in the other direction, because he's even better!).

Serious Eats is overall great, but definitely trust the byline, not the publication.


+1 to Daniel Gritzer. But Serious Eats has generally earned my trust by now; they must have a really good editorial process over there to keep the quality so high.


I didn't know about hoverfly when I wrote mock-proxy, but the proxy server mode looks very much like the same strategy.

One feature mock-proxy has that hoverfly lacks is first class support for git repositories as an endpoint type. This can simplify the mocking process if what you're mocking is an HTTP git clone.

But overall hoverfly looks a lot more feature-complete and super polished, so thanks for telling me about it!


The (1) in make(1) indicates which section of the manual[1] make is documented in. This is useful for distinguishing things that appear in multiple sections, like printf(1) the user command and printf(3) the C library function. When everyone knows what's being discussed, I think it's mostly a shibboleth showing that they've read the fine manual.

[1] - https://en.wikipedia.org/wiki/Man_page#Manual_sections


Yes. They showed in the demo that there will be a new scope for read/publish packages. So you can create a personal access token for Travis with only that scope.


We'll take 60 possible characters (alphanumeric with caps and a few special characters). The summation from x = 1 to N of 60^x is (60/59)(60^N - 1). With a known length it's just 60^N. If we assume N is big enough that the minus one doesn't matter, you can see that going from known to unknown length only increases the number of guesses by a factor of 60/59.
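A quick sketch of that arithmetic in Go; the alphabet size of 60 is from the comment above, and the maximum length N = 12 is just an illustrative choice:

```go
package main

import "fmt"

func main() {
	const alphabet = 60.0
	const n = 12 // hypothetical maximum password length

	// Guesses when the length N is known: 60^N.
	known := 1.0
	for i := 0; i < n; i++ {
		known *= alphabet
	}

	// Guesses when the length is unknown: sum of 60^x for x = 1..N.
	unknown := 0.0
	term := 1.0
	for x := 1; x <= n; x++ {
		term *= alphabet
		unknown += term
	}

	// The ratio comes out at essentially 60/59 ≈ 1.0169.
	fmt.Printf("ratio = %.6f, 60/59 = %.6f\n", unknown/known, 60.0/59.0)
}
```

So not knowing the length costs the attacker under 2% extra work.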


In my opinion, this won't stop you from enjoying Infinite Jest. There are a few arcs that might require some insight into American culture (professional football, the US drug rehabilitation system). But the big-picture themes are either very universal or so particular you aren't really expected to be keyed into them (you don't need to be a competitive tennis player or a Quebecois separatist to enjoy it).

The Pale King might be a bit of a different beast. The focus on the IRS is somewhat particular, and you may miss big beats because of ignorance about the IRS. But at the same time, some of the accounting minutiae are such that I don't think you're expected to understand them.

Anyway! I wouldn't let this steer you away. To me, the joy of DFW's writing is in the individual sentences, the mannerisms and humanity of the characters. If you miss some details because they're US-focused, I don't think it'll be anything important.


Maybe you are not in the United States, but for those that are, the answer is pretty simple.

It is reasonable to trust a VPN provider more than an ISP because you have a choice of VPN provider: you can vet them and choose the one you feel provides the best safeguards for your privacy and security. Most Americans have zero or one choice for high-speed internet. Even in major metropolitan areas it is common to live in a cable monopoly, with a phone company providing sub-par "competition". You cannot vet your options and choose the one that provides the best experience, because you have no options. Even those who do have a choice may still connect to coffee shop or hotel WiFi on occasion, losing that choice again.

In short, VPN providers are a) competitive and b) portable.

You're not wrong that you're putting the same amount of trust in them, but these properties mean you would not be wrong to do so.


I would say the state of Go dependency management is somewhere between what GP implies: "everyone is using go get to run things directly from master and that's terrible" and what you imply: "everyone has standardized around dep and that's fine".

I picked 5 "important" Go projects without checking first, and here's the solutions they use:

Kubernetes: The deprecated tool Godep + a pile of Make and Bash (which is the Go-est thing I've ever heard in my life)

Docker (Moby): vndr

Hugo: dep

Etcd: A bit of a mish-mash of dep and vndr, but mostly dep.

Cobra: Doesn't vendor dependencies and doesn't have reproducible builds (which is somewhat okay, as Cobra is primarily a library rather than a tool in and of itself, but it does also have a CLI that probably breaks a lot).

In short, things are not currently fine. Dep is an okay tool, but fails for some use cases, and the community has not really rallied around it. Lots of important projects are sticking with what they've got. Glide, gvt, vndr, and Godep all remain important.

We're deluding ourselves if we discard an outsider's view that Go's dependency management situation is a dumpster fire, because it is. However, to GP: it isn't quite as bad as you suggest. Most projects using Go have found some way or another of getting reproducible builds, and don't just run everything from the tip of everyone else's master.

I am cautiously optimistic that modules will finally solve this mess, but we'll see to what extent they win in the marketplace of dependency management solutions.
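For what it's worth, the modules approach records dependencies declaratively in a go.mod file. The module path and versions below are made up for illustration:

```
module example.com/mytool

go 1.11

require (
    github.com/spf13/cobra v0.0.3
    github.com/pkg/errors v0.8.0
)
```

Versions are pinned there, and checksums in an accompanying go.sum verify that everyone fetches the same code, which is what would finally give reproducible builds without a vendor directory.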


Note that the Kubernetes people tried to migrate to Dep, but weren't successful. As I recall, it was due to several blocking issues (e.g. see https://github.com/kubernetes/client-go/issues/7).


> Kubernetes: The deprecated tool Godep + a pile of Make and Bash (which is the Go-est thing I've ever heard in my life)

I haven't written Go in a couple of years, but I lol'd. This is exactly what we used to do, except with glide in place of Godep.


this is an accurate assessment. i work in the dystopia of microservices and every service has its own unique build.

Godep, dep, glide, make files, you name it. it's a total dumpster fire.


I wonder how much this varies by programming language. In Go, gofmt ensures that most stylistic choices are similar for everyone (Go code tends to look alike) and patterns and rules are fairly well codified (Go variable and function names tend to follow certain rules). The more codified the code is, the less fingerprints each developer can leave.

It'd be interesting to see the most and least identifiable languages!


It is not about tabs, spaces, and variable names. The main differences are how long your functions are, how many functions they call, how deep the control structures within functions are, etc.


Would be interesting to tie that to ratings. Have a diverse set of people read lots of code (samples from projects large and small, in different languages, etc.) and rate it on qualities like "easy to understand" and "easily extensible", then correlate that with styles. I'd be very interested in how I score and what I can improve.


It sounds like you're talking about cyclomatic complexity.

https://en.wikipedia.org/wiki/Cyclomatic_complexity

There are tools out there for most major programming languages for measuring cyclomatic complexity. They can tell you whether typical coders will be able to help you work on your program at its current complexity level.
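As a rough sketch of the idea: cyclomatic complexity is essentially 1 plus the number of branch points in a function. Real tools walk the AST rather than matching tokens; the helper and the sample body below are made up purely for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// approxComplexity crudely approximates McCabe's cyclomatic complexity
// for a Go function body: 1 plus the count of branching constructs.
// A real analyzer would parse the source instead of counting substrings.
func approxComplexity(body string) int {
	complexity := 1
	for _, kw := range []string{"if ", "for ", "case ", "&&", "||"} {
		complexity += strings.Count(body, kw)
	}
	return complexity
}

func main() {
	body := `
		if err != nil {
			return err
		}
		for i := 0; i < n; i++ {
			if i%2 == 0 && i > 2 {
				total += i
			}
		}
	`
	// Two ifs, one for, one && => complexity 5.
	fmt.Println(approxComplexity(body))
}
```

A common rule of thumb is that functions scoring above roughly 10 become hard for others to follow, though where to draw the line is a judgment call.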


I heard of this but haven't seen it used anywhere outside of school. Do you have any experience whether this correlates with how readable people find code to be?


No, sorry. Actually, I feel like long functions are often clearer than refactored equivalents.


Okay, thanks for responding!


My style changes over time.


As long as it changes gradually, or partially, that might not matter.


I'm also wondering if this transfers between programming languages, and to what degree.


Is it good or bad if a programming language leaves room for personal style?


This argument really doesn't hold water for me. It seems dramatically more likely that the human agent overriding the model is prejudiced than that the mathematical model itself is.

Of course it's possible (in fact, almost certain) that a mathematical model trained on a large dataset will pick up some problematic features. But are those statistical inferences really likely to be more biased than a human being?

I'm sorry, but in my experience the number of racist human beings outweighs the number of racist computers.

Your examples seem so fraught. "The Johnsons are unreliable", coming from a human, seems as likely to mean that John Johnson and the overriding agent's sister had a nasty breakup as it does to mean they're likely to bounce checks. "The Kennedys are good for it" just sounds like code for "the Kennedys are of the racial group the agent prefers".

I agree with you that we can't blindly follow computer models, but I don't think I follow you to your conclusion that the loan officer was a valuable safety net.

