
But the thing that’s unusual about good scientists is that while they’re doing whatever they’re doing, they’re not so sure of themselves as others usually are. They can live with steady doubt, think “maybe it’s so” and act on that, all the time knowing it’s only “maybe.” Many people find that difficult; they think it means detachment or coldness. It’s not coldness! It’s a much deeper and warmer understanding, and it means you can be digging somewhere where you’re temporarily convinced you’ll find the answer, and somebody comes up and says, “Have you seen what they’re coming up with over there?” and you look up and say “Jeez! I’m in the wrong place!”

It makes me happy whenever I read quotes like this of Feynman's.


Not a very rigorous test, but I just applied this to one of our (very) large Angular 1.x apps and it gave a consistent 20-30% reduction in profiled execution time across the couple of test interactions I did.

That said, I'm not sure it made much perceptible difference as the app already performed adequately and the reduction is amortized across all of the interactions the user makes.


If you tested on your machine, I hope you factored in that your users may not have an up to date machine.



To me that recent one is an example of overdevelopment of the art.

I remember commenting on a different artist's rework of his old comic strip, about how the new strips looked too busy and that there was something beautiful and thematic about his earlier work. He replied that he also preferred his earlier work, but he just couldn't make himself 'stop that early' when drawing anymore.

Certainly in both cases there's a lot more skill in the later work, but it's interesting to note that overdevelopment is also a thing.


I agree and thought he peaked around 2004, e.g.:

http://art.penny-arcade.com/photos/215499399_pfUFC/0/1050x10...

http://art.penny-arcade.com/photos/215499192_iWbzW/0/1050x10...

The cartoon faces were more expressive with less detail than the current ones, but he was able to draw just about anything he wanted in a funny way, which wasn't the case in the beginning.



The non-regular-format pieces might be a better example. Here's a recent one:

http://penny-arcade.com/comic/2014/12/12/fan-fiction


I call this the Pink Floyd Effect.


As with the shirt example in the original article, the cost of labour (i.e. paying people for their time) represents the majority of the cost of building a house. The other items you list use cheaper overseas labour in their construction, whereas if you're building a house you by definition need to pay local craftspeople for their time. Also, due to their physical size, houses require lots of people to put together (i.e. the time is in some sense proportional to the size of the item being constructed).


Am I right in saying that the complaints with btrfs in CoreOS are specifically around its use in conjunction with Docker?

(Interested as I'm thinking about building a homebrew NAS/general purpose server w/ btrfs. There's a lot of outdated info on btrfs, but I was getting the impression that it's now a pretty stable and usable filesystem.)


I can say that ZFS has worked great for me on BSD-based home servers. Haven't used ZFS with Linux yet; it's possible to do so, it's just unpopular, partly for licensing reasons. I suspect what type of RAID you do may have greater consequences than what file system you pick, particularly if your distro is already designed for serving files on the file system you choose. Oh, and working out all the AFP/Samba bits is fun, because there's always something that surprises you.


When I last tried it, the issue with ZFS was performance and the massive amount of RAM required to enable the features I wanted.

It seems performance has improved and is still improving, according to this zfsonlinux benchmark from last year by Phoronix: http://www.phoronix.com/vr.php?view=19059


I got severely burned by ZFS on Linux running in AWS. Heavy NFS load (ZFS NFS, not Linux kernel NFS) caused a kernel panic, pretty reproducibly. This was on Ubuntu 12.04 with the official ZoL PPA sources, so YMMV.


For managing volumes, ZFS on Linux works great. But for managing NFS, I'd definitely go with a separate NFS implementation if I wanted to heavily use NFS. The primary developers/users of the ZFS on Linux port are mainly using it for highly-available single-machine volume management, and exporting volumes to Lustre or other clustering filesystems for use in massive HPC clusters like those over at Lawrence Livermore National Labs.

http://www.nersc.gov/assets/Uploads/W01-ZFS-and-Lustre-June-...

ZFS is meant for managing local drives, and to make it performant you need to configure SSD partitions to act as an L2ARC cache. The online documentation is pretty good, so after going through the docs it should be pretty clear how to set ZFS up properly for your use cases.
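
For reference, attaching an SSD partition as L2ARC is basically a one-liner (pool name "tank" and the device path are just placeholders here):

    # add an SSD partition as an L2ARC read cache to an existing pool
    zpool add tank cache /dev/sdb1

    # check that the cache device now shows up under the pool
    zpool status tank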

But it sounds like you're using EBS drives? If so, not sure why you'd want to use ZFS. Last I checked, ext2 or xfs was the way to go with EBS drives on AWS. AWS has so much stuff going on in the background to ensure reliability/availability of EBS volumes that adding another layer isn't worth it IMO and I've seen similar kernel panics running other complicated volume managers on top of EBS.


> Heavy NFS load (ZFS NFS, not linux kernel NFS) caused a kernel panic, pretty reproducibly.

"ZFS NFS" is "linux kernel NFS". Setting "sharenfs" options on ZFS on Linux dataset simply informs the normal kernel NFS service of those exports.


I've had ZFS on Linux freeze up on me every few months too. Not a production system, fortunately.


One thing you might want to check is whether you're setting a limit in the driver for the amount of memory ZFS uses for caching. By default it'll use a LOT of memory, so I usually just set it to a max of 2GB and don't see any issues.
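
In case it's useful, on ZFS on Linux that cap is the zfs_arc_max module parameter (value in bytes; 2GB shown here):

    # persistent: put this in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=2147483648

    # or apply it at runtime without reloading the module
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max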



I find these landing pages that don't even let me have a peek at the actual product, so they can funnel me down the "Try now for free" path, really irritating.


I remember that not long ago the same type of landing page, that time for Swift courses, appeared the day after Apple announced the Swift programming language at their event.

I guess people nowadays want to research their market before even writing any code.


Hi, author of Harrow/Capistrano here. Actually there's a video on the way, since we're still pre-alpha, so that people can see the (pretty rough) workflows. There's a reason we haven't submitted the tool to HN ourselves yet. Yesterday we ran a tweet/survey about Capistrano via @capistranorb; it looks like cnolden wanted to pick up some HN karma! Sorry that you saw (what I consider to be) a bit of a (technically) disappointing product landing page.


Nice to know that. I already signed up for your MailChimp list. Hope things go well and you have something ready for testing soon. It should be an interesting project.


NetBeans also has a local history recording feature which is integrated pretty well alongside the git/VCS features (just hit History on any file to see a log of all your local changes intermingled with the committed versions).


There's nothing in the parent comment that seems unreasonable to me. You're putting a spin on it that doesn't exist.

The content of Quora today is what it is. The Internet Archive has no agenda for misrepresenting the content of any site. They don't want "records to be" anything other than what is reality now, tomorrow, and in the future.

The Archive's stance is perfectly reasonable. You can't arbitrarily go back in time and remove content that existed at the time, otherwise it's not a historical record.

So you can opt out entirely or be included in the Archive's records; it's that simple.


> You can't arbitrarily go back in time and remove content that existed at the time, otherwise it's not a historical record.

With all due respect, the previous line is just your opinion. Court transcripts and other historical records get redacted all the time.

The Archive's stance might be reasonable, but so is Quora's. I object to the idea that Quora is "selfish" for letting people control their own content. Read that guy's original post:

> What Quora is asking for from the Internet Archive — and really, since the Archive has no public competition, from the Internet — is unreasonable, short-sighted, and selfish. Quora is simply being a shark about "their" content, at the public's expense.

The post is nothing more than an attempt to shame Quora into opening up their data. There are many people that don't want everything they post on the internet going into permanent and searchable databases.


> The post is nothing more than an attempt to shame Quora into opening up their data.

My post is definitely an attempt to shame Quora into opening up their data, in at least the sense of making it available to the Internet Archive. No bones there.

> There are many people that don't want everything they post on the internet going into permanent and searchable databases.

We may just disagree to some extent on what the norms should be, but I think if you're intentionally posting public content to a public website, that's part of the permanent public record. Especially when that website is about accumulating a knowledge base.

Wikipedia, another knowledge base, records everything. Though unlike Quora, you're allowed to contribute fully anonymously (without even registering an account -- in fact, come in through Tor, if you like). They have no problem allowing themselves to be backed up on the Archive, and I'd be pretty worried if they did.

In fact, Wikipedia's robots.txt is really interesting: https://en.wikipedia.org/robots.txt

There are some brief bot exclusions, a brief, now commented-out section asking the Internet Archive not to archive user pages, and then a very, very long section blocking various pages from being indexed by anyone. That long section has a lot of thought and history in it, including notes about the Internet's memory about users, like "Folks get annoyed when XfD discussions end up the number 1 google hit for their name."

I think it's totally fair to argue with Wikipedia about the choices it makes in its robots.txt, but ultimately what we're talking about here are organizations making these choices on behalf of users, not the individual users themselves.

If individuals are concerned about their contributions being preserved, that should be something they take up with the Archive. The Archive respects take down requests, both because copyright is a thing and because they're not interested in harming individuals.

I don't think we're working in the service of humanity by blessing companies that gate the future's access to massive troves of knowledge that was freely contributed to public websites.


I think you used the wrong word - IMO Uncle Bobisms often show a lack of practical pragmatism. The vibe I usually get from listening to him is along the lines of "if everyone just did things the right way then we wouldn't have all these problems", which is, firstly, not a realistic point of view, because there is no reality where every developer on a team is going to do everything the same way, let alone one person's idea of "the right way", and secondly, it's just unprovable conjecture that these things would solve all our problems in the first place.

I'd rather listen to people with a proven track record for shipping great software and a history of reasoned pragmatism regarding techniques and methodology.


I agree with you; just a note, "practical pragmatism" is a pleonasm: http://en.wikipedia.org/wiki/Pleonasm


Don't install games on your "work" PC.

This is a really simple way to keep yourself honest. Even if it's partitioned OSes on the same machine it would help.

