My fiancée is probably your target demographic -- she LOVES socks to an irrational degree. I was very curious about what it would cost to buy this as a gift for her... but there's one major problem:
The price isn't listed anywhere on your site. Evidently I have to create an account before you'll tell me how much it costs.
So... no thanks. I'm immediately struck with the impression that this is almost certainly massively overpriced if you won't even list the price before starting the signup process. Seriously... just tell me how much it costs!
Ok, a followup. I poked around a bit more, and yeah, buried in the FAQ section I finally found the price: $11/month. That's $11/pair of socks.
Yikes -- there may be socks worth that much, but at that price (or even a fraction of it -- I get that you need to make a profit), there needs to be a lot more discussion of why these socks are so amazingly special. The blog has entries about awesome sock companies and why they're good... but it never says that those are the socks you're shipping.
Also, I think you're overlooking that a lot of people (like me) will want to give this as a gift. As far as I can tell, your current model is sign up, then cancel anytime. Can't I buy this for just 3 months, 6 months, or a year? If I'm giving this as a gift, I'd rather not have my credit card charged every month from now until the heat death of the universe just because I forgot about it.
$11 for a single pair? That's vastly overpriced, although maybe there is a target audience that would pay that much... are there really people who care that much about wearing fashionable socks? Half the time I don't even wear matching pairs, because my pants cover them up and nobody can see them anyway.
I think the article you're referring to is called "Is There Anything Good About Men?", and it has been submitted to Hacker News multiple times. Even though it doesn't have anything to do with technology, it's one of the most insightful articles I've found here.
I wrote (well, updated/adapted, actually) a plugin for Redmine which allows hosting of git repositories (https://github.com/ericpaulbishop/redmine_git_hosting), and I integrated Scott Chacon's "grack" code so that it supports smart HTTP out of the box.
If you're interested in this feature, it may be helpful to check out the code in app/controllers/git_http_controller.rb in my plugin. Note that the Rails user needs to have access to the git repository directory for this code to work. I handle this with a sudo configuration, which is an extra setup step for users and requires them to have admin privileges. You may want to handle this differently.
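To give a rough idea of what that sudo setup looks like, here's the sort of rule I mean (the usernames and command list are placeholders for illustration, not the exact configuration the plugin documents):

    # Hypothetical /etc/sudoers.d/redmine entry: let the Rails user run a few
    # commands as the user that owns the git repositories, without a password.
    # "www-data" and "git" are placeholder usernames -- adjust for your setup.
    www-data ALL=(git) NOPASSWD: /usr/bin/git, /bin/chmod, /bin/chown

The idea is that the Rails process only escalates through sudo for the handful of operations that touch the repositories directly.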
How is this superior to jPlayer (http://www.jplayer.org/)? jPlayer is implemented in HTML5 with a fallback Flash implementation for backwards compatibility, and, like this, it is open source (GPL/MIT).
Have you considered implementing some sort of script to scan some of the large biological databases and add links/metadata for the datasets they contain?
Looking at what's in CKAN now, it seems that it's mostly datasets that are a bit more easily understood than most of the biological data that's out there, but at the same time indexing and accessing biological data is a HUGE problem for researchers in this field.
There are currently some big databases such as the data stored by the UCSC genome browser (genome.ucsc.edu/downloads.html) and all sorts of expression/small RNA data available from GEO (Gene Expression Omnibus, http://www.ncbi.nlm.nih.gov/geo/), and lots of other slightly more esoteric databases like flybase.org, which specializes in fruit fly data.
Truly doing a proper job of indexing/classifying all of this is a close-to-impossible task (and in many cases requires specialized knowledge), but there are an absurd number of publicly available biological datasets out there. If you wanted to rapidly expand the number of entries you have you could use a script to index one or two of the big databases like GEO, and fill in the metadata from what they already have.
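Just to make that suggestion concrete, here's a rough sketch of the kind of script I have in mind: it pulls a handful of GEO series records via NCBI's E-utilities and registers each one as a dataset through CKAN's API. The CKAN URL, API key, and field mappings are all placeholders/assumptions on my part -- check the current CKAN API docs for the exact call.

    # Rough sketch only: pull GEO series records via NCBI E-utilities and
    # register each one as a CKAN dataset. The CKAN endpoint and field names
    # here are assumptions -- check the CKAN API docs before relying on this.
    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
    CKAN_URL = "https://ckan.example.org"   # placeholder -- your CKAN instance
    CKAN_API_KEY = "your-api-key-here"      # placeholder

    def geo_series_uids(term="GSE[ETYP]", count=5):
        # Search the GEO DataSets database (db=gds) for series entries.
        url = "%s/esearch.fcgi?db=gds&term=%s&retmax=%d&retmode=json" % (
            EUTILS, urllib.parse.quote(term), count)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["esearchresult"]["idlist"]

    def geo_summary(uid):
        # Fetch title/accession/description for one GEO entry.
        url = "%s/esummary.fcgi?db=gds&id=%s&retmode=json" % (EUTILS, uid)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["result"][uid]

    def register_in_ckan(summary):
        # Create a CKAN dataset that links back to the GEO accession page.
        acc = summary["accession"]
        dataset = {
            "name": acc.lower(),
            "title": summary["title"],
            "notes": summary.get("summary", ""),
            "url": "https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=" + acc,
            "tags": [{"name": "biology"}, {"name": "gene-expression"}],
        }
        req = urllib.request.Request(
            CKAN_URL + "/api/3/action/package_create",
            data=json.dumps(dataset).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": CKAN_API_KEY})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    for uid in geo_series_uids():
        register_in_ckan(geo_summary(uid))

Even something this crude, pointed at one of the big databases, would add a lot of entries with reasonable metadata already filled in.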
Of course, I can also understand why you might prefer to have the majority of the datasets in your site be the sort of thing most people (or at least, non-biologists) can interpret vs. something that's highly specialized like this. Not to mention, keeping up with all the new data, and properly filling in all the metadata could be a real can of worms.
Sorry for the late reply. I sadly don't understand the concerns of this field very well. There are many very large datasets referenced on CKAN, mainly links to huge triple stores. There are many biological data sets as well, e.g. flybase as mentioned. These triple stores are too big to do any decent dynamic linking against, which is a big shame.
If you get the opportunity, could you repost this to ckan-discuss@lists.okfn.org? There are people on that list who understand these issues far better than I do, and they would love to hear from anyone interested.
I have an old Pentium II 300MHz PC from the late 90s with 128MB of memory. I'm not sure when exactly it was first purchased, since I first got it second hand off of someone else in 2003. I added a new ethernet card in 2003 when I got it, and the CD drive has been replaced since, but other than that it's the original hardware (including 15" CRT monitor). The hard drive died long ago, but it doesn't need one to do its job.
And yes, it is used in production, for one very special task:
I'm the author of Gargoyle Router firmware (www.gargoyle-router.com), an alternate firmware for wireless routers. I've recently started selling some small routers with my software installed, but loading each one individually takes some time -- about 10 minutes each. This 10+ year old system runs a customized version of Knoppix, which can be used to install my software onto a large number of routers simultaneously. It means I can flash as many routers as I want within 10 minutes, instead of having to wait for 10 minutes each. This multiplex install system has some other components as well, but this PC sits at the center of it.
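Just to illustrate the parallelism (this is a toy sketch, not the actual install system -- the per-router flash script here is a stand-in for whatever really loads the firmware onto one device):

    # Illustration only: flash many routers in parallel so that N routers take
    # roughly 10 minutes total instead of 10 minutes each.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Placeholder addresses for the routers plugged into the install rig.
    ROUTERS = ["192.168.1.%d" % n for n in range(100, 110)]

    def flash(address):
        # "flash_one_router.sh" stands in for the real per-router install step.
        return subprocess.call(["./flash_one_router.sh", address])

    with ThreadPoolExecutor(max_workers=len(ROUTERS)) as pool:
        results = list(pool.map(flash, ROUTERS))

    print("%d of %d routers flashed successfully" % (results.count(0), len(results)))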
It's far more convenient to have a separate PC (especially one with a monitor) for this, since that way I don't have to keep disconnecting/reconnecting the necessary ethernet cords. Also, it only draws power during the brief time it's turned on (when I'm actually doing an install), so the fact it has an inefficient power supply isn't really a problem. It's cheap and it works, which is what matters.
edit: It was 2003, not 2002, when I first got the system.
Have you thought about creating your own router out of a Hawkboard or something similar? That's something I would be very interested in, but I'm working on something else...
Earlier this year, I moved from SVN to git when I found I really needed better branch management. One thing I found is that hosting a git repository is a lot less straightforward than hosting a subversion repository, particularly since fewer bug tracking systems have adequate support for DVCS at this point. Trac seems to be lagging behind Redmine in this regard.
If anyone is interested, I created a library for deploying git hosting (or SVN hosting if you really prefer) on a VPS with Redmine project management: http://github.com/ericpaulbishop/redcloud
Git hosting is provided via gitosis, not gitolite, since there is no Redmine plugin with gitolite support. It runs on an Ubuntu VPS and uses Nginx with Passenger compiled in as the web server. PHP via php-fpm can optionally be compiled in as well, in case other sites on the same box need PHP (as is the case for what I'm doing).
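For a rough idea of the resulting web server setup, here's a generic Nginx + Passenger server block for Redmine with an optional php-fpm handler (hostnames and paths are placeholders; this is a sketch of the general shape, not redcloud's exact output):

    server {
        listen 80;
        server_name projects.example.com;   # placeholder hostname

        root /srv/redmine/public;           # Redmine's public/ directory
        passenger_enabled on;               # Passenger runs the Rails app

        # Optional: hand .php requests to php-fpm for other sites on the box
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
        }
    }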
This is very true, but contrary to what most people think, the real danger isn't nuclear fallout but nuclear winter. Nuclear blasts kick up a lot of dust into the atmosphere, and even a few blasts can drastically alter the climate. A study a few years ago found that a nuclear war in which only 50 Hiroshima-sized bombs were detonated could wreak havoc. See the Wikipedia article on this topic: http://en.wikipedia.org/wiki/Nuclear_winter
The danger of nuclear winter has actually become worse, and it is not dependent on the size of the bombs. The problem is that urban areas are becoming more and more densely built up, and petroleum products (e.g. plastics) are increasingly used as building materials. You don't need a truly huge bomb -- just one big enough to set fire to everything, so that the fires can't be extinguished. The energy released from burning all the plastic and wood and other flammable materials found in a modern city can be even greater than that released by the bomb itself. Further, plastic burns very dirty, and all that smoke gets carried far into the atmosphere. It's the dust, debris, and smoke thrown into the upper atmosphere that does the real damage, blocking the sun and changing the climate.
Err... as chemists, Watson & Crick were light-years behind Franklin. They initially proposed a model of DNA with the phosphates on the inside. When they showed it to Franklin, she (along with other chemists) basically laughed at them, and the faculty ordered them to stop working on the problem because they obviously didn't know what the hell they were doing.
Then they got hold of Franklin's photograph of B-form DNA without her knowledge. Some sources say that Wilkins, the guy who gave Watson & Crick the photograph, stole it; others say he was just trying to be helpful; but they agree that Watson & Crick used Franklin's work without her permission to form their final model... with the phosphates where they belong, on the OUTSIDE, this time.
Franklin was very cautious and methodical: she wanted everything all lined up and double-checked before she published, and she didn't seem to realize just how important what she already had was. (She was after all a chemist, not a biologist). She wanted more data before she said conclusively what the correct model was.
Watson & Crick were rather impulsive, but courageous. Even though they were dead wrong the first time, they kept working and with Franklin's data managed to put together a cohesive, plausible model, which turned out to be right. They raced to publication, and didn't give Franklin any of the credit.
I'm not sure what you want to interpret from that about hard work, except that someone's hard work (in this case, that of Franklin) is usually required for a major success, and that if the hard worker isn't careful, he/she might not be the one to get the credit.
They were dead wrong the first time, yes, but they had the right symmetry. Watson was a much worse chemist, but he was prepared to recognize that symmetry.
While I can understand how git is better for the majority of use cases, I found that it was far less suitable than SVN for what I want to do. I used git for a while (3-6 months) and then switched BACK to SVN. It's true that I didn't experience pain until I started using git -- that's when the pain started.
First, as I said, my use case is highly non-standard. I'm the only developer on my project, so I don't have to worry about collaborators. I do my work on a lot of different computers, so it turns out that (for me) one of the biggest advantages of version control is effectively synchronizing source code between systems, many of which get erased (e.g. a new OS installed) on a regular basis, and all of which have internet access. It's important that I be able to quickly and simply set up a development environment on a new system -- a somewhat unusual requirement, I realize.

Under these circumstances, the primary advantage -- not just of git, but of any DVCS -- becomes a major disadvantage. The whole point of a DVCS is that the ability to commit is separated from the ability to update the central repository (git push), whereas in a centralized VCS they are the same operation. However... I ALWAYS want to push when I commit. If I commit to a local repository on a machine that gets wiped, it does me no good. It needs to land in the central, online repository every time. This can be done with a short, simple shell script, of course, but remember: I need it to be as simple as possible to set up a new development environment. This adds an extra step, and if I neglect it, I can lose a lot of work. With SVN, I simply don't have to worry about it. It's automatic.
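For what it's worth, the "short simple shell script" I mean is nothing more than a post-commit hook that pushes right away. Here's the idea sketched as a tiny Python hook (a one-line shell script does the same job; the remote and branch names are whatever your setup uses):

    #!/usr/bin/env python3
    # Sketch of a .git/hooks/post-commit hook: push every commit straight to
    # the central repository. Save as .git/hooks/post-commit and mark it
    # executable. "origin"/HEAD are placeholders for your remote and branch.
    import subprocess
    import sys

    sys.exit(subprocess.call(["git", "push", "origin", "HEAD"]))

The catch, as I said, is that you have to remember to install the hook on every fresh checkout, which is exactly the extra setup step I'd rather not have.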
Another feature of SVN that I like, which is impossible with a DVCS: revision numbers. With SVN, revision numbers increase sequentially from 1 up to whatever the latest commit is. A DVCS can't do global revision numbers because there is no single ordering of commits -- everyone has their own repository, so the sequence isn't determined until after the commits are made. Being able to refer to r378 is a LOT nicer than referring to an arbitrary hexadecimal identifier. Having commit identifiers that convey information is a huge plus.
For me, these reasons are enough to outweigh the advantages of having access to repository history while offline and the better handling of branching that git offers.
If adding a shell script to your development environment results in pain every time you need to set it up, that says you haven't automated setting up the environment. One great way to automate it is to put all such scripts (surely this would not be the only one?) and other files in a version controlled repository; then you just have one more thing to check out.
(If checking out one more thing results in pain, you should think about automating the checkout too... This line of thinking has led to me keeping everything in git and using a single `mr` command (google will find it) to check out all my repositories when I'm setting up a new account.)
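An .mrconfig is just a list of repositories and how to check them out, something like this (the paths and URLs are made up for illustration):

    [src/dotfiles]
    checkout = git clone git://example.org/dotfiles.git dotfiles

    [src/scripts]
    checkout = git clone git://example.org/scripts.git scripts

Then `mr checkout` from the top-level directory fetches everything in one go, and `mr update` pulls them all later.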
Revision numbers are not impossible with a DVCS; bzr has repo-local revision numbers. But once a number gets longer than 3 digits I copy and paste it anyway, and if I'm already pasting, the extra length of git's SHA-1s doesn't matter.