In Brazil there is a saying, "água mole em pedra dura, tanto bate até que fura", which means something like "Soft water on hard rock will eventually poke holes in it", but it sounds way less folksy than the original.
My apologies, but I can't resist playing with national stereotypes:
German vs. Brazilian (Portuguese)
I think the best English approximation of the German version is:
Steady water drills the stone.
First we see that the German at once describes and commands the action: "drills". There is no time measure here; the point is that the damn rock gets drilled, and that's that.
In Brazilian Portuguese, the talk is of "eventually". As in: sure, it's inevitable, but the stone and the water will have a lot of time together; they will change over time.
Next we can observe that while the Portuguese/Brazilian version provides details, like the many holes, the German is light on flowery detail: one hole or many holes is not relevant to the pain point, which is rock -> drilled rock.
Lastly, we can see that the German only implies the stone is hard; hard is the default nature of all things in Germany, especially stones.
The Portuguese/Brazilian on the other hand specifically qualifies the stone as hard, presumably because Brazil is filled with soft stones, gently dancing under the feet of girls in Ipanema.
Made me smile... or would have, if I wasn't a German Very Serious Person. ;)
I don't think "drill" is such a good translation though. Höhlen comes from "Höhle", meaning "cave". Maybe a better translation:
Steady water hollows the stone.
And you've got to admit that the "German assumption of hardness" is actually pretty accurate when it comes to stones! (Except those in Brazil, presumably.)
I don't think that would work. When it absorbs light, it converts that energy into heat, then re-radiates that heat. That re-radiated heat will show up on infrared motion sensors.
I see one way you might be able to work around this: if you made a suit out of this stuff, and were somehow able to rig it so that all of the heat is dumped inside the suit, then it would be "invisible" to IR sensors. However, it would get really uncomfortable inside that suit really fast.
Also, even if it somehow magically absorbed all of the IR without re-radiating, it would still be possible to build IR sensors that detect it: if it's perfectly black in IR, it will show up as a "shadow" against a warmer background. To be invisible to IR, you need a suit that matches its temperature to the surroundings.
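To put a rough number on that "shadow" point, here is a quick back-of-the-envelope sketch using the Stefan-Boltzmann law; the temperatures are just illustrative guesses of mine, not measurements of this material:
[code]
# Compare what an IR sensor sees from a "perfectly black" suit vs. the wall
# behind it, using P = epsilon * sigma * A * T^4. Numbers are illustrative.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(temp_kelvin, area_m2=1.0, emissivity=1.0):
    """Total thermal power radiated by a surface (W)."""
    return emissivity * SIGMA * area_m2 * temp_kelvin ** 4

suit = radiated_power(305.0)   # suit surface near skin temperature, ~32 C
wall = radiated_power(293.0)   # room-temperature background, ~20 C

print(f"suit: {suit:.0f} W/m^2, wall: {wall:.0f} W/m^2")
print(f"contrast: {suit - wall:.0f} W/m^2")  # nonzero -> the sensor can see you
[/code]
Unless the suit's surface sits at exactly the background temperature, that contrast never goes to zero.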
[quote]if you made a suit out of this stuff, and were somehow able to rig it so that all of the heat is dumped inside the suit, then it would be "invisible" to IR sensors. However, it would get really uncomfortable inside that suit really fast.[/quote]
That gave me an idea... if it were possible to make a sphere that had this coating on the outside and dumped the heat to the inside, you'd have a great heat source for a Stirling engine. This could work exceptionally well in space if you oriented the light-absorbing sphere so that it shaded the cool side of your engine, thus maximizing the temperature difference.
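For what it's worth, the gain from shading the cool side is easy to eyeball with the Carnot limit; a tiny sketch, with made-up orbital temperatures of my own:
[code]
# The Carnot limit bounds any heat engine, Stirling included.
# Temperatures below are illustrative guesses, not measurements.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on efficiency for a heat engine between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# Say the sunlit absorber reaches ~390 K; a well-shaded radiator might sit near ~230 K.
unshaded = carnot_efficiency(390.0, 300.0)  # cold side still seeing some sunlight
shaded   = carnot_efficiency(390.0, 230.0)  # cold side shaded by the sphere

print(f"unshaded cold side: {unshaded:.0%}")  # ~23%
print(f"shaded cold side:   {shaded:.0%}")    # ~41%
[/code]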
Infrared is not all the same. For one thing, the frequency depends on the temperature.
But more importantly, it absorbs infrared light and converts it to heat. Heat and light are not the same thing! True, hot things emit light, but that doesn't make them the same thing.
Most practically, you can cool the device once the energy is heat, or just dilute it in a heatsink.
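To illustrate the "frequency depends on the temperature" bit, Wien's displacement law gives the peak wavelength directly; a tiny sketch with example temperatures I picked myself:
[code]
# Wien's displacement law: lambda_peak = b / T.
# The constant is standard physics; the scenario temperatures are just examples.

WIEN_B = 2.897771955e-3  # Wien displacement constant, m * K

def peak_wavelength_um(temp_kelvin):
    """Wavelength (micrometres) at which blackbody emission peaks."""
    return WIEN_B / temp_kelvin * 1e6

for label, t in [("human body", 310.0), ("boiling water", 373.0), ("soldering iron", 600.0)]:
    print(f"{label:>14} ({t:.0f} K): peak around {peak_wavelength_um(t):.1f} um")
# ~9.3 um, ~7.8 um, ~4.8 um -- all infrared, but at quite different wavelengths.
[/code]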
On that note, it sounds like you could make a really good heat exchanger with this stuff. I wonder if this would make for improvements in fields like refrigeration and energy production.
Indeed. In fact, if this low albedo extends over such a wide range of wavelengths, this material might be as close to the theoretical concept of a "blackbody" as we have seen.
I'd like to ask the Couch luminaries that hang around here about the status of the inclusion of Google's Snappy compression library and the rewrite of the view engine. I'm aware that we are talking about a Cloudant-specific solution here, but how much of an impact would it have in a scenario such as the one described in this talk?
With CouchDB, you front-load all of your disappointment. In exchange, everything that CouchDB can do has compelling big-O performance. For example, all queries finish in logarithmic time, including one-to-many, one-to-one, and merge joins. Map-reduce is not a job you run; it is a living data set that always exists and always reflects the latest changes to your data. (Updating a map-reduce result takes time linear in the number of updates, if I recall correctly.)
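To make the "living data set" point concrete, here is a minimal sketch against a plain local CouchDB; the database and view names are my own toy examples, not anything from Cloudant or the talk:
[code]
# Define a view once; CouchDB materializes it as a B-tree and keeps it up to
# date incrementally as documents change. Each query is then a read against
# that index. Assumes a local CouchDB at localhost:5984 and a db called "posts"
# (both names are made up for this example).
import requests

COUCH = "http://localhost:5984"
requests.put(f"{COUCH}/posts")  # create the database (a 412 just means it already exists)

design = {
    "views": {
        "by_tag": {
            "map": "function (doc) { (doc.tags || []).forEach(function (t) { emit(t, 1); }); }",
            "reduce": "_count",
        }
    }
}
requests.put(f"{COUCH}/posts/_design/stats", json=design)

requests.post(f"{COUCH}/posts", json={"title": "hello", "tags": ["couchdb", "nosql"]})

# Each read is a logarithmic-time B-tree lookup over the materialized index;
# only documents changed since the last read get re-mapped.
counts = requests.get(f"{COUCH}/posts/_design/stats/_view/by_tag",
                      params={"group": "true"}).json()
print(counts)  # {"rows": [{"key": "couchdb", "value": 1}, {"key": "nosql", "value": 1}]}
[/code]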
Plus, the BigCouch builds allow you to specify your redundancy needs. The preceding paragraph still holds true. Nothing has changed. You just get to throw hardware at the problem to guard against machine failures.
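As a hedged illustration of what "specify your redundancy needs" looks like in a BigCouch-style build (the exact knobs vary by version, so treat the parameters below as an assumption and check your build's docs):
[code]
# Sketch only: n copies spread over q shards at database creation, plus
# per-request read/write quorums. Host and database names are made up.
import requests

COUCH = "http://localhost:5984"

# Keep 3 copies of the data, split across 8 shards.
requests.put(f"{COUCH}/orders", params={"n": "3", "q": "8"})

# Write waits for 2 of the 3 copies; read asks 2 copies to agree.
requests.put(f"{COUCH}/orders/order-1", params={"w": "2"}, json={"total": 42})
doc = requests.get(f"{COUCH}/orders/order-1", params={"r": "2"}).json()
print(doc)
[/code]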
CouchDB is slow. Its VM is pokey. Its disk format is bulky. Its protocol is bloated.
CouchDB is fast. Everything that you can do, you can do in logarithmic time.
CouchDB is neither slow nor fast, but predictable. Fun fact: the entire CouchDB Erlang code base is almost the same size as the NodeJS standard library (20k apples vs. 15k oranges).
To answer your question, Snappy compression and the view optimizations will be a welcome boost for the other speed question: speed of development, time to market. If you think the compile step is a time sink, rebuilding an index over all of your data is just untenable. So the optimizations will improve the day-to-day experience, but they will not change CouchDB's fundamental value proposition.
And it is an easily trollable target as well. Had he tried to attack durability on just about any other database, it probably would not have worked as well as it did.
Whether it's a troll attempt or not remains to be seen, but I completely agree with this specific, albeit very generic, part of the text.
> Databases must be right, or as-right-as-possible, b/c database mistakes are so much more severe than almost every other variation of mistake. Not only does it have the largest impact on uptime, performance, expense, and value (the inherit value of the data), but data has inertia. Migrating TBs of data on-the-fly is a massive undertaking compared to changing drcses or fixing the average logic error in your code. Recovering TBs of data while down, limited by what spindles can do for you, is a helpless feeling.
Yes, I am a troll, and things have gotten a little out of hand.
Just because a story was very successful at fishing for up-votes doesn't mean it is true; people around here need to be a lot more sceptical.
And I think everyone who truly pays attention will know by now that MongoDB is the next MySQL.
Whether you are the original poster or not, you're not a troll, you're a sociopath emboldened by anonymity.
Cloak yourself in some idealistic mission if it makes you feel good, but your mission isn't to make the point that "people around here need to be a lot more skeptical". You're a sociopath who enjoys kicking a hornet's nest just to watch the reaction.
I feel like a dick, but I have got to ask: is it Disney? Disney is on both the Couchbase and 10gen sites. Both sites mention that they are using their NoSQL solutions to power their social and online games. Couchbase powers Zynga and can arguably be considered the leader in this specific market. Am I close?
One of the things that I love about Couch is that the standard way to shut down the process is simply doing a kill -9 on the server process. No data loss. No worries. Want to back up your data? rsync it and be done with it.
Couch may have its warts, but it is damn reliable.
I feel that Couch requires too much server-side programming, which can be off-putting sometimes. If anyone wants to make some money, I'd suggest putting a server on top of a Couch cluster that accepts Mongo queries (a rough sketch of the idea follows the list below).
I mean, how hard can it be to
1) Manage some indexes,
2) Keep some metadata around and
3) Build some half-assed single-index query planner?
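To be clear about how hand-wavy that is, here is a hypothetical sketch of such a shim; every name, view, and shortcut in it is made up, and a real one would also need the Mongo wire protocol, real index maintenance, and so on:
[code]
# Toy "Mongo API in front of Couch" shim: a tiny bit of metadata mapping
# indexed fields to CouchDB views, and a planner that turns a simple
# equality query into a single view lookup. Everything here is hypothetical.
import json
import requests

COUCH = "http://localhost:5984/mydb"  # made-up database

# 1) "Manage some indexes": one CouchDB view per indexed field.
# 2) "Keep some metadata around": which fields have a view backing them.
INDEXED_FIELDS = {"status": "by_status", "user_id": "by_user_id"}

def find(query: dict) -> list:
    """3) Half-assed single-index planner: pick the first indexed field
    in the query, use its view, and filter the rest in memory."""
    field = next((f for f in query if f in INDEXED_FIELDS), None)
    if field is None:
        raise NotImplementedError("no usable index; would need a full scan")

    view = INDEXED_FIELDS[field]
    resp = requests.get(f"{COUCH}/_design/shim/_view/{view}",
                        params={"key": json.dumps(query[field]),
                                "include_docs": "true"})
    docs = [row["doc"] for row in resp.json()["rows"]]
    # Apply any remaining (non-indexed) conditions client-side.
    rest = {k: v for k, v in query.items() if k != field}
    return [d for d in docs if all(d.get(k) == v for k, v in rest.items())]

print(find({"status": "open", "priority": 1}))
[/code]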
Couch is already a solid piece of technology. It just needs a better API to "sit" on top of it, kinda like what Membase is doing now.
edit: or on top of Riak, Cassandra, PostgreSQL, etc. ... on the API side, Mongo has clearly won.