The second slide says "If 8 out of 10 (or 80%) businesses fail after the first year, the remaining 2 (or 20%) would probably not survive to see their fifth year" -
how does this follow? The 'myth' only talks about the first year of the business, and doesn't say that 80% of businesses fail every year.
Indeed, this is a complete straw-man argument. Obviously, the longer a business has been around, the less likely it is to fail -- which is exactly what the article itself points out on slide 9.
It assumes that an "80%/year" failure rate implies something like exponential decay. In this model, businesses behave something like decaying particles; the business environment might be treacherous enough that failures are due to factors that are essentially random and unmanageable.
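Under that reading (a constant 80%-per-year hazard, independent of a business's age), survival just compounds multiplicatively, which is roughly the arithmetic the slide is doing. A minimal sketch of that model - not a claim about real failure rates:

```python
# Constant-hazard ("particle decay") reading of the myth: if 80% of
# businesses fail each year regardless of age, the fraction still
# operating after n years is 0.2 ** n.
for year in range(1, 6):
    print(f"after year {year}: {0.2 ** year:.4%} still operating")
# year 1: 20.0000% ... year 5: 0.0320% - even stronger than the slide's
# claim that the remaining 20% "probably" won't see their fifth year.
```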
While others (and you) have pointed to the deficiencies of this model, or at least to its assumptions, it is not fallacious, per se. The presence of other models--for example, that the first year is the most treacherous because the new owner lacks experience--does not make the current model, nor conclusions drawn from it, fallacious. Indeed, the implied size of the population, "all businesses," is a warrant for some pretty strong claims. Additionally, treating a saying as data, rather than respecting the context in which and the modality with which it is offered, is probably fallacious in itself.
So, quoth the Bard, "If you're wondering how he eats and breathes / and other science facts / la la la / repeat to yourself, 'It's just a show, you should really just relax!'" :D
The statement "Half of all Carbon 14 decays in 5,730 years" does not say anything about the decay in years after that.
Half-lives, on the other hand, are defined as a recurring phenomenon, so a half-life _does_ say something about future years.
Language does not work like code, yes, but that doesn't mean it can't have a clear and defined meaning. If it couldn't, language would be useless as a communication method.
Exactly - your first sentence isn't saying anything at all about the next 5,730 years. Without outside knowledge of exponential decay, there's zero reason to infer anything about the next 5,730.
If you instead say "the half-life of Carbon 14 is 5,730 years", then you would be saying something about the next 5,730 years as well.
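For what it's worth, here's the difference in miniature: the half-life phrasing fixes the remaining fraction for every later year, not just the first interval. A quick sketch:

```python
# "Half-life of 5,730 years" pins down every later year: the fraction of
# the original Carbon 14 remaining after t years is 0.5 ** (t / 5730).
half_life = 5730
for t in (5730, 2 * 5730, 3 * 5730):
    print(f"after {t:>6} years: {0.5 ** (t / half_life):.1%} remains")
# 50.0%, 25.0%, 12.5% - the one-off statement about the first 5,730 years,
# taken on its own, doesn't license the later two numbers.
```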
The English of "8 out of 10 businesses fail in their first year." is completely unambiguous.
Ok, I'm confused by 'environment variable' vs. files. How does one set an environment variable without putting it in a file on the particular server? Or by 'file' in this article (and the 12 Factor one) do they mean a file that's in source control?
The article means 'file in source control' - the specific context is that the author is one of the co-founders of Heroku where there is a whole separate (really nice) system for handling 'config variables' as part of your app deployments separate from source control.
You can also run foreman if you're not on Heroku. Put your environment variables in a `.env` file. The environment variables get sourced only into the environment of that process, not the __whole__ system.
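A minimal sketch of that workflow (the variable names below are just illustrative): the `.env` file sits next to your Procfile, stays out of source control, and foreman exports its contents into the environment of the processes it starts, so application code only ever reads environment variables:

```python
# app.py - illustrative only; DATABASE_URL / SECRET_KEY are made-up names.
#
# .env (kept out of git, e.g. via .gitignore):
#   DATABASE_URL=postgres://localhost/myapp_dev
#   SECRET_KEY=dev-only-secret
#
# `foreman start` reads .env and exports those lines into this process's
# environment only - nothing is set system-wide.
import os

database_url = os.environ["DATABASE_URL"]       # fail loudly if missing
secret_key = os.environ.get("SECRET_KEY", "")   # or fall back to a default
```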
I found the Elasticsearch tutorials on YouTube by Clinton Gormley pretty helpful in understanding the concepts. Also came across this book (which I haven't really read, so I don't know how good it is, just posting it here):
http://exploringelasticsearch.com/
"A key reason that John permanently left Zenimax in August of 2013 was that Zenimax prevented John from working on VR, and stopped investing in VR games across the company."
What kind of morons stop John Carmack from working on something?
"The reason for this is that transactions just serialize the execution, they don't guarantee any atomicity of independent row updates. After the delete happens the second transaction gets a chance to run and the update will fail because it no longer sees a row"
Umm - I thought everything in a transaction could be treated as atomic with respect to other transactions... i.e., they don't see "in between" states?
"…it is possible for an updating command to see an inconsistent snapshot: it can see the effects of concurrent updating commands on the same rows it is trying to update, but it does not see effects of those commands on other rows in the database."
That only applies to the read committed isolation level. It seems to me that at least serializable should not show inconsistent state in this situation?
What does that have to do with serializable transactions?
"The Serializable isolation level provides the strictest transaction isolation. This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently."
If you don't get the right behavior with serializable transactions, as you seem to be claiming, it seems to me that serializable transactions should be considered buggy. In this case they do not provide the guarantees they are claimed to provide.
Yes, it is true for all databases that do not take table-level locks. In PostgreSQL you can increase the isolation level to serializable and get an error when this happens instead of incorrect query results; not sure about MySQL.
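Here's a minimal sketch of that, assuming psycopg2 and a scratch database with an `items(id int primary key, qty int)` table: one transaction deletes the row, the other tries to update it under SERIALIZABLE, and PostgreSQL aborts the update with a serialization failure rather than returning a silently inconsistent result:

```python
# Delete-vs-update race under SERIALIZABLE isolation (PostgreSQL + psycopg2).
import psycopg2
from psycopg2 import errors, extensions

a = psycopg2.connect(dbname="test")   # transaction A: wants to update the row
b = psycopg2.connect(dbname="test")   # transaction B: deletes the row
for conn in (a, b):
    conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_SERIALIZABLE)

cur_a, cur_b = a.cursor(), b.cursor()
cur_a.execute("SELECT qty FROM items WHERE id = 1")   # A takes its snapshot
cur_b.execute("DELETE FROM items WHERE id = 1")       # B deletes the row...
b.commit()                                            # ...and commits first

try:
    cur_a.execute("UPDATE items SET qty = qty + 1 WHERE id = 1")
    a.commit()
except errors.SerializationFailure:
    a.rollback()  # "could not serialize access due to concurrent update";
                  # under READ COMMITTED this would silently update 0 rows.
```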
It's actually that isolation that causes the problem. The issue is that you have updates which don't see the existing inserted (and uncommitted) row, and so the resulting inserts collide.
I feel you, all my friends "love" traveling too, more like "love talking about traveling" tho.
I always think: "Do you know the implications of traveling? Not sleeping in your own bed? Bugs? Most people getting sick from trying odd foods? (which is something I'm always down for)"
Personally I think the "fluid" responsive design for article snippets is a mistake; but then I hate multi-column feed designs, so that may be just a personal preference.
Apple got it right with the iPhone menu structure, which was inspired by the iPod (maybe they nailed it because it was not card-based). There, columns always maintain the same positional relation (there are menus to the "left" and sub-menus to the "right"), so you can always remember by muscle-memory where a particular item is located relative to the others. Re-flowing similar items breaks those relations.
Responsive pages make sense when the side columns are used for side content (e.g. "aside" tags, headers and footers, navigation...) - there, placing the sub-content above or below the main content to show it on a narrow screen is not a problem, since the moved card was subordinate to the main article anyway.
Not yet. To be honest, I've just decided to focus on backend features, put a locked height on items, and get back to this in the "let's iterate on design details before releasing" phase.
But those plugins are a perfect fit that I didn't know about, thanks!
It's still something that has to be done through JavaScript, though. So I guess we probably have to consider a cards canvas a feature rather than a design element (like calendars, graphs, etc.).
"2 x $30,000 sales sitting at a probability of 10% to close, 1 x $500 sale sitting at a probability of 75% to close", and so on.
How does one estimate these probabilities?
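(For context on the quoted figures: per-deal probabilities like these are typically rough, stage-based guesses or historical close rates, and the reason they matter is that they get combined into a probability-weighted pipeline value. A quick sketch with the article's numbers:)

```python
# Probability-weighted ("expected") pipeline value from the quoted figures.
deals = [
    (30_000, 0.10),  # 2 x $30,000 at 10% to close
    (30_000, 0.10),
    (500, 0.75),     # 1 x $500 at 75% to close
]
expected = sum(amount * p for amount, p in deals)
print(f"expected pipeline value: ${expected:,.2f}")  # $6,375.00
```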