A cashless society is a utopia, pushed by liberal-arts academics and humanities majors who are disconnected from reality. Sweden is famous for having far too many of those victims of the Plato virus.
Cash is a major innovation in the evolution of societies, and it is not for some weak-minded, progress-cosplaying idiots to abolish it in order to be popular and re-elected.
Let them visit some overpopulated Asian country to see why cash is absolutely essential.
Also, look no further than Bitcoin: anything that disrupts transactions, or even just increases transaction time, will be a disaster.
That seems like a very high error rate: about 10 million errors in the three-gigabase genome, and 100 thousand errors in the 30-megabase exome (the protein-coding regions). That might be an acceptable rate for population-level analysis if the errors are sufficiently uncorrelated, but I wouldn't want to make decisions based on it for personalized medicine. For comparison, a rough estimate is that an individual human genome has 2-3 million SNPs [2].
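For what it's worth, a quick sanity check of the arithmetic (the figures are the ones quoted above, not anything from the service's published specs):

    # Per-base error rate implied by the figures above.
    genome_size = 3_000_000_000    # ~3 gigabases
    exome_size = 30_000_000        # ~30 megabases
    errors_genome = 10_000_000

    rate = errors_genome / genome_size
    print(f"{rate:.2%} per base")                      # ~0.33%
    print(f"~{rate * exome_size:,.0f} exome errors")   # ~100,000

So both numbers are consistent with roughly one error per 300 bases.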
I thought you could do better than that with 30x coverage, so I might be misinterpreting them somehow. Or maybe they're using an unconventional sequencing technology that is cheaper but less accurate.
There's no simple answer to your question, as it depends on many things: sequencing technology, library prep, and coverage, to name a few.
Generally, the error rate is not far from zero when aligning short reads to a high-quality reference genome. Provided there's sufficient coverage, and the majority of reads covering a particular nucleotide don't have an error at that position, the correct answer will be given. Errors creep in due to things like systematic errors in library prep (such as a PCR error), and very low coverage over particular loci due to unusual AT/GC content, which makes errors harder to correct for. Repetitive regions can cause issues for short-read alignment too, but coding regions generally aren't that repetitive.
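A minimal sketch of that majority-vote intuition, in Python (illustrative only; real variant callers weight votes by base quality and do far more):

    from collections import Counter

    def consensus_base(pileup):
        # Naive consensus call: the most common base among the reads
        # covering one position wins; real callers also use base quality.
        counts = Counter(pileup)
        base, votes = counts.most_common(1)[0]
        return base, votes / len(pileup)

    # Five reads cover this position; one carries a sequencing error ('T'),
    # but the majority vote still recovers the correct base.
    print(consensus_base(["A", "A", "T", "A", "A"]))   # ('A', 0.8)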
$200 is very cheap for WGS; I'd guess it would be at the low end of the accuracy range, as they presumably can't be sequencing to great depth.
In the genetics world, when you say the words "gold standard", that usually translates to "Sanger sequencing", which is a high-accuracy method for sequencing a small section of DNA, like a single gene. I don't think your statement is very helpful in that context.
Most of the world's whole genome sequencing is done using the Illumina platform. This service is using the BGI platform, which is arguably higher quality than Illumina. Our lab has data showing the error rate with BGI is about 1/6 the error rate of Illumina.
Yes, there are some even better sequencing technologies out there, such as PacBio, which provides longer reads capable of sequencing slightly more of the genome, and the error rates are constantly improving. However, these technologies are much more expensive.
No, because achieving animal-like, or even insect-like, performance is far too complex, and neuroscience itself is still poorly understood. How do bees know how to dance? There are no supervised-learning practices for bees, no schools. How does a bird know how to build a nest? How does a newborn goat know how to walk?
They should literally stop piling up crap like Kubernetes and look back at what 9P and Plan9 were, and why. Especially since they have almost the whole Plan9 team employed.
Making analogues of J2EE application servers for native code is the wrong way to go. It is an affront to intelligence.
The classic ed and its visual version, vi, are genuinely good enough if done right. The central concept is simple regexps bound to one-key commands, and the fact that these commands compose (e.g. the d operator composes with any motion, so dw deletes a word and d/foo deletes up to the next match of the regexp foo). This is why vi is what it is.
What is interesting is that the losses of those who got in above $10k could be approximated by analyzing daily trading volumes, and the sum of those losses is on track to far exceed the dot-com losses.
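To be clear, that kind of estimate is crude; here is a toy sketch with entirely made-up numbers (a real analysis would need actual exchange data and would have to account for coins resold on the way down):

    # Toy upper-bound estimate of paper losses, with hypothetical numbers.
    def estimated_loss(daily_data, current_price):
        # Naively assume each day's traded volume was bought at that
        # day's average price and is still held; sum the paper loss on
        # purchases made above the current price.
        loss = 0.0
        for avg_price, volume_btc in daily_data:
            if avg_price > current_price:
                loss += (avg_price - current_price) * volume_btc
        return loss

    # Hypothetical daily (average USD price, BTC volume) pairs.
    sample = [(12_000, 300_000), (15_000, 250_000), (18_000, 200_000)]
    print(f"${estimated_loss(sample, 6_000):,.0f}")   # $6,450,000,000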
Such a huge loss of real, usually borrowed, money must affect the economy, at least in the US and Asia.
The legacy of Bitcoin should be studied to better understand how memes, or semantic viruses, spread through social groups and affect our inner representations, our models of the world, which are trained by the endless screaming streams of bullshit^W information we are forced to process.
There is nothing more fascinating than watching how a global Ponzi scheme, based on a few memes about poorly understood abstract concepts, unfolds.
This is what globalization is all about. Someone, let's say the US and the EU, does very costly, highly skilled, labour-intensive R&D and then sells the results all around the world (just look at what is going on in genetic engineering and pharma, where billions upon billions are paid out in salaries and other expenses every year, or at Uber, which develops its own AI software while losing billions, etc.).
Of course, the Soviets and the Chinese are unhappy, because they proclaim themselves to be no less "great" and capable, while in fact they have nothing even vaguely comparable to the US R&D machine, fueled by top talent and endless investment-bank money from all over the world (the Saudis, SoftBank, the Norwegian sovereign fund, etc.).
And, of course, being the R&D hub of the world is an absolute win in the long run.
Tiny Switzerland being the world leader in industrial-robotics R&D is another great example. Meanwhile, the Soviets and the Chinese have nothing but propaganda, false claims, and unsupported imperial ambitions.
Is the complaint, then, that China gets to profit from US-funded R&D?
Because genuinely, the human race as a whole is benefiting from that R&D — and also from China's appropriation of it — by getting cheaply produced implementations!
This is definitely a new development in philosophy and even in physics. Joe Armstrong would really appreciate this.
I can almost see it vividly: the beard, the zero-prescription designer glasses, the MacBook Pro with many stickers, and a vegan smoothie. Time is partial, dude, you know.
ROTFLBTC!
Seriously, I always tell people that there is no global/common now() in a distributed system (and ours is one, sigh). Leslie Lamport explained this back in 1978 [1]. Any design that assumes one is doomed to fail.
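A minimal sketch of Lamport's logical clocks, which give you a causal ordering without any global now() (illustrative, not production code):

    class LamportClock:
        # Per Lamport (1978): counters advance on local events and are
        # reconciled on message receipt, so timestamps respect causality
        # even though no global time exists.
        def __init__(self):
            self.time = 0

        def tick(self):
            # A local event, including sending a message.
            self.time += 1
            return self.time

        def receive(self, msg_time):
            # Jump past the sender's timestamp: effect after cause.
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t = a.tick()           # a sends at logical time 1
    print(b.receive(t))    # 2 -- b's receive is ordered after a's send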
Not only that. The lazy functional languages crowd has arrived at a similar conclusion: the only way to establish an order of evaluation in a lazy language is via the nesting of function calls, which is an implementation of the causality principle, if you think about it for a while. No wonder monads and similar constructs desugar into nested function calls.
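Here's a hedged illustration in Python (using the identity monad's bind, a deliberate simplification; Haskell's do-notation desugars into chains of >>= in essentially this shape):

    # `bind` feeds a value into the next step; the next step cannot run
    # before its input exists, so the nesting itself imposes the order.
    def bind(value, fn):
        return fn(value)

    # Roughly: do { x <- "42"; y <- int(x); return y + 1 }
    result = bind("42", lambda x:
             bind(int(x), lambda y:
                  y + 1))
    print(result)   # 43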
"Time" is an abstract concept superimposed on reality, a product of the mind conditioned via sequential sense organs. It is of the same nature as the concepts of "god" and "divinity". Smart people are trying to rule it out from their computations.