In classical computing, maybe. And with phase-space analysis, millions or even tens of millions of events can be visualized to look for emergent attractors that would indicate an underlying pattern.
I sincerely doubt that anything is truly random; there has to be some type of cosmic drummer behind the scenes biasing certain events. Case in point: the conflict between molecular Darwinism and the numbers associated with the human genome. Approximately one billion nucleotide bases, with four possible bases at each location, gives a combinatorial explosion of 4^1,000,000,000, a number waaaaaay larger than the 10^120 universal complexity limit, the ~10^80 stable elementary particles in the universe, and the generally accepted age of the universe (~10^40). The monkeys-typing-Hamlet thing is computationally intractable.
The main idea of "molecular Darwinism" is that the initial life form had a very short DNA [1]. As species evolved, that short DNA evolved and got longer [2].
* For example, some genes are repeated: a bad copy may duplicate a gene and the DNA gets longer. Viruses may cause some duplications too.
* Some genes are almost repeated; each copy has a slightly different function, so each one has a variant that is better for its own function. The idea is that an error in copying made two copies of the original gene and then each copy slowly evolved in a different direction.
* Some parts of the DNA are repetitions of the same short pattern many, many times. IIRC these appear near the center and the ends of the chromosomes, and are useful for structural reasons, not to encode information. The DNA can extend the ends easily because it's just a repetition of the pattern.
* Some parts are just junk DNA that is not useful, but there is no mechanism to detect that it is junk and can be eliminated, so it is copied from individual to individual, and from species to species, with random changes. (Some of this junk may turn out to be useful.)
So the idea is that the initial length was not 1000000000, but that the length increased with time.
Your calculation does not model the theory of "molecular Darwinism". It is about the probability that, if a "human" miraculously appeared out of thin air with a completely random genome, it would happen to get a correct one [3].
[1] Or perhaps RNA, or perhaps a few independent strands of RNA that cooperate. The initial steps are far from settled.
[2] It's not strictly increasing; the length may increase and decrease many times.
[3] Each person has a different genome, so there is no single perfect genome. The correct calculation is not 1/4^1,000,000,000 but some-number/4^1,000,000,000. It's difficult to calculate the number of different genomes that are good enough to be human, but it's much, much smaller than 4^1,000,000,000. So let's ignore this part.
Again, and irrespective of how much genome information was there initially and what it eventually became, you are still talking about a final optimization problem of size 4^1,000,000,000. Even one tenth of the human genome is an unfathomably large space to randomly iterate over, given the generally accepted statistics cited above. The math behind stochastic molecular Darwinism doesn't work out at all.
Tell me: why would evolution require all of those combinations to be tried?
Edit: Microsoft Windows 10 is about 9 GB. It would be impossible to try 256^9,000,000,000 different programs. Yet Windows exists, and most of us believe it's contained in those 9 GB.
If you just want to talk about how computationally tractable it is, the math is trivial. Optimize one base pair at a time. Now it's an O(4) problem repeated over a billion generations, most of them in bacteria, where a generation is measured in minutes.
In practice the changes happening in each generation are all sorts of different rearrangements, but that's different from proving the basic and obvious fact that when you have multiple steps you don't have to spontaneously create the entire solution at once.
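For anyone who wants to see how fast the incremental version is, here's a minimal sketch in Python (the toy fitness function and fixed made-up target are illustrations of cumulative selection, not a model of real biology):

    import random

    BASES = "ACGT"
    LENGTH = 200  # 4**200 is already ~10^120, the "universal complexity limit" cited upthread
    TARGET = "".join(random.choice(BASES) for _ in range(LENGTH))  # made-up goal sequence

    def fitness(genome):
        # toy fitness: count positions matching the target (not real biology)
        return sum(a == b for a, b in zip(genome, TARGET))

    genome = "".join(random.choice(BASES) for _ in range(LENGTH))
    current, steps = fitness(genome), 0
    while current < LENGTH:
        pos = random.randrange(LENGTH)
        mutant = genome[:pos] + random.choice(BASES) + genome[pos + 1:]
        m_fit = fitness(mutant)
        if m_fit >= current:            # selection keeps neutral-or-better copies
            genome, current = mutant, m_fit
        steps += 1

    print(steps)  # a few thousand steps, because accepted changes are never re-litigated

The search space is ~10^120, but the loop finishes in thousands of mutations because each kept change is locked in by selection rather than re-randomized.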
Bogosort will never ever sort a deck of cards. Yet it takes mere minutes to sort a deck of cards with only the most basic of greater/less comparisons. Even if your comparisons are randomized, and only give you the right answer 60% of the time, you can still end up with a sorted-enough deck quite rapidly.
(Why sorted-enough? Remember that reaching 'human' doesn't require any exact setup of genes, every single person has a different genome. It just has to get into a certain range.)
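A toy version of the noisy-comparison point, if anyone wants to watch it converge (the 60% accuracy and the pass count are arbitrary choices for illustration):

    import random

    def noisy_less(a, b, p_correct=0.6):
        # report the true comparison 60% of the time, the opposite 40%
        return (a < b) if random.random() < p_correct else not (a < b)

    def inversions(deck):
        return sum(deck[i] > deck[j] for i in range(len(deck)) for j in range(i + 1, len(deck)))

    deck = list(range(52))
    random.shuffle(deck)
    for _ in range(20_000):                      # many cheap adjacent-compare passes
        i = random.randrange(len(deck) - 1)
        if noisy_less(deck[i + 1], deck[i]):     # "looks out of order" -> swap
            deck[i], deck[i + 1] = deck[i + 1], deck[i]

    print(inversions(deck))  # far fewer inversions than the ~663 expected for a random shuffle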
There's no random iteration, it's more like stochastic gradient descent with noise. Your number isn't correct even if only because of codon degeneracy.
>>> you are still talking about a final optimization problem of 4^1,000,000,000.
There is no final optimization step that analyze the 4^1,000,000,000 possibilities. We are not the best possible human-like creature with 1,000,000,000 pairs of bases.
> method of gradient descent
Do you know the method of gradient descent? Nice, that makes the problem easier to explain. In gradient descent you don't analyze all the possible configurations, and there is no guarantee that it finds the absolute minimum. It usually finds a local minimum and you get trapped there.
For this method you need to calculate the derivatives, analytically or numerically, and, looking at the derivatives at an initial point, you select the direction to move for the next iteration.
An alternative method is to pick a few (10? 100?) random points near your current point, calculate the function at each of them, and select the one with the minimum value for the next iteration. It's not as efficient as gradient descent, but just by chance about half of the random points should give a smaller value (unless you are too close to the minimum, or the function does something strange).
So even this randomized method should find the "nearest" local minimum.
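A minimal sketch of that randomized variant on a made-up smooth function (the function, step size, and candidate count are arbitrary; the point is that it's derivative-free, yet it still walks downhill):

    import random

    def f(x, y):
        # made-up smooth function with its minimum at (3, -2)
        return (x - 3) ** 2 + (y + 2) ** 2

    x, y = 10.0, 10.0
    for _ in range(200):
        # try a handful of random nearby points, keep the best one if it improves
        candidates = [(x + random.uniform(-0.5, 0.5), y + random.uniform(-0.5, 0.5))
                      for _ in range(10)]
        bx, by = min(candidates, key=lambda p: f(*p))
        if f(bx, by) < f(x, y):
            x, y = bx, by

    print(round(x, 2), round(y, 2))  # ends up near (3, -2) with no derivatives at all

Evolution, of course, never runs this loop explicitly; it's just the nearest familiar analogy.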
The problem with DNA is that it is a discrete problem, and the function is weird: a small change can be fatal or irrelevant. So there is no smooth function where you can apply gradient descent, but you can still try picking random points and selecting one with a smaller value.
There is no simulation that picks the random points and calculates a fitness function. The real process is in the offspring: the copies of the DNA have mutations, and some mutations kill the individual, some do nothing, and some increase the chance to survive and reproduce.
Would you please not be a jerk on HN, no matter how right you are or how wrong or ignorant someone else is? You've done that repeatedly in this thread, and we ban accounts that carry on like this here.
If you know more than others, it would be great if you'd share some of what you know so the rest of us can learn something. If you don't want to do that or don't have time, that's cool too, but in that case please don't post. Putting others down helps no one.
Who is talking about neurons? Beneficial random mutations propagate, negative ones don't, on average. In this way, the genetic code that survives mutates along the fitness gradient provided by the environment. The first self-propagating structure was tiny.
It's not literally the gradient descent algorithm as used in ML, because individual changes are random rather than chosen along the extrapolated gradient, but the end result is the same.
>computationally intractable nature of 4^1,000,000,000
which is a completely wrong number, even if only because of codon degeneracy. Human DNA encodes only 20 amino acids plus a stop signal, using 64 different codons; different codons encode the same amino acid.
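Back-of-the-envelope for how much degeneracy alone trims the exponent, treating each codon as carrying at most log2(21) bits instead of log2(64), and treating the whole sequence as coding the way the parent comment implicitly does (this ignores all the other redundancy, so it's a loose bound, not a precise figure):

    import math

    codons = 64                           # 4**3 possible triplets
    meanings = 21                         # 20 amino acids + a stop signal
    bits_raw = math.log2(codons)          # 6.0 bits per codon
    bits_effective = math.log2(meanings)  # ~4.39 bits per codon, at most

    genome_codons = 1_000_000_000 // 3
    print(bits_raw * genome_codons / 1e9)        # ~2.0 gigabits "raw"
    print(bits_effective * genome_codons / 1e9)  # ~1.46 gigabits before any other redundancy

Even this crude correction knocks roughly a quarter off the exponent; the incremental-search point made elsewhere in the thread matters far more, though.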
You're of course free to "doubt that anything is truly random" and to suspect that "there has to be some type of cosmic drummer," but I feel compelled to point out that your "case in point" completely fails to support your opinion.
Your example insinuates that (a) all of the human genome is required to correctly model the human phenotype, i.e. each bit is significant, and, more importantly, (b) the human genome came into existence as-is without a history of stepwise expansion and refinement.
I can't know whether you're a creationist, but I will point out that your attempted argument is, e.g., #8 on Scientific American's list of "Answers to Creationist Nonsense" (https://www.scientificamerican.com/article/15-answers-to-cre...). Amusingly, SciAm's rebuttal even explains how the "monkeys typing Hamlet" analogy fails when applied to the human genome.
Your argument is essentially the anthropic principle, the entire "we are here because we are here" thing. Even the Second Law of Thermodynamics counters stochastic evolutionary strategies; the math behind non-deterministic molecular Darwinism is simply not possible given the youth of the universe.
You’re definitely reading something I didn’t write.
All I said is that this is a large state space, which has been largely unexplored. The reason it’s largely unexplored is because most of the state space is useless, inert garbage. The amount of time it takes to create a genome this large is proportional to the size of the genome, not the size of the state space. That’s how evolution by natural selection works. If you hypothesized a world without evolution, where things appeared completely by chance arrangement of molecules, that’s when the size of the state space becomes important.
So I would say that your argument is not an argument against molecular evolution, it is an argument against something else.
> the math behind non deterministic molecular Darwinism is simply not possible given the youth of the universe.
Doesn't it also depend on the size of the universe? We don't have any idea how big it really is. It could be infinite, in which case it's not only likely, but inevitable.
What does the Second Law of Thermodynamics have to do with evolution?
If you think Earth is an isolated system, just go out on a cloudless day and see for yourself why that is not the case.
Last time I argued with someone who didn't believe in evolution, I went home and wrote something which implemented it. It took me half an hour and worked the first time. We've been using simulated evolution as one of several ways to train AI for quite a long time now.
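Not the parent's half-hour program, obviously, but a sketch of the same exercise: a tiny population-based evolution loop that "trains" a parameter vector toward a hidden target (the population size, mutation scale, and toy fitness are all arbitrary):

    import random

    TARGET = [random.uniform(-1, 1) for _ in range(10)]   # hidden "ideal" parameter vector

    def fitness(w):
        # negative squared error against the hidden target (higher is better)
        return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

    pop = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(50)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                                      # truncation selection
        children = []
        while len(children) < 40:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [w + random.gauss(0, 0.05) for w in child]   # small random mutations
            children.append(child)
        pop = parents + children

    print(fitness(max(pop, key=fitness)))  # close to 0: the evolved vector roughly matches the target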
Well, this is where the law of large numbers comes into play. Flipping a coin half a dozen times will likely land several points away from 50:50; randomness isn't an emergent property of a coin-flip dataset until hundreds or perhaps thousands of flips later, assuming a precise flipping mechanism and a fair coin. So randomness, at least within this context, is a function of time.
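A quick simulation of that, with arbitrary flip counts:

    import random

    for n in (6, 100, 10_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(n, heads / n)   # deviation from 0.50 shrinks roughly like 1/sqrt(n)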
That's not what I was trying to get at. Even for a single coin flip, we still need a language to talk about the situation where you and I can't predict it. If I flipped a hypothetical coin that poofs out of existence after a single flip (or, more generally, for one-time events like elections or poker games), the usual probability language still applies. No law of large numbers necessary.
Exactly, yes. We have a language of mathematical unpredictability that applies to many things independently of whether the universe is deterministic. Fundamental unpredictability is irrelevant to practical unpredictability, but when you replace the word "unpredictable" with the word "random" people get weird about it. It's the same damned mathematical theory. Fundamental randomness is irrelevant to practical randomness.
Little if any empirical research has been done into the quality of entropy associated with repeatedly observing a quantum state. It would be pretty easily accomplished using an RTOS (RT_PREEMPT or Xenomai) with GPIO sampling, followed by n-dimensional phase-space analysis of that dataset to determine whether any patterns emerge. There are plenty of tools from chaos theory and nonlinear dynamical systems analysis to prove or disprove the fundamental nature of quantum randomness.
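The phase-space step is the easy part once you have the samples. A minimal delay-embedding sketch (the file name, embedding dimension, and lag are placeholders; a real analysis would estimate the lag and dimension, e.g. via mutual information and false nearest neighbors, rather than hardcode them):

    import numpy as np

    # placeholder: one sample per line from the RTOS/GPIO capture
    samples = np.loadtxt("quantum_samples.txt")

    dim, lag = 3, 1     # embedding dimension and delay (assumed, normally estimated)
    n = len(samples) - (dim - 1) * lag
    embedded = np.column_stack([samples[i * lag : i * lag + n] for i in range(dim)])

    # a structureless source should fill the embedding space uniformly;
    # an attractor shows up as points concentrating on a lower-dimensional set
    print(embedded.shape)
    np.save("embedded.npy", embedded)   # hand off to correlation-dimension / recurrence tools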
And the likely reason why this is not happening (in the commercial space at least) is FDA regulatory hurdles: the risk of having a product (and probably an entire company) shut down for making "unapproved" medical claims.
I've got a set of Bose Hearphones, about $500, which use the Bose Hear app. You can modify treble and bass but that's about it; even so, if you are hard of hearing, the default iPhone sound profiles can be used to fine-tune the in-ear audio, and they work well as a low-cost hearing aid. Same deal with my AirPods. Tim Cook could wave a magic wand today and literally cure deafness for hard-of-hearing iPhone + AirPods users; ain't happening.
No question Bose Hearphones with the Bose Hear app, or AirPods with a complementing iPhone app, could easily perform spectral analysis with an easy-to-use equalizer to increase and/or decrease the amplitude of specific areas of the auditory spectrum. This would be ideal for tinnitus patients and would without question put the entire hearing aid industry out of business. Current-generation hearing aids are about $15 worth of analog parts selling for thousands of dollars each, just by virtue of having gone through some horseshit FDA regulatory process, when infinitely more capable technology has been available to hearing-impaired individuals for well over two decades at this point.
This is not complex, and what sucks about tinnitus is that it affects each person differently, which in turn requires specific auditory tuning for each individual. But companies such as Bose or Apple, which have the technology and more than sufficient computational horsepower to replace hearing aids, simply refuse to do so, likely for reasons related to FDA regulatory hurdles.
On a side note as to OP, $300 for a Teensy-based hardware platform with a BT stack is f'ing outrageous, probably a worse rip-off than what the actual hearing aid vendors are charging.
I use a site called MyNoise to play nature sounds to drown out background noise (neighbors, traffic). The thing that sets MyNoise apart from the others I've tried is that it has an EQ you can tune, and specific instructions for tuning it: for each frequency range you find the lowest volume at which you can still hear it. The result is a curve matched to your hearing curve; the tuning process accounts for hearing loss and tinnitus. Now, if that curve could be combined with "Live Listen" / Bose Hear... then you've got yourself a hearing aid, using hardware you already own!
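The tuning math itself is tiny. A rough sketch of turning per-band thresholds into EQ gains (band centers, the reference threshold, and the gain cap are made-up numbers, not MyNoise's or Bose's actual method):

    # per-band thresholds measured as in the MyNoise calibration:
    # the lowest level (dB) at which the listener can still hear each band
    bands_hz     = [125, 250, 500, 1000, 2000, 4000, 8000]
    threshold_db = [ 18,  12,  10,   10,   16,   34,   40]   # example listener, worse up high

    reference_db = 10     # assumed "normal" threshold for this setup
    max_boost_db = 25     # cap so damaged frequencies aren't blasted

    gains = [min(max(t - reference_db, 0), max_boost_db) for t in threshold_db]
    for hz, gain in zip(bands_hz, gains):
        print(f"{hz} Hz: +{gain} dB")   # boost the bands the listener hears poorly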
Parent of 2 deaf children here. When our first was born, we were handed a very thick packet of information warning us that without intervention (i.e., hearing aids) our child would be at much higher risk of falling behind in school and suicide because of social isolation. So, y’know, respectfully, nah.
Of course silence by choice is a different thing than silence by force. I have a huge concentration problem and silence is what I crave most when I'm working or trying to read. That says nothing at all about a child that is born with a hearing defect.
I agree that the FDA is a huge problem, and that Something Needs To Be Done, much like the CPUC in California which is really just an arm of industry. I don't know how to approach implementing a complete overhaul of the FDA without some truly atrocious people taking advantage of any loopholes which inevitably appear.
Who cares? Google is behind it, and it's actually quite well done from a network-traffic obfuscation / confidentiality perspective. They have a formalized working group, and doing it the old-fashioned RFC-spec-development way would likely slow down QUIC's adoption by several orders of magnitude.
Flutter + QUIC + ICE NAT traversal + a P2P distributed key-value store == a completely decentralized framework for getting rid of mobile telephone providers and abolishing all forms of censorship.
No reason for cellular towers anymore when you've got 20+ handsets around you within ad hoc WiFi or Bluetooth range for decentralized mesh networking.
Jitter is always introduced by non-RTOS operating systems that don't guarantee preemptive realtime scheduling. The kernel scheduler introduces jitter, supervisory processes introduce jitter, etc. An easy way to see the effects of this is to try to control a servo motor with a GPIO pin without RT_PREEMPT or a comparable RTOS.
A simple fix would be running the Linux sound player under an RT_PREEMPT kernel.
Scheduling jitter does not cause jitter in the audio itself, because audio is buffered and the buffer is consumed by the audio device using its own clock. But the OS has to fill the buffer in time, or an underrun will happen, which is audible as crackling and stutter. For some applications the input and output audio buffers need to be very small to avoid introducing noticeable delay in audio processing, and in that case real-time capabilities are required to prevent buffer overruns and underruns.
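To put numbers on that trade-off (sample rate and buffer sizes are just typical values):

    sample_rate = 48_000   # Hz, a typical output rate

    for frames in (64, 256, 1024, 4096):
        latency_ms = frames / sample_rate * 1000
        print(f"{frames}-frame buffer -> {latency_ms:.1f} ms")
    # a 64-frame buffer is ~1.3 ms: the OS must refill it that often or you get an
    # audible underrun, which is where real-time scheduling starts to matter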
Er, all it takes is one ingested radioactive particle to damage DNA and spawn a cancer. Radioactive particles from Fukushima are showing up in automobile AC filters on the West Coast, and the media hasn't said much about the MOX fuel in Reactor 3, which many experts now theorize could go critical given the volatile (and largely untested) combination of plutonium and uranium that MOX consists of. And Japanese culture is about the worst possible mindset to deal with this cleanup effort; they are more concerned about saving face and avoiding embarrassment in front of the international community for having made the genius decision to build a bunch of nuclear reactors, some using experimental MOX fuel, on top of the Shionihara Fault. They haven't even located the melted fuel in these reactors yet, and one misstep could easily result in another meltdown, or worse yet a critical MOX mushroom cloud that kills millions if not hundreds of millions of people. Never mind the 850,000 tons of contaminated water they are now simply going to discharge into the ocean.
That's pretty much the intent and role of IPFS, though. Based on current saturation statistics for mobile handsets (which must be 95%+ for everyone 15 and older, in the USA at least), you're only a few meters away from another handset that's likely GPS-enabled. Why in the world do we need centralized mobile carrier infrastructure when mesh-based P2P comms are now possible over ISM-band networks? IPFS + a 900MHz P2P add-on chipset + GPS-based geographically routed P2P transport for handsets (and rapid adoption of the same) is all it would take to eliminate the mobile carriers. This could be as simple as a BLE-enabled device or phone case...
Mesh networks unfortunately scale really poorly [1]: per-node throughput goes as O(1/sqrt(N)). You really need a fat backbone to turn the topology into something more like a hypercube.
The type of routing doesn't make a difference for the above result. The main assumption is that the node pairs that want to communicate have random locations. That leads to O(N) pairs trying to go across O(sqrt(N)) links in the middle.
In practice, who knows what kind of communication patterns you would get. Applications would probably evolve around the long-distance limit if it existed, but it's hard to imagine not having backbone links. Most likely the meshes would stay relatively localized (and I believe there are a number of regional wireless mesh networks out there serving real customers).
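The scaling argument in a few lines of arithmetic (the link capacity and node counts are illustrative, not measurements):

    import math

    W = 100e6   # capacity of a single link, bits/sec (illustrative)
    for N in (100, 10_000, 1_000_000):
        cut_links = math.sqrt(N)           # links crossing the "middle" of the mesh
        pairs = N / 2                      # random node pairs wanting to cross it
        per_pair = W * cut_links / pairs   # shrinks as O(1/sqrt(N))
        print(f"N={N}: ~{per_pair / 1e6:.2f} Mbit/s per pair")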
Unfortunately the Shannon–Hartley theorem is very harsh on mesh networks.
900MHz in the US is 902-928 MHz, a 26 MHz wide band. The usual SNR range is 0-40 dB, which corresponds to a total Shannon capacity over the entire band of roughly 26 Mbit/sec at 0 dB up to ~345 Mbit/sec at 40 dB, depending on range. Let's be generous and say 100 Mbit/sec.
You are not going to get any sort of beamforming from a pocket-held mobile device, so your collision domain is going to be everyone around you. Say you are in the park, there are 24 people around you, and you are all routing packets for each other, so each packet is transmitted twice before leaving the collision domain. In this case, a single person's ideal, best-case throughput is 2 Mbit/sec.
In practice you are not going to get anywhere close to 100% efficiency, so I'd expect internet speeds significantly less than 1 Mbit/sec. This is going to be very painful.
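The same arithmetic as a sketch, following the assumptions above (the 40% efficiency factor is a guess):

    import math

    bandwidth_hz = 26e6                     # 902-928 MHz ISM band
    for snr_db in (0, 20, 40):
        snr = 10 ** (snr_db / 10)
        capacity = bandwidth_hz * math.log2(1 + snr)   # Shannon-Hartley limit
        print(f"{snr_db} dB SNR: {capacity / 1e6:.0f} Mbit/s")

    shared = 100e6          # the generous practical figure from above
    people = 25             # you plus 24 neighbors in one collision domain
    hops = 2                # each packet transmitted twice before leaving the area
    efficiency = 0.4        # MAC overhead, collisions, retries (guess)
    print(shared / people / hops * efficiency / 1e6, "Mbit/s per person")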