As someone with no neuroscience background, I’ve suspected this for a while just based on how Boltzmann machines work. These are a type of unsupervised artificial neural network.
The training algorithm for Boltzmann machines has two phases:
1. (Awake phase) Feed the network training data, and increase the weights between neurons that tend to get activated simultaneously in this phase.
2. (Dreaming phase) Let the network run without any training input, and decrease the weights between neurons that tend to get activated simultaneously in this phase.
This training algorithm wasn’t designed to emulate how human brains work; it just sort of falls out of the math. And yet (1) resembles the Hebbian principle from neuroscience ("neurons that fire together, wire together"), and (2) resembles the process described in this article. Actually, based on this I suspect something stronger than what the article claims: that the connections that get pruned are the ones connecting neurons that fire together while dreaming. It would be nice to have someone who knows stuff about neuroscience comment on this.
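For concreteness, here's a toy Python sketch of the two-phase update I described (all names and constants are illustrative; a real Boltzmann machine clamps visible units and samples hidden ones, and exact Gibbs sampling updates one unit at a time rather than in parallel):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_units(activation):
        # Stochastic binary units: on with probability sigmoid(activation).
        prob_on = 1.0 / (1.0 + np.exp(-activation))
        return (rng.random(prob_on.shape) < prob_on).astype(float)

    def train_step(W, v_data, eta=0.01, n_dream_steps=50):
        # Awake phase: clamp the units to a training vector and record
        # which pairs of units are co-active.
        pos_corr = np.outer(v_data, v_data)

        # Dreaming phase: let the network run freely from a random state
        # (parallel updates, for brevity) and record co-activity under
        # the model's own distribution.
        s = sample_units(rng.standard_normal(len(v_data)))
        for _ in range(n_dream_steps):
            s = sample_units(W @ s)
        neg_corr = np.outer(s, s)

        # Hebbian increase for wake correlations, anti-Hebbian decrease
        # for dream correlations.
        W = W + eta * (pos_corr - neg_corr)
        np.fill_diagonal(W, 0.0)  # no self-connections
        return W

Averaged over many samples, pos_corr tracks the data statistics while neg_corr tracks the model's "dreams"; the update pushes the two to match.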
I wouldn't be so sure that Boltzmann machines were designed without how the brain operated in mind. Terry Sejnowski, one of the inventors, is a computational neuroscientist.
In general, I think the influence of neuroscience on neural network research has been subtle and perhaps underrated. For example, modern-day convolutional neural networks have a lineage going back to Fukushima's neocognitron, which was heavily inspired by Hubel and Wiesel's simple/complex cell model of the visual cortex, based on single-cell electrophysiology.
They weren't. They were a generalization of Hopfield networks: a Boltzmann machine is a stochastic version of the Hopfield network. The training algorithm simply tries to minimize the KL divergence between the network's activity and the real data, so it was quite surprising when it turned out that the algorithm needed a "dream phase," as they call it. Francis Crick was inspired by this and proposed a theory of sleep.
Haha, I'm not sure if you're being sarcastic, so I'll try to unpack the comment.

Hopfield networks were one of the first models of associative memory. They were themselves based on a (generalized) model of simple magnets called the Ising model: a group of binary units, each connected to its nearest neighbors with a coupling strength, where each unit prefers to be like its neighbors. Hopfield developed a clever method for changing the couplings so that the network could store and retrieve patterns of activity.

In the Hopfield network everything is deterministic. Hopfield himself realized that if this constraint was relaxed, the model could become a very powerful computational machine: if, instead of being always on or off, the units had a probability of being on or off, the networks could perform very general computations [1]. Unfortunately, training these general stochastic systems was not easy. With their Boltzmann machines, Sejnowski and Hinton proposed a possible solution.

The activity of stochastic binary units effectively encodes a probability distribution, so all they had to do was make sure that the distribution encoded by the units' activity matched that of the input. They did this by changing the connection strengths between the units so as to minimize something called the Kullback-Leibler (KL) divergence, a measure of how close two probability distributions are: here, the distribution encoded by the network's spontaneous ("dream") activity and the distribution of the real data (e.g. a set of natural images). If the two distributions match exactly, the KL divergence is zero; if not, it is large.

When they wrote out the math, it turned out that the algorithm required two phases: an awake phase, where the connections were changed according to the real data, and a sleep phase, where the connections were pruned by the spontaneous activity of the network without any input (the dreams). This analogy got a lot of people excited, including Francis Crick, and several others tried to test the idea in real brains, but we are still waiting for a convincing result.
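To make the objective concrete, here's a toy Python example of the KL divergence in question (the numbers are made up purely for illustration):

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)); zero iff the two match.
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    data_dist  = [0.7, 0.2, 0.1]  # distribution of the real data
    dream_dist = [0.4, 0.4, 0.2]  # distribution of the free-running ("dreaming") network

    print(kl_divergence(data_dist, data_dist))   # 0.0: perfect match
    print(kl_divergence(data_dist, dream_dist))  # ~0.18: the mismatch training reduces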
In applied machine learning, not so much. They feel ancient! But some people use them to study the physics of computation. They were used to make the connection between the renormalization group (RG) and machine learning. RG is one of the main workhorses of quantum field theory and condensed matter physics. The fact that there's a mapping from RG to RBMs means that we can understand how deep learning works using the same techniques that modern physicists use to understand the world! Here's a nice article on this topic if you're interested: https://www.quantamagazine.org/20141204-a-common-logic-to-se...
I don't disagree with this, but I also think that part of the interest in Hopfield networks, and in generalizing them to have stochastic outputs, was that they seemed like a good model for how neurons might store memories or compute.
Numenta [1] (co-founded by Jeff Hawkins, author of "On Intelligence") has been working on a model of ANN designed around how the brain operates. Hierarchical temporal memory (HTM) [2] is one aspect of their model.
"I wouldn't be so sure that Boltzmann machines were designed without how the brain operated in mind."
Sure, actually it seems highly unlikely to me that they were designed without considering how the brain works at all. Let me clarify what I meant: given the definition of how a Boltzmann machine infers (i.e. how to determine which neurons are on or off), the training algorithm can be derived from purely mathematical considerations.
When I told Hinton about SPWs and that they are related to RBMs, he told me that he had a theory which required the existence of SPWs. That is not a coincidence; they seem to have been inspired by neuroscience and real neural network phenomena.
SPW: a high-frequency oscillation in the hippocampus that occurs during rest and sleep. Its characteristics suggest it is critical in memory management.
If you google "SPW neuroscience", you will find many resources.
Brain science is highly interdisciplinary. Each expert should spend time familiarizing themselves with the various approaches used by other experts. In particular, machine learning theorists should spend time understanding the latest results (their implications, at least) in neuroscience and neuron biophysics. At some point, an ML theorist is not too different from a biophysics theorist; they just use different abstractions. This is why I am particularly interested in Google's and Hinton's work, which seems to be subtly motivated by neuroscience and natural science...
There is evidence to suggest that sleep and rest are important in verifying which memories to keep, as well. I did some undergraduate research (as a biophysics student) that looked at this from a neuroscience perspective. If you are interested in memory and neural networks, read about SPWs, from some scientists in the field:
In summary, the hippocampus organizes neurons into clumps which encode spatial information. As a rat runs around a track, certain groups of neurons fire[1] depending on where the rat is located. Before the task even starts, hippocampal neurons in a rat actually fire in quick succession (e.g. ABCDEFG). When the task is completed, the same pattern fires in reverse order (e.g. GFEDCBA). Scientists believe that these SPWs play an important role in memory consolidation.
I have always thought of this as "the rat is thinking about where it's going to run to". So I started learning machine learning and landed on things like RBMs. I was so excited because RBMs are reminiscent of this actual phenomenon that occurs in the brain. So, I asked Hinton about it because this is no mere coincidence, and he told me that he had a theory that required reverse order firing during sleep state.
If you get a ton of new memories and patterns while awake, sleep may be responsible not only for pruning less important memories, but also for strengthening more important ones. For example, sleep might ask the brain for a ranked list of memories, then strengthen the memories above some threshold and prune the rest (just an idea).
I am not sure why you are attacking me. If you do not understand the jargon, please read some more of the literature. SPW-R complexes. Look it up!
Edit: SPWs, Sharp Waves, SWR, etc. are just labels for sharp waves. I can't believe I am getting attacked for using jargon. The names don't matter; the physics and science matter (as Richard Feynman liked to point out).
Edit 2: The link I sent uses the same jargon that I use. So, honestly mate, what is your deal here? Are you trolling?
I am not attacking you. I don't work in neuroscience or biology.
You used a term (SPW) I'm not familiar with, without defining it; the three articles you referenced did not include the term; in the end we find the term is not an acronym; I have no knowledge of your training or background; and my background is different from yours.
I was intrigued enough to briefly search for SPW but seeing nothing clearly enlightening, I asked for clarification.
My reference to your needing a nap is because you didn't initially define the term SPW, which most people reading your post were likely unfamiliar with. No harm was intended. Not everyone knows what you know or thinks about what you think about, especially technical jargon.
A simple CTRL+F (CMD+F on Mac) for "SPW" comes up with 7 results in the Wikipedia article I sent you. I didn't define it, but I mentioned why it was interesting, and I linked to an entire paper discussing the phenomenon! Anyways, I hope you read more and share your thoughts. Please, no more personal remarks; just focus on the content!
It is true that the article you posted later in response to my queries does indeed contain references to the term "SPW". Thank you for providing that clarifying link.
Unfortunately the terminology and origin of the term "SPW" (or "SPW-R") remains unknown to me, despite the second article using it freely. The article says, for example:
"... John O'Keefe investigated SPW-Rs in more detail in 1978 while studying the spatial memory of rat..."
Queries on Google seem to indicate that the term "SPW-R" refers to the same concept as the acronym "SWR" (a true acronym), but I'll never know, being too impatient to consider the matter further.
It is up to you. I tried to point you in the right direction to the best of my ability. Let it be known. Anyways, I would appreciate if you removed the downvotes -- there is no good reason you have downvoted my responses to you.
I'm not sure if English is your first language, but giardini's first comment in response to yours does not seem to be in any way "trolling" or "attacking" you. None of his later responses seem to, either. You may want to check yourself, because there was no good reason to accuse him of trolling you. You are the one out of line. You started this with your overly-confrontational attitude.
> Anyways, I would appreciate if you removed the downvotes -- there is no good reason you have downvoted my responses to you.
As for this, obviously somebody thought your responses were inappropriate. I agree with them, and would have done the same if I had the necessary reputation.
Quoting the HN Guidelines:
> Please resist commenting about being downvoted. It never does any good, and it makes boring reading.
I have every right to feel provoked if I believe someone else is intentionally wasting my time. That user, from my point of view, did not seem genuine in his interest in what I had to say.
For what it's worth, at this point, both SPW and SPW-R are commonly used. SPW == Hippocampal sharp waves, SPW-R == Hippocampal sharp wave ripples. Often referred to as just sharp waves or sharp wave ripples.
That's really interesting. From a naïve and purely pop-science perspective... if true, wouldn't this explain why it's hard to remember dreams after waking up?
There are brief sections about this in the Deep Learning book by Goodfellow, Bengio and Courville (2016):
> 18.2
> Because the negative phase involves drawing samples from the model’s distribution, we can think of it as finding points that the model believes in strongly. Because the negative phase acts to reduce the probability of those points, they are generally considered to represent the model’s incorrect beliefs about the world. They are frequently referred to in the literature as “hallucinations” or “fantasy particles.” In fact, the negative phase has been proposed as a possible explanation for dreaming in humans and other animals (Crick and Mitchison, 1983), the idea being that the brain maintains a probabilistic model of the world and follows the gradient of log p̃ while experiencing real events while awake and follows the negative gradient of log p̃ to minimize log Z while sleeping and experiencing events sampled from the current model. This view explains much of the language used to describe algorithms with a positive and negative phase, but it has not been proven to be correct with neuroscientific experiments. In machine learning models, it is usually necessary to use the positive and negative phase simultaneously, rather than in separate time periods of wakefulness and REM sleep. As we will see in Sec. 19.5, other machine learning algorithms draw samples from the model distribution for other purposes and such algorithms could also provide an account for the function of dream sleep.
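For readers without the book at hand, the identity behind the two phases is the standard maximum-likelihood result (not specific to this book). With the model written as $p(v) = \tilde{p}(v)/Z$:

$$
\nabla_\theta \log p(v) = \underbrace{\nabla_\theta \log \tilde{p}(v)}_{\text{positive / awake phase}} - \underbrace{\nabla_\theta \log Z}_{\text{negative / sleep phase}},
\qquad
\nabla_\theta \log Z = \mathbb{E}_{v' \sim p}\!\left[\nabla_\theta \log \tilde{p}(v')\right].
$$

The awake term raises the probability of real data; the sleep term lowers the probability of whatever the model itself generates (the "dreams" or "fantasy particles").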
> 19.5.1 Wake-Sleep
> One of the main difficulties with training a model to infer h from v is that we do not have a supervised training set with which to train the model. Given a v, we do not know the appropriate h. The mapping from v to h depends on the choice of model family, and evolves throughout the learning process as θ changes. The wake-sleep algorithm (Hinton et al., 1995b; Frey et al., 1996) resolves this problem by drawing samples of both h and v from the model distribution. For example, in a directed model, this can be done cheaply by performing ancestral sampling beginning at h and ending at v. The inference network can then be trained to perform the reverse mapping: predicting which h caused the present v. The main drawback to this approach is that we will only be able to train the inference network on values of v that have high probability under the model. Early in learning, the model distribution will not resemble the data distribution, so the inference network will not have an opportunity to learn on samples that resemble data.
> Another possible explanation for biological dreaming is that it is providing samples from p(h,v) which can be used to train an inference network to predict h given v. In some senses, this explanation is more satisfying than the partition function explanation. Monte Carlo algorithms generally do not perform well if they are run using only the positive phase of the gradient for several steps then with only the negative phase of the gradient for several steps. Human beings and animals are usually awake for several consecutive hours then asleep for several consecutive hours. It is not readily apparent how this schedule could support Monte Carlo training of an undirected model. Learning algorithms based on maximizing L can be run with prolonged periods of improving q and prolonged periods of improving θ, however. If the role of biological dreaming is to train networks for predicting q, then this explains how animals are able to remain awake for several hours (the longer they are awake, the greater the gap between L and log p(v), but L will remain a lower bound) and to remain asleep for several hours (the generative model itself is not modified during sleep) without damaging their internal models. Of course, these ideas are purely speculative, and there is no hard evidence to suggest that dreaming accomplishes either of these goals. Dreaming may also serve reinforcement learning rather than probabilistic modeling, by sampling synthetic experiences from the animal’s transition model, on which to train the animal’s policy. Or sleep may serve some other purpose not yet anticipated by the machine learning community.
Hmm, interesting. So it seems the Boltzmann machine explanation of dreams that I described becomes less convincing when you consider that running the waking and dreaming phases of the training algorithm in separate long "chunks", instead of simultaneously, seems not to work well in practice.
As an aside: in trying to make the title compelling (by making a person the subject), the Times is inadvertently hurting science.
This article is about a hypothesis that has some evidence in mice, but it tells nowhere near the whole story, especially in humans with regard to sleep. However, the title suggests more confidence than is warranted. Likely, in the near future, some newspaper will publish a similar article, except the purpose of sleep will be to remember (which is also supported by studies).
The problem is that the average person will see that and think: what scientists say is always changing, and they are contradicting each other all the time. Then when they see an article stating scientists say that climate change is worsening, they will view it just like any article that has "scientists say" in the title.
It would be nice if they wrote "A new study hints at..." or something like that. When they write "Scientists say...", a lot of people will think this is now widely accepted science.
The article starts "over the years, there have been numerous theories ... here's a new one," and then later has the usual "other scientists say it's too soon to tell." The real problem here is the headline; but then, that's true of almost all headlines, scientific or not.
> when they see an article stating scientists say that climate change is worsening, they will view it just like any article that has "scientists say" in the title.
Actually, you should see it exactly the same way - a possibly interesting finding from probably a single study, and if you actually want to know the scientific information behind it, you should go read the science, not the newspaper, and look at multiple studies, and look for weaknesses in the study that may weaken the findings.
...especially if it is something you already believe.
I rather suspect the NYT readers will figure out that "scientists say" is a simplification.
Climate science is a different kettle of fish, where the media on both sides tend to bias things to one extreme or the other (we're all going to die, or it's a hoax), and what the actual scientists say rather gets drowned out, which is a shame.
While forgetting might be one of the effects that happens, it is certainly not the main purpose of sleep.
If you prevent people from sleeping long enough, they die, and nobody knows why (not counting microsleeps). Fatal Familial Insomnia is, erm... fatal. It's certainly not because you didn't forget; you can use alternatives for that, such as boosting the endocannabinoid system. THC in mice makes them forget stressful events, and vice versa (blocking the CB1 receptor makes them remember). There is strong anecdotal evidence that the same happens in human consumers too. Also, there are a few people who can never forget [1], one depicted in a House MD episode, and those people sleep; I bet they would still die if they didn't.
All those facts do not align in my mind with the above-stated hypothesis.
AFAIK, some neural pathways get improved during sleep while others are pruned. That sounds more like the brain being in a maintenance mode, filtering the irrelevant from the important (among other things that might happen).
It's possible Fatal Familial Insomnia kills you by some other mechanism, and the sleeplessness is just a symptom, I figure. I don't think we know that the sleeplessness is the thing that kills you.
"Like all prion diseases, FFI is a progressive neurodegenerative disease, which means over time there are fewer neurons (nerve cells). Loss of neurons in the thalamus, as well as other mechanisms not yet fully understood, cause the symptoms of FFI."
"Although the main target of FFI is the thalamus, other parts of the brain are affected as well including the inferior olives. The inferior olives are part of the medulla oblongata and are important for coordinating our movements (motor control). Losing neurons in the inferior olives can make it harder for a person to control their movements as seen in later stages of FFI."
> When your whole brain is going haywire due to neuron loss, attributing death to sleeplessness seems premature.
Or it could be the other way around; as it's not easy to run experiments like this, I guess we may never know.
But yes, it is certainly possible that in FFI you die of some other cause. I hadn't seen the info you provided, but I remember reading in National Geographic many years back that FFI patients seem healthy in other respects.
There was one experiment with mice, though: after 32 days without sleep, they all died:
> Fatal Familial Insomnia is, erm... fatal. It's certainly not because you didn't forget.
It might be. It's a kind of garbage collection. Long-lived dynamic programs with bad GC run out of memory and exit with fatal errors, precisely because they didn't forget.
The brain naturally employs dimensionality reduction for memory. Sleep is one example. Another, simpler one is reading -- how far back can you remember word for word when you are reading something? Maybe a sentence at most? But you still remember enough to understand what you're reading, because of efficient dimensionality reduction.
Some neural networks mimic this, such as LSTMs. But it's a poor mimicry at best. The brain has a natural, built-in selection mechanism; it seems to "know" what to remember and what to forget. How could we implement something like this in a deep neural network?
(This is a key step toward giving computers "personality", which emerges from a selective set of memories and trained behavior.)
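As a partial answer to my own question, the closest existing mechanism I know of is the LSTM forget gate. A minimal sketch (parameter names like Wf, Uf, bf are just illustrative):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forget_step(c_prev, h_prev, x, Wf, Uf, bf):
        # f is in (0, 1) per memory dimension: near 1 keeps that component
        # of the cell state, near 0 erases it. Crucially, f is *learned*
        # and depends on the current input, not chosen at random.
        f = sigmoid(Wf @ x + Uf @ h_prev + bf)
        return f * c_prev

    # Toy usage: 3-dimensional memory, 2-dimensional input.
    rng = np.random.default_rng(0)
    c, h, x = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(2)
    Wf, Uf, bf = rng.standard_normal((3, 2)), rng.standard_normal((3, 3)), np.zeros(3)
    print(forget_step(c, h, x, Wf, Uf, bf))

A full LSTM adds input and output gates; the relevant point here is only that how much gets forgotten is learned and input-dependent, not fixed.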
The process in the article is pretty similar to dropout in neural nets. But instead of "knowing" what to get rid of, we randomly prune. The brain may do it randomly or intelligently; it's hard to say based on these studies.
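For comparison, here's (inverted) dropout in a few lines; note it's only a sketch, and that dropout masks activations at random and temporarily, per training step, whereas the article describes lasting synaptic pruning:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(activations, p=0.5, training=True):
        if not training:
            return activations  # no pruning at test time
        # Randomly silence each unit with probability p, then rescale the
        # survivors so the expected activation is unchanged.
        mask = (rng.random(activations.shape) >= p).astype(float)
        return activations * mask / (1.0 - p)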
Anyone who is curious about what sudoscript is talking about should watch the HBO series "Westworld". The series dives directly into this question.
I went through a 3 week period of sleeping no more than an hour a night, if at all. Things like "this morning" became very muddled. If I thought about what I had for breakfast, there was no difference between breakfast today and yesterday and even a couple days before that -- they were recalled to mind equally.
Multiple days in a row felt like "today." It's hard to describe the feeling because I've never felt anything like it before or since. It just felt like the stack was overflowing for "today". One night of sleep and things were back to normal.
> I went through a 3 week period of sleeping no more than an hour a night, if at all.
Can you explain why this happened? Off the top of my head it is either military or having a child, but I have a feeling there might possibly be an even more interesting explanation.
Didn't think of that. Hope my original question didn't come off as rude; I'm just seriously interested in how this is possible, since I have trouble just pulling a single all-nighter :-|
The whole premise of there being a single function or main function of sleep seems slightly far-fetched IMHO.
For example, is the brain's function to think rationally? To speak and understand language? To unconsciously regulate bodily functions?
Wouldn't it be more likely that, due to the brain's multifunctional nature, an evolutionarily pruned process has developed which encompasses multiple necessary subroutines (energy preservation, memory consolidation, waste cleansing, etc.)?
That was my immediate thought, as well. It even behaves similarly: If you allow the system time to garbage-collect before it's absolutely needed, it's less intrusive. If you go too long, you'll end up doing a garbage collection cycle without advance warning.
Francis Crick (of Watson and Crick DNA fame) got very interested in brain science toward the end of his life and gave several lectures worldwide on the subject. I was fortunate to attend one of them, during which he proposed several interesting theories. My favorite was that the purpose of dreaming is to identify and collect all the well-separated bits of something designated to be forgotten (tracing the chain of fragmented memory blocks): exactly as you suggest, the biological equivalent of garbage collection.
My second favorite quote from the Kingkiller Chronicles:
>First is the door of sleep. Sleep offers us a retreat from the world and all its pain. Sleep marks passing time, giving us distance from the things that have hurt us.
For a theory of sleep to convince me, it would need to explain how the benefit from sleep can possibly be worth the huge disadvantage of leaving yourself completely vulnerable for hours a day. Evolution-wise, that seems hard to justify. A variant of human that didn't have to sleep seems like it would overrun the sleep-needing humans, unless the non-sleeping human just isn't possible.
Remember that sleep evolved very early on. Arthropods (insects, arachnids, crustaceans, etc.) sleep. Worms experience a sleep-like state. It's even been shown that cyanobacteria experience circadian rhythms.
Fruit flies whose sleep patterns were disrupted were slower to learn new things and faster to forget the things they learned. I'm not sure there is a single purpose for sleep, but memory is certainly a critical factor.
People have speculated for years that sleep is meant to organize your thoughts. I know that when I worked in a neuroscience lab years ago, that is what many researchers thought. But to say it is to "forget" is almost certainly an oversimplification. It might be to optimize, to remove the extraneous, but that is probably not all of it either. Nothing about how the brain works is simple.
Anybody who studies Carl Jung's work on dreams will understand how superficial this theory that explains sleep as forgetting is, and how it fails to account for the symbols and archetypes that show up in dreams. Also, any event can be recalled under hypnosis, so nothing is really forgotten with sleep.
You're assuming dreams originate from within you, rather than being something that is reflected onto your (sleeping) consciousness. Kind of similar to presuming the brain is what fundamentally generates consciousness.
I'm thinking it's more like: to integrate memories. Lessen the impact of the most recent memories lest we lose earlier lessons. Kind of like a moving average or filter. Dim traumatic events so we can think about them without overreacting.
Every parent probably intuits this, whether they consciously realize it or not. The #1 cause of babies crying [that isn't pain or hunger]: overstimulation and a need for sleep to "reset".
The main reason is probably the need to be calm while the body is experiencing that level of growth. You wouldn't want to build a house during a constant earthquake, I guess.
Very relevant PubMed article many of you will undoubtedly find interesting: "Partial sleep in the context of augmentation of brain function."
Abstract: Inability to solve complex problems or errors in decision making is often attributed to poor brain processing, and raises the issue of brain augmentation. Investigation of neuronal activity in the cerebral cortex in the sleep-wake cycle offers insights into the mechanisms underlying the reduction in mental abilities for complex problem solving. Some cortical areas may transit into a sleep state while an organism is still awake. Such local sleep would reduce behavioral ability in the tasks for which the sleeping areas are crucial. The studies of this phenomenon have indicated that local sleep develops in high order cortical areas. This is why complex problem solving is mostly affected by local sleep, and prevention of local sleep might be a potential way of augmentation of brain function. For this approach to brain augmentation not to entail negative consequences for the organism, it is necessary to understand the functional role of sleep. Our studies have given an unexpected answer to this question. It was shown that cortical areas that process signals from extero- and proprioreceptors during wakefulness, switch to the processing of interoceptive information during sleep. It became clear that during sleep all "computational power" of the brain is directed to the restoration of the vital functions of internal organs. These results explain the logic behind the initiation of total and local sleep. Indeed, a mismatch between the current parameters of any visceral system and the genetically determined normal range would provide the feeling of tiredness, or sleep pressure. If an environmental situation allows falling asleep, the organism would transit to a normal total sleep in all cortical areas. However, if it is impossible to go to sleep immediately, partial sleep may develop in some cortical areas in the still behaviorally awake organism. This local sleep may reduce both the "intellectual power" and the restorative function of sleep for visceral organs.
the article title (and indeed, upon groggy inspection, the article itself) is misleading: it's likely that the _hypothalamus_ "forgets" while the _cortex_ "remembers"/"learns" during sleep.
i think people have been generally aware of this since the '80s
Perhaps dreams are the mechanism that allows the dimensionality reduction to take place? Or at least a side effect of us becoming semi-conscious while the process is taking place?
I frequently recommend the Foundational Falsehoods of Creationism (AronRa) videos on YouTube. They're obviously more about addressing creationism, but it's enjoyable and educational and necessarily explains a lot about evolution that I didn't know previously. One great part is where he explains the lineage of species by using folders on Windows.
If you trust your intuition, you've known this is one of the MANY purposes of sleep for a while now. The western science culture of needing scientists to validate what you know as true damages our own ability to trust ourselves. You're not going to have statistical and scientifically proven evidence for 99.99% of most things you encounter in your daily life. That's why the mindset of waiting around for science to prove something is harmful. You need to be able to go with your own intuition most of the time. Anyway, this was obvious. And keep in mind, it's just one of the purposes--can things not have multiple purposes??
* The lack of inertia is very intuitive. Once you stop pushing something, it soon comes to rest. This is extremely intuitive. (Unless you live near a frozen lake???)
* The spontaneous generation of small creatures like worms and flies is intuitive. You put out some crap and wait a while, and you'll get a few small animals. (Did anyone ever believe that elephants were generated spontaneously?)
* Leeches are good for your health. I'm not even sure whether this was intuitive, or whether it was even a popular cure, but it was used in traditional Western medicine. Modern medicine uses penicillin, and I don't think that using an extract from a mold is intuitively good.
What? No, I'm pretty sure that some people knew what friction was, and that if, theoretically, there were none of it, things would continue to move. That's intuitive to me.
> Spontaneous generation of animals.
Fair example; I've read about this old theory before.
> Leeches
Bloodletting is still somehow advised as healthy for certain conditions (too much iron, and "resetting" a percentage of one's blood; that, or they just really want people to donate).
Plus, you even admit that you're not sure about it being a 'false' fact brought about by intuition, so why even include it?
OP's point is not that intuition is infallible. Not much is infallible (perhaps nothing, if one is atheistic).
It's that science seems to have shunted/reduced/eliminated the intuition of a subsection of the population. It's his opinion (and mine) that this has harmful effects that are worth examining.
It was a popular idea a long time ago (https://en.wikipedia.org/wiki/Inertia#Early_understanding_of...), and after teaching physics I think that inertia is not so intuitive. What did you think when you were 13 years old and had no formal training in physics?
> OP's point is not that intuition is infallible. [...]
The problem is that if you ask enough people, each one may have a different intuitive idea. Some will be slightly different and some completely contradictory. Everyone has an intuitive idea of how to manage the economy and a country, but not everyone has the same intuition about that.
That's why, to do science, you can use intuition to design experiments and make theories, but you MUST confirm them with experiments.