
All models are convenient fictions. I heard a neuroscientist once describe averaging as a low-pass filter. People know it hides high-frequency dynamics. But unless you have a way to interpret the high-frequency signal, it looks an awful lot like noise.
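For concreteness, here is a toy numpy sketch (my own illustration, not from the article; the sample rate, frequencies, and window length are all made up) of why a boxcar average behaves like a low-pass filter: the slow component survives nearly untouched while the fast component is almost erased.

    import numpy as np

    fs = 1000.0                                # sample rate in Hz (toy value)
    t = np.arange(0, 1, 1 / fs)
    slow = np.sin(2 * np.pi * 5 * t)           # 5 Hz "trend"
    fast = 0.5 * np.sin(2 * np.pi * 80 * t)    # 80 Hz "dynamics"
    x = slow + fast

    win = 25                                   # 25-sample (25 ms) boxcar average
    y = np.convolve(x, np.ones(win) / win, mode="same")

    def amp(sig, f):
        # amplitude of the f Hz component from the FFT (1 Hz bins here)
        return 2 * np.abs(np.fft.rfft(sig) / len(sig))[int(f)]

    for f in (5, 80):
        print(f"{f:2d} Hz amplitude: before {amp(x, f):.3f}, after {amp(y, f):.3f}")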



> But unless you have a way to interpret the high-frequency signal, it looks an awful lot like noise.

In other words, they're looking for their lost keys under the lamp-post because it's easier there. If there is a signal in the high-frequency activity, it's not yet understood. This feels like "junk DNA", which I believe is receiving more attention than the name suggests.


> they're looking for their lost keys under the lamp-post because it's easier there

This is a strange criticism. If you're looking for your keys in the dead of night, and there is a lamp post where they might be, you should start there.

The streetlight effect criticises "only search[ing] for something where it is easiest to look" [1], not searching where it's easiest in every case.

In this case, we know averaging destroys information, but we don't know how significant that loss is. As the author says, "we now have the tools we need to find out if averaging is showing us something about the brain’s signals or is a misleading historical accident." That neither confirms nor damns the preceding research: it may be that averaging is perfectly fine, hides some truth that we can now uncover, or is entirely misleading.

[1] https://en.wikipedia.org/wiki/Streetlight_effect


Good point.


My grad school research was with an NIH neuroscience lab studying low-level sensory processing, which offered a fascinating perspective on what's really going on there, at least for the first few levels above the sense receptors in simpler animal models.

To oversimplify, you can interpret gamma-frequency activity as chunking up temporal sensory inputs into windows. The specific dynamics between excitatory and inhibitory populations in a region of the brain create a gating mechanism where only a fraction of the most stimulated excitatory neurons are able to fire, and therefore pass along a signal downstream, before broadly-tuned inhibitory feedback silences the whole population and the next gamma cycle begins. Information is transmitted deeper into the brain based on the population-level patterns of excitatory activity per brief gamma window, rather than being a simple rate encoding over longer periods of time.

Again, this is an oversimplification, not entirely correct, fails to take other activity into account, etc., but I'm sharing it as an example of an extant model of brain activity that not only doesn't average out high-frequency dynamics, but explicitly relies on them in a complex nonlinear fashion to model neural activity at the population level, at high temporal frequency, in a natural way. And it's not completely abstract: you can relate it to observed population firing patterns in, e.g., insect olfactory processing, now that we have the hardware to make accurate high-frequency population recordings.
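To make the gating idea concrete, here's a deliberately crude toy sketch (entirely my own illustration, with made-up cell counts, drive, and noise levels): only the most strongly driven excitatory cells "win" each gamma window before inhibition resets the population, so the downstream message is the per-window pattern of winners rather than a long-run firing rate.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 100      # excitatory population size (toy value)
    n_cycles = 8       # successive gamma cycles (~25 ms each at 40 Hz)
    k_winners = 10     # cells allowed to fire before inhibition kicks in (toy value)

    drive = rng.random(n_cells)    # static sensory drive to each cell (toy stimulus)

    for cycle in range(n_cycles):
        noisy = drive + 0.1 * rng.standard_normal(n_cells)   # per-cycle variability
        winners = np.sort(np.argsort(noisy)[-k_winners:])    # most stimulated cells fire
        # ...broadly tuned inhibitory feedback then silences everyone until the next cycle
        print(f"cycle {cycle}: active cells {winners.tolist()}")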


By “low level” do you mean in the thalamus, the cortex, or something else? I’d love a citation. I initially thought that “low level” meant at the level of the receptors and the first few synapses, but to the best of my knowledge gamma oscillations don’t play a role in the periphery.

It would be great if you had a citation. I have been reading Karl Friston’s work all day.


Here's an example[1] examining the functional role of gamma oscillations in the hippocampus:

[1] https://www.jneurosci.org/content/15/1/47


Averaging over a window is a low-pass filter, but one whose roll-off in the frequency domain is not smooth. I quite like [1] as a quick reference.

https://www.analog.com/media/en/technical-documentation/dsp-...
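As a quick sketch of that non-smooth roll-off (my own illustration, not taken from [1]; the sample rate and window length are arbitrary): the magnitude response of an N-point moving average is |sin(pi f N / fs)| / (N |sin(pi f / fs)|), which decays overall but passes through exact nulls along the way.

    import numpy as np

    fs = 1000.0     # sample rate in Hz (arbitrary)
    N = 25          # moving-average length in samples (arbitrary)

    f = np.arange(1.0, fs / 2 + 1)   # 1 Hz up to Nyquist
    H = np.abs(np.sin(np.pi * f * N / fs) / (N * np.sin(np.pi * f / fs)))

    for freq in (5, 20, 40, 80, 120):
        print(f"{freq:3d} Hz gain: {H[freq - 1]:.3f}")   # nulls at multiples of fs/N = 40 Hz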


Not the OP but we're talking about different things here. Much of the concern about averaging is about averaging across trials. Smoothing a spike train over time isn't really the issue that this thread is concerned with, since that's just averaging successive samples within some small window.


In physics, the model we choose depends on the scale: at the macro scale, quantum effects average out over the several sextillion atoms in, say, a wood screw.


I think of summaries as the text equivalent of averaging. The high-frequency details you don’t want to lose in that case are things like proper names, specific dates, etc. In the face of such a signal, you don’t want to average it out to a “him” and a “Monday”.


That makes a lot of sense. Thank you for this analogy.

We use Conscrewence at work for internal documentation, and when I pull a page up it wants to recommend an AI-generated summary for me. Uh, no, Atlassian, I'm on this page because I want all the details!


That would be a median in your example, no? A spurious average might be us thinking that the statistically average word contains every vowel except 'e', and that 'm' is twice as likely as the other most likely consonants.


Broadly speaking, this is not correct. If you average together a bunch of trials with variable timing, the result tends to wash out higher-frequency components (which you might not have realized were in the data), but trial averaging is not a low-pass filter at all. There are some nice methods to recover temporal structure that changes across trials prior to averaging, like:

https://www.sciencedirect.com/science/article/pii/S089662731...
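A toy demonstration of that wash-out (my own sketch, not from the linked paper; the trial count, jitter, and frequencies are invented): with about 20 ms of trial-to-trial timing jitter, a 4 Hz component survives trial averaging largely intact while a 60 Hz component of the same single-trial amplitude nearly vanishes, even though no explicit filtering was applied.

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    n_trials = 200
    jitter_sd = 0.020    # 20 ms of trial-to-trial timing jitter (assumed)

    trials = []
    for _ in range(n_trials):
        shift = rng.normal(0, jitter_sd)
        trials.append(np.sin(2 * np.pi * 4 * (t - shift))      # 4 Hz component
                      + np.sin(2 * np.pi * 60 * (t - shift)))  # 60 Hz component
    avg = np.mean(trials, axis=0)

    def amp(sig, f):
        # amplitude of the f Hz component (1 Hz frequency bins here)
        return 2 * np.abs(np.fft.rfft(sig) / len(sig))[int(f)]

    print(f" 4 Hz: single trial ~1.0, trial average {amp(avg, 4):.2f}")
    print(f"60 Hz: single trial ~1.0, trial average {amp(avg, 60):.2f}")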



