> ...we learn about what we attend to, and we attend to what we learned high values for.
I see this a lot when I'm trying to help people who "should" know what they're doing. I focus on trying to identify what they're not noticing, and then bringing it to their attention. It makes for great light-touch teaching and lets me, in retrospect, provide better introductions/instructions/documentation/whatever by adding in whatever stimuli they overlooked.
OT: the article has a terrible introduction. It's almost entirely unrelated to the article and reads as a non sequitur. It's like they did a keyword search for quotes with "attention":
> The Wizard of Oz told Dorothy to “pay no attention to that man behind the curtain” in an effort to distract her, but a new Princeton University study sheds light on how people learn and make decisions in real-world situations.
> I focus on trying to identify what they're not noticing, and then bringing it to their attention.
This is a brilliant characterization of something toward which I've been groping in my first role as a senior engineer responsible in part for helping my teammates develop their own skills. I hadn't thought about it in sufficient detail to identify it, and I am much obliged to you for putting it in a way that makes it come clear in my mind.
Should you have the time and interest to do so, I'd greatly appreciate any elaboration in which you'd care to engage on the methods you've found to work best, and not so well, toward this goal. I doubt I'd be alone in finding such information of great potential benefit - and since I never had the benefit of technical mentorship in my own less senior days, I'm rather strongly feeling the lack of good examples on which to model my own efforts.
It can help to know that "paying attention to", "noticing", or "seeing" also has a large component of "ignoring", "filtering", or "pulling attention from."
When I started writing JavaScript I had a hard time noticing missing semi-colons. All of the syntax was new, and so my attention was overwhelmed. Everything was new, surprising, salient. Later, after I'd read thousands of lines of JavaScript, most of it was boring, routine, and unsurprising. It no longer demanded any part of my attention. Lower-level filters had edited it out of my vision. Only the surprising, off, or "wrong" things remained, such as missing semi-colons.
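To make that concrete, here are the two automatic-semicolon-insertion (ASI) pitfalls that bit me most often. The snippet is made up, but the parse behavior is real JavaScript:

```javascript
// Two classic ASI pitfalls, where the missing semicolon silently
// changes the parse instead of erroring out.

try {
  // Intended: assign a string, then iterate over a separate array.
  const greeting = "hello"
  [1, 2, 3].forEach(n => console.log(n))
} catch (e) {
  // Actually parses as: const greeting = "hello"[1, 2, 3].forEach(...)
  // "hello"[1, 2, 3] is "hello"[3] === "l", and "l".forEach doesn't
  // exist, so we land here with a TypeError.
  console.log(e.message)
}

// Intended: return an object literal.
function makeConfig() {
  return        // ASI inserts a semicolon right here,
  {             // leaving an unreachable block statement
    debug: true // in which "debug:" parses as a label.
  }
}
console.log(makeConfig()) // logs "undefined", not an object
```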
This process of becoming familiar with what correct looks like is active learning. Also, just like any other set of knowledge, it is hierarchical. You first see the bark, then the trees, then eventually the forest. Each level takes time.
As an educator or mentor, understanding the normal progression of building this hierarchy is key to helping someone grow. You don't jump right to the forest (scalable architectures) when they are still preoccupied with the bark (basic syntax).
I'd be curious to hear opinions on the value in this context of automatic linting as performed by flycheck, et al. I've heard the argument made that it serves as a crutch which prevents junior devs from learning to "just see syntax errors", but that sounds to me like the same kind of argument about electronic calculators that older folks made when I was young, and I can't say I've ever seen much sense in that one, either. But perhaps my perspective is simply lacking.
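For concreteness, the kind of automatic check I mean is cheap to set up. A minimal sketch (the rule names are real ESLint core rules - a checker which, as I understand it, flycheck can drive for JavaScript - but the setup itself is just illustrative):

```javascript
// .eslintrc.js - a minimal illustrative configuration.
module.exports = {
  rules: {
    // Require explicit semicolons so ASI never decides for you.
    semi: ["error", "always"],
    // Flag statement layouts that ASI is likely to mangle,
    // e.g. a line starting with "[" or "(" after an expression.
    "no-unexpected-multiline": "error",
    // Code after a bare return becomes an error, not a surprise.
    "no-unreachable": "error",
  },
};
```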
Yes, this.
It takes practice.
Advanced chess players presented with an unfamiliar arrangement of pieces on a board can reliably distinguish between actual/plausible mid-game states and semi-random "impossible" positions. There's a progression from rules and mechanics to openings and tactics to strategy and style. Same goes for music, programming, sports... really any skill or body of knowledge I can think of.
This is a really good observation. I've noticed that people who are successful tend to have an uncanny ability to 'see' the right things. And making good decisions is often a function of a person's ability to be aware of what is happening around them.
It's tricky though, because I don't know many techniques for expanding one's field of vision - is it possible to learn to "see the things you cannot see"?
I've always found it interesting to observe how people approach code debugging for a similar reason; can they see, not just look? Quite often people will think they 'know' exactly where the problem resides and fail to find the problem despite visible evidence to the contrary. Specifically crafted 'debugging' tests are a valuable part of my interview approach, even for non-technical candidates.
> It's tricky though, because I don't know many techniques for expanding one's field of vision - is it possible to learn to "see the things you cannot see"?
For me it comes down to a habit of constantly questioning my own assumptions. Or perhaps: list the relevant factors supporting a conclusion, then remove some of them and see what other reasonable conclusions you can reach while actively avoiding the excluded factors.
I think the "trick" (not really a trick) is to learn to "see things for what they are". As simple as that sounds, I'm always surprised at how difficult it seems to be for some people. And how I can easily get caught in that trap if I am not being disciplined enough with my own thoughts / learning.
Especially obvious when that lack of discipline helps shape the foundation of one's thoughts on a topic.
Most art courses start with some kind of training in "seeing". What are the real proportions of something? Can we perceive a face as a collection of light and dark areas rather than letting our feature-decoding brain make its own impressions? If you look at the colour of something in isolation, how does it compare to its perception in a scene? And so on.
Hey there, I'm coming at this from an ML perspective - care to go into more detail? In particular, are you saying that attention is something that should be taught to the network rather than learnt on its own?
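For concreteness, here's roughly what "learnt on its own" looks like in a network - a toy soft-attention sketch where everything (names, sizes, the fixed weight vector) is illustrative rather than taken from any particular paper:

```javascript
// Toy learned soft attention: the network scores its inputs, a
// softmax turns the scores into weights, and the output is a
// weighted sum of the inputs.

function softmax(scores) {
  const max = Math.max(...scores); // subtract max for numerical stability
  const exps = scores.map(s => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Scores come from trainable parameters (faked here as a fixed
// weight vector w). Gradient descent would shape w, and with it
// where the attention goes: the "learnt on its own" case.
function attend(inputs, w) {
  const scores = inputs.map(x =>
    x.reduce((acc, xi, i) => acc + xi * w[i], 0)); // score_j = x_j . w
  const weights = softmax(scores);
  // output_i = sum_j weights[j] * inputs[j][i]
  return inputs[0].map((_, i) =>
    inputs.reduce((acc, x, j) => acc + weights[j] * x[i], 0));
}

const inputs = [[1, 0], [0, 1], [1, 1]];
console.log(attend(inputs, [0.5, -0.2]));
```

The "taught" alternative would supervise the attention weights directly - say, a loss term pushing `weights` toward human fixation data - instead of letting the task loss shape `w`.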
I just read through the research paper[1] and activity in frontoparietal network looks eerily similar to the reticular activating system[2] that heightens alertness and directs attention.
> For example, when you order something new in a restaurant – perhaps anchovy pizza — you should learn whether you like or dislike anchovy pizza, rather than attribute the pleasurable experience to the particular table you’re sitting at. Or when crossing a street, you should pay attention to the speed and direction of oncoming traffic, while the colors of cars can be safely ignored.
So in these two examples it's pretty obvious to most people what we should be paying attention to. But it's worth remembering, perhaps, that the same problem of 'what is relevant here' also occurs a lot in situations where it is not so clear what the relevant factor is. e.g. You have problems with asthma, you change your diet, the asthma gets better - was the change in diet actually the relevant factor?
And even in those two 'obvious' examples it's possible to get this wrong. e.g. You decide you don't like anchovies, but actually this is just because you went to a bad restaurant, the anchovies were much too salty, and another time you try better quality / better prepared / better dosed anchovies, and like them.
I guess my point is that the 'deciding what is relevant' part of this is not trivial, and strategies and mechanisms for achieving it effectively are important.
Attention is relevant to artificial intelligence too. See the http://agi-conf.org/hlai2016/ conference and related research (some of it on YouTube - search for 'AGI-16').
It's also a large component in many mental disorders (schizophrenia, Tourette's, OCD). Attention really comes down to the brain filtering out a mess of stimulation. When that filtering malfunctions we see the mind do very funny things, e.g. repeated thoughts and delusions.
Without having access to the paper, it would be really nice to see what model of attention they're using, and how it relates to the expected-reward calculations they're positing take place for reinforcement learning. I'd also like to see the authors explain whether the model-free or model-based RL controller in the brain is the one operating to update endogenous attention based on expected value.
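For concreteness, here's the generic feature-based story I'd expect it to resemble. I don't have the paper either, so every name and update rule below is my assumption about the standard account, not the authors' actual model:

```javascript
// A generic model-free sketch of attention-weighted feature learning.

// Value estimates for candidate feature dimensions of a stimulus.
const values = { color: 0, shape: 0, texture: 0 };

function softmax(obj, temperature = 1) {
  const keys = Object.keys(obj);
  const exps = keys.map(k => Math.exp(obj[k] / temperature));
  const sum = exps.reduce((a, b) => a + b, 0);
  return Object.fromEntries(keys.map((k, i) => [k, exps[i] / sum]));
}

// One trial: observe the chosen stimulus's features and the reward.
function step(chosenFeatures, reward, alpha = 0.1) {
  // Endogenous attention derived from current value estimates.
  const attention = softmax(values);
  // Attended features dominate the reward prediction...
  const expected = Object.keys(values).reduce(
    (acc, k) => acc + attention[k] * values[k] * (chosenFeatures[k] ?? 0), 0);
  const rpe = reward - expected; // reward prediction error
  // ...and absorb most of the credit for that error, which in turn
  // sharpens attention on the next trial: the loop in question.
  for (const k of Object.keys(values)) {
    values[k] += alpha * attention[k] * rpe * (chosenFeatures[k] ?? 0);
  }
}

step({ color: 1, shape: 1 }, 1); // red, round, rewarded; texture absent
```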
I think your brain is optimized to prevent you from losing things (money, items, relationships, etc.). For example, if the hot water is left on in your house, most adults would rush over and shut it off.
Neither of those sounds like an optimization to prevent loss, so what is your point? That even if you are wrong, you can easily make up alternative theories?